Google Renounces AI for Weapons, But Will Still Sell to Military

Google on Thursday said it would not allow its artificial intelligence programmes to be used to develop weapons or for surveillance efforts that violate international law.

While Google is rejecting the use of its AI for weapons, "we will continue our work with governments and the military in many other areas", Google CEO Sundar Pichai wrote in a blog post.

But can Google realistically stick to its now-public principles?

Google's Project Maven with the US Defence Department came under fire from company employees concerned about the direction it was taking the company.

Last week, cloud chief Diane Greene said Google would not renew the Maven deal when it expires next year, an unusual withdrawal from a business arrangement.

The document, which also enshrines "relevant explanations" of how AI systems work, lays the groundwork for the rollout of Duplex, a human-sounding digital concierge that was shown off booking appointments with human receptionists at a Google developers conference in May. "These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions," Pichai wrote.

Peter Highnam, deputy director of the Defense Advanced Research Projects Agency (the Pentagon agency that did not handle Project Maven but is credited with helping invent the internet), said there are "hundreds if not thousands of schools and companies that bid aggressively" on DARPA's research programmes in technologies such as AI.


Among the applications Google says it will not pursue are technologies whose purpose contravenes widely accepted principles of international law and human rights. However, the AI principles do not make clear whether Google would be precluded from working on a project like Maven, which promised vast surveillance capabilities to the military but stopped short of enabling algorithmic drone strikes.

Google said it will not pursue development of AI that could be used to violate international law.

No Google AI technology will ever be used as a weapon or for surveillance, the policy states. In addition, the company will refuse to develop any AI projects that "cause or are likely to cause overall harm". Asaro praised Google's ethical principles for their commitment to building socially beneficial AI, avoiding bias, and building in privacy and accountability. "For example, we will continue to work with government organizations on cybersecurity, productivity tools, healthcare, and other forms of cloud initiatives," Pichai wrote.

Google promotes the benefits of artificial intelligence for tasks like early diagnosis of diseases and the reduction of spam in email. "In the absence of positive actions, such as publicly supporting an international ban on autonomous weapons, Google will have to offer more public transparency as to the systems they build," Asaro said.
