Google Promises Its A.I. Will Not Be Used for Weapons


Sundar Pichai, Google’s chief executive, at its annual Google I/O developer conference last month in Mountain View, Calif. On Thursday he laid out objectives for the company’s use of A.I. technology. Credit: Jim Wilson/The New York Times

SAN FRANCISCO — Google, reeling from an employee protest over the use of artificial intelligence for military purposes, said Thursday that it would not use A.I. for weapons or for surveillance that violates human rights. But it will continue to work with governments and the military.

The new rules were part of a set of principles Google unveiled relating to the use of artificial intelligence. In a company blog post, Sundar Pichai, the chief executive, laid out seven objectives for its A.I. technology, including “avoid creating or reinforcing unfair bias” and “be socially beneficial.”

Google also detailed applications of the technology that the company will not pursue, including A.I. for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and “technologies that gather or use information for surveillance violating internationally accepted norms of human rights.”

But Google said it would continue to work with governments and the military on A.I. in areas including cybersecurity, training and military recruitment.

“We recognize that such powerful technology raises equally powerful questions about its use. How A.I. is developed and used will have a significant impact on society for many years to come,” Mr. Pichai wrote.

Concern over the potential uses of artificial intelligence bubbled over at Google when the company secured a contract to work on the Pentagon’s Project Maven program, which uses A.I. to interpret video images and could be used to improve the targeting of drone strikes.

More than 4,000 Google employees signed a petition protesting the contract, and a handful of employees resigned. In response, Google said it would not seek to renew the Maven contract when it expired next year and pledged to draft a set of guidelines for appropriate uses of A.I.

Mr. Pichai did not address the Maven program or the pressure from employees. It’s not clear whether these guidelines would have precluded Google from pursuing the Maven contract, since the company has insisted repeatedly that its work for the Pentagon was not for “offensive purposes.”

Google has bet its future on artificial intelligence, and company executives believe the technology could have an impact comparable to the development of the internet.

Google promotes the benefits of artificial intelligence for tasks like early diagnosis of diseases and the reduction of spam in email. But it has also experienced some of the perils associated with A.I., including YouTube recommendations pushing users toward extremist videos and Google Photos image-recognition software categorizing black people as gorillas.

While most of Google’s A.I. guidelines are unsurprising for a company that prides itself on altruistic goals, the set also included a noteworthy rule about how its technology could be shared outside the company.

“We will reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles,” the company said.

Like most of the top corporate A.I. labs, which are laden with former and current academics, Google openly publishes much of its A.I. research. That means others can recreate and reuse many of its methods and ideas. But Google is joining other labs in saying it may hold back certain research if it believes others will misuse it.

DeepMind, a top A.I. lab owned by Google’s parent company, Alphabet, is considering whether it should refrain from publishing certain research because it may be dangerous. OpenAI, a lab founded by the Tesla chief executive Elon Musk and others, recently released a new charter indicating it could do much the same — even though it was founded on the principle that it would openly share all its research.