Weeks after facing both internal and external blowback over its contract to supply the Pentagon with AI technology for drone video analysis, Google on Thursday published a set of principles that explicitly states it will not design or deploy AI for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."

Google committed to seven principles to guide its development of AI applications, and it laid out four specific areas for which it will not develop AI. In addition to weaponry, Google said it will not design or deploy AI for:

Technologies that cause or are likely to cause harm.

Technologies that gather or use information for surveillance violating internationally accepted norms.

Technologies whose purpose contravenes widely accepted principles of international law and human rights.

While Google is rejecting the use of its AI for weapons, "we will continue our work with governments and the military in many other areas," Google CEO Sundar Pichai wrote in a blog post. "These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe."

In his blog post, Pichai said the seven principles laid out Thursday "are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions."

The seven principles state that AI should: be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles.

While Google's work with the Pentagon came under scrutiny, other major companies are also facing questions about the ethical principles guiding their AI development: Amazon, for instance, has been called out by the ACLU for providing facial recognition tools to law enforcement.
