Google is developing a set of ethical guidelines for military AI partnerships

The company won't develop AI for weaponry

Recap: Back in March, we reported that Google was working with the US Department of Defense to develop AI for analyzing drone footage. Thousands of Google employees later expressed their distaste for the project via a petition, and roughly a dozen resigned. In response, Google now says it's developing ethical guidelines to steer its future military work.

Google sparked a bit of controversy in March when the public discovered the company was working with the Pentagon on a military AI project dubbed "Project Maven."

Google's role in the project was to develop AI that could analyze drone footage. As you might imagine, many of Google's employees weren't too keen on the concept of developing tech for military use.

As such, in April, over 3,100 employees signed a petition demanding that Google withdraw from the Pentagon partnership. In response, the company claimed the results of its work with the Department of Defense would be used for purely non-offensive purposes.

Google's explanation wasn't enough for some employees, though. In May, roughly a dozen of them resigned over Project Maven.

It seems the resignations may have been at least mildly effective. According to the New York Times, Google is now working to develop ethical guidelines that will dictate how the company makes decisions about defense contracts in the future.

Specifically, Google reportedly told the Times that its new guidelines would "[preclude] the use of A.I. in weaponry." Whether these guidelines will be enough to soothe internal and external fears about Google's future military work remains to be seen.