A former Googler leading the charge against AI weapons says her time at Google taught her that even 'nice' people can make bad moral decisions

Lilly Irani, an assistant professor and former Google employee, has co-authored a letter signed by more than 200 academics and researchers demanding that Google pull out of a controversial military program and join in calling for a ban on autonomous weapons.

Google's motto may be "Don't be evil," but Irani says Google is staffed by humans and humans don't always make the right call. She says she saw that herself during her time at Google.

Irani and the other signers of the letter want a larger debate on autonomous weapons.

Lilly Irani wasn't altogether surprised to see Google, her former employer, caught up in a controversy over management's decision to participate in Project Maven, a military program that critics say could help improve the accuracy of drone missile strikes.

An early Google employee who spent four years at the company before leaving for grad school in 2007, Irani says she knows that good people work at Google: people interested in making the world a better place.

But she also knows that good people don't always make the right call. Even all those years ago, during Irani's stint as a software-product designer at Google, she saw how financial pressures could bear down on managers just like they do at other profit-making companies.

And like elsewhere, she said these forces sometimes lead people to make questionable moral or ethical decisions.

She recalled that while working on Google's search-history feature, one of her project managers told her: "Privacy is kind of like boiling a frog. If you go too far or too fast, people will freak out. But if you do it little by little, people will slowly get used to what you're doing."

Irani, who is now an assistant professor at the University of California, San Diego, said that at the time she didn't feel she could challenge the assertion. She believes she was one of many tech workers back then who wanted to discuss the moral responsibilities facing big tech companies but didn't know how. That's one of the reasons she's now speaking out about Project Maven.

She is one of the authors of a letter published Monday and signed by at least 260 researchers, academics and scholars that demands Google pull out of Project Maven and commit to never developing military technologies. The signers, including Noam Chomsky, the MIT scientist and political activist, also want Google to join them in calling for a ban on autonomous weapons. A Google spokeswoman did not respond to an interview request.

The fact that they are so nice and well meaning is an important sign of danger

And in April, The New York Times reported that a petition had circulated within Google demanding management end involvement with Project Maven and commit to never developing weapons. More than 3,000 Google workers signed, according to the Times. On Monday, Gizmodo reported that a dozen Google employees have decided to resign in protest over the issue.

Project Maven is an effort to help the Pentagon use artificial intelligence to interpret surveillance video. That sounds harmless enough, but critics say the technology could help improve the accuracy of missile strikes.

"With Project Maven, Google becomes implicated in the questionable practice of targeted killings," the letter reads. "These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage."

If Google's leaders didn't know it before, they are now fully aware that AI spooks many people — and the prospect of combining AI with weapons is especially controversial. The researchers and academics said in their letter that if Google's managers assist the Pentagon with AI, the company is helping to move the world closer to "authorizing autonomous drones to kill automatically, without human supervision or meaningful human control."

Killer robots sound scary, but is the statement true? Google has said in the past that the company's contribution to Maven is not offensive in nature and won't be used to kill people.

And what if Google doesn't participate? One of the company's rivals will likely do the work and pocket the fees instead. In the end, signing petitions and tendering resignations from Google probably won't stop the military from obtaining AI technology.

Irani said that Google should at least listen to its workers, but regardless, she would like to see a much larger debate take place across society about autonomous weapons and AI. She doesn't think Google or the military should have the final word on AI weapons.

According to Irani, the fact that a company like Google, with its "Don't be evil" motto, can find itself linked to a program associated with Hellfire missiles, Predator drones and "surgical strikes" is an indication that something is wrong with the current state of affairs.

"Google is full of super nice, very intelligent people, many of whom generally want the best for the world," Irani said. "But even at Google we get a situation where our data might be integrated into the fabric of unaccountable killing. The fact that they are so nice and well meaning and this activity is ongoing is an important sign of danger."