Do We Need To Ban Killer AI?

Elon Musk, Stephen Hawking, and a host of other science and tech luminaries today added their names to a document from the Future of Life Institute called, "Autonomous Weapons: an Open Letter from AI & Robotics Researchers." The text of the letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, calls for stopping a weaponized AI arms race in its tracks before countries and companies are forced to compete with each other to build deadlier and deadlier autonomous weapons.


SpaceX's Musk, in particular, makes occasional headlines warning humanity against building weaponized AI, saying that it's an existential threat with the very real possibility of wiping humanity off the face of the planet. The letter argues that most AI scientists would support international agreements to restrict these technologies, just as scientists in other fields supported bans on chemical, biological, and nuclear weapons that possess the power to kill so many.


We already have weaponized drones, of course, flying over war zones and dropping deadly air strikes. The difference is that humans still control the drones, and humans make the decisions about killing. Robots with some level of autonomy are already joining our troops on the battlefield, but they are not deciding matters of life and death.

But not every scientist thinks AI should be banned from having kill orders. We asked Ronald Arkin, Director of the Mobile Robot Laboratory at Georgia Tech, about the letter, because he's spent years researching what he calls the "ethical governor"—a system of protocols that would guide artificial intelligence in control of weapons through a decision-making process that dictates when it's okay to strike, and when it's not (say, a medium-value target is in view, but the AI identifies civilians nearby and decides it's too risky).


"As I have always said, I am not averse to a ban," he says, "but I believe we can do better by continuing to research ways into reducing noncombatant casualties with technology. The status quo with respect to non-combatant atrocities and casualties is utterly and wholly unacceptable and refusing to look for ways to have technology assist in preventing these atrocities and mistakes, leaves civilians in the lurch."

Here we get into tricky territory. Today's open letter says, "There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people." Truth. But what about machines making decisions about how to use the tools for killing people that we already have? Who's to say they'd be more evil than humans are?

As Arkin ends his note to PM: "It does seem odd to me that people who fear that we can make machines ultimately exceed human intelligence balk at the fact that we can make them possibly more moral than we are, given the relative low bar humans have in adhering to ethical behavior."

Indeed, perhaps we could build weapon-wielding AI that's a reflection of our better angels, programmed with a strict moral code and unwilling to make ethical compromises under stress or fire, as human commanders and soldiers so often must. That would seem to fit the open letter's description of a way to use AI to make the battlefield safer.

It also means that the machine is the moral reflection of its creator. In that sense, it's easy to understand Musk and company's worries about the proliferation of killer AI. Maybe we could make it moral. But not everybody is going to want to.

[*Editor's note: As of Monday, July 27, Stephen Hawking is in the middle of a Reddit AMA. Questions were submitted today; Hawking will answer them on Tuesday. Expect to hear the esteemed physicist's take on the danger of AI. The current highest-voted question asks about the "Terminator conversation"—whether the media over-inflates the danger of machines wiping out humanity.]