Should Robots Have A License to Kill?

Way back in 1942, science fiction author Isaac Asimov proposed his famous Three Laws of Robotics in a short story entitled “Runaround”:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Despite the enduring influence of these tenets, there’s nonetheless a push underway to give robots what’s been termed “lethal autonomy” – that is, the ability to kill without direct human involvement. Killing by algorithm. That’s no longer science fiction. It is not only technologically possible but increasingly likely to occur, if not here, then overseas. For some, the advantages of automation in human conflict are just too great a temptation. That’s a fundamental shift that could very well change our geopolitical landscape.

Whether you are for or against combat drones, they’re here to stay. In fact, their use will only expand. You might be disturbed by this news, but that doesn’t change the reality. Even if the United States and its allies stopped using and building drones, Pandora’s Box has already been opened. Fifty other nations are now developing combat drones of their own. That’s what happens with useful technologies – they spread. Haven’t heard of the South African Bateleur drone or the Indian Rustom? What about the Chinese Dark Blade? No? How about the Turkish Anka, or the Pakistani Burraq? The Argentinian Nostromo Yarará or the Russian Zala? There’s a Cambrian explosion of drone species underway.

And yet these drones are basically remotely piloted aircraft – with a human being still calling the kill shots (albeit at a considerable distance). But a confluence of events has created the perfect conditions for the incubation and rapid spread of autonomous robotic weapons throughout the world. The same global economy that expanded high-tech manufacturing to the cheaper corners of the globe has also inadvertently enabled the manufacture of combat drones in exotic locales not especially known for democratic principles, rule of law, or human rights. Likewise, cyber espionage – that invisible yet pervasive activity spiriting away terabytes of proprietary data to parts unknown each month – is handmaiden to the drone arms race. Advanced tech is fast leaking out of the West.

But what’s the compelling force pushing more autonomous decision-making onto combat drones themselves – and away from human beings? The primary reason is that any reasonably sophisticated adversary can jam the electromagnetic spectrum on the battlefield, severing the radio connection with your drone, or they can “spoof” GPS signals to make your drone think it’s somewhere it’s not. Iran used one or both of these tactics in its capture of a U.S. RQ-170 drone late last year. If instead a combat drone is pre-programmed to carry out its mission and ignores instructions from the outside while it does so, then it becomes harder to divert or compromise.

It also becomes harder to trace back to its owner.

And that’s another major reason why autonomous weapons are spreading fast. In a world where cheap, generic drones proliferate, possibly with pilfered designs, who’s to say who sent a killer drone? Its components were made in China? So what? It could have been assembled elsewhere with dual-use, off-the-shelf components. Attribution of an attack in wartime would suddenly become a major challenge, meaning powerful countries would have difficulty directing their firepower against an attacker – and this anonymity could make violence an appealing policy option to everyone from rogue states, to narco-traffickers, to private companies, to major nations of the world. It would also mean war could be conducted by robots without buy-in from society at large – a dangerous precedent corrosive to the foundation of popular government.

What can we do about this? First, we must establish an international legal framework for the use and development of combat robotics, and we need to do so before these machines come into widespread use. If that sounds idealistic, remember that we did the same for nuclear, biological, and chemical weapons – and as daunting as that seemed and as imperfect as those agreements are, we’re still here. Second, the United States should lead the way by prohibiting the development of autonomous machines that kill. No matter what your view of the ethics of robotic killing, such technology has the capacity to centralize power in very few (possibly unseen) hands – a model antithetical to representative democracy and the separation of powers.

Does that mean we abandon drone development? No, far from it. In fact, we’ll need to develop the very best autonomous drones to destroy other drones, lest our elected leaders and public voices fall prey to anonymous robotic attack from less humane adversaries in coming years.

A former systems consultant to Fortune 1000 companies, Daniel Suarez has designed and developed mission-critical software for the defense, finance, and entertainment industries. An avid gamer and technologist, he now writes fiction; his latest novel is “Kill Decision,” published by Dutton.