The Future of War

How will Artificial Intelligence change war? Hollywood has it wrong. It won’t be Terminator, robots with sentience, that transform warfare. It will be much simpler technologies that are, depending on your perspective, at best or at worst less than a decade away.

Take a Predator drone. This is a semi-autonomous weapon. It can fly itself much of the time. However, there is still a soldier, typically in a container in Nevada, in overall control. And importantly, it is still a soldier who makes the final life-or-death decision to fire one of its Hellfire missiles.

But it is a small technical step to replace that soldier with a computer.

Indeed, it is technically possible today. And once we build such simple autonomous weapons, there will be an arms race to develop more and more sophisticated versions. Indeed, we can see the beginnings of this arms race. In every theatre of war, in the air, on land, on and under the sea, there are prototype autonomous weapons under development.

This will be a terrible development in warfare. But it is not inevitable. In fact, we get to choose whether we go down this particular road. Since 2015, I and thousands of my colleagues, fellow researchers in Artificial Intelligence and Robotics, have been warning of these dangerous developments. We have been joined by founders of AI and Robotics companies, Nobel Peace Laureates, church leaders, and many members of the public.

India has played an important role in the discussions about what to do about such autonomous weapons. In 2018, Amandeep Singh Gill, India’s Ambassador and Permanent Representative to the UN Conference on Disarmament in Geneva, chaired discussions at the United Nations on this topic. 26 nations have so far called for a pre-emptive ban, with Pakistan being the first to do so. Most recently, the European Parliament voted in support of the idea.

Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints. One programmer can command hundreds of autonomous weapons. This will industrialise warfare. Autonomous weapons will greatly increase strategic options. They will take humans out of harm's way, opening up the opportunity to take on the riskiest of missions. You could call it War 4.0.

There are many reasons, however, why the military’s dream of lethal autonomous weapons will turn into a nightmare. First and foremost, there is a strong moral argument against killer robots. We give up an essential part of our humanity if we hand over the decision of whether someone should live to a machine. Machines have no emotions, compassion or empathy. Are machines then fit to decide who lives and who dies?

Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. In my view, one of the strongest is that they will revolutionise warfare. Autonomous weapons will be weapons of immense destruction. Previously, if you wanted to do harm, you had to have an army of soldiers to wage war. You had to persuade this army to follow your orders. You had to train them, feed them, and pay them. Now just one programmer could control hundreds of weapons.

Lethal autonomous weapons are more troubling, in some respects, than nuclear weapons. To build a nuclear bomb requires technical sophistication. You need the resources of a nation state, and access to fissile material. You need some skilled physicists and engineers. Nuclear weapons have not, as a result, proliferated greatly. Autonomous weapons require none of this.

Autonomous weapons will be perfect weapons of terror. Can you imagine how terrifying it will be to be chased by a swarm of autonomous drones? They will fall into the hands of terrorists and rogue states who will have no qualms about turning them on civilians. They will be an ideal weapon with which to suppress a civilian population. Unlike humans, they will not hesitate to commit atrocities, even genocide.

We stand at a crossroads on this issue. I believe it needs to be seen as morally unacceptable for machines to decide who lives and who dies. In this way, we may be able to save ourselves and our children from this terrible future.

(Toby Walsh is Professor of Artificial Intelligence at the University of New South Wales in Sydney, Australia. He is a Fellow of the Australian Academy of Science and author of the recent book "2062: The World that AI Made", which explores the impact AI will have on society, including the impact on war. The author is in India as part of Australia Fest, a six-month-long celebration of Australian culture and creativity.)