Robots and the morality of a push-button war

The digital age has brought many changes to all spheres of life – not least in warfare.

There are at this very moment pilotless aeroplanes patrolling the skies of the Near East, hanging around until their targets present themselves, then blasting them to pieces with air-to-ground ordnance.

How long is it, we must ask ourselves, until these weaponised robots are capable of total autonomy? And what are the moral problems posed by allowing them to utilise it?

Robots, in the sense of remotely controlled vehicles, have been used since World War II, and many such platforms exist today. Most perform dangerous, non-combat roles: mine clearing, shifting obstacles, defusing bombs and reconnaissance. In some senses, many modern munitions are robots too. A cruise missile, for example, flies itself to its target in order to explode there. But robots as depicted in science fiction, mobile machines capable of performing many tasks with no human input, do not exist. Yet.

Besides aerial drones, other weapon-bearing machines have been deployed on the world’s battlefields. In 2007, for example, the US Army field-tested SWORDS units, tracked robots carrying weapons, in Iraq.

Autonomy is creeping in too. General Atomics’ Reaper drone, used in many of the USAF’s strikes against Taliban and al-Qaeda targets in Pakistan’s tribal regions, can fly on its own. The Dutch Goalkeeper, the Korean SGR-A1 and the Israeli Guardium, all sentry bots, can act without human input.

Robots are still far less versatile than human beings, however. Guarding something is a relatively well-defined task – if someone is in this area then it is permissible to shoot them. The problem for military scientists (and for their civilian counterparts, for that matter) is getting robotic systems to distinguish between, well, everything. This is of enormous importance in warfare. Robots do not think. They cannot readily tell the difference between a civilian and a soldier.
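To see why guarding is tractable for a machine while battlefield judgement is not, the sentry's rule can be written out in a few lines. This is a purely illustrative sketch – the zone, coordinates and function names are hypothetical, not any real system's logic – but it shows that the sentry's decision stops at *where*, never reaching the hard question of *who*:

```python
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

# An illustrative rectangular restricted zone: x_min, y_min, x_max, y_max.
ZONE = (0.0, 0.0, 100.0, 100.0)

def in_restricted_zone(p: Position) -> bool:
    """The easy part: a crisp geometric test."""
    x_min, y_min, x_max, y_max = ZONE
    return x_min <= p.x <= x_max and y_min <= p.y <= y_max

def sentry_decision(p: Position) -> str:
    # "If someone is in this area then it is permissible to shoot them."
    if in_restricted_zone(p):
        return "engage"
    return "ignore"

# The hard part has no such crisp rule: deciding whether the intruder
# is a soldier or a civilian is exactly the open-ended classification
# problem that robots cannot yet solve.

print(sentry_decision(Position(50, 50)))   # inside the zone
print(sentry_decision(Position(150, 50)))  # outside the zone
```

The contrast is the point: the "engage" branch hangs on a two-line geometric test, while the civilian-or-soldier question would require the machine to understand, in effect, everything.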

Unsurprisingly, human beings are loath to give machines such power. There’s no difference between a cruise missile and a bullet in terms of self-determination – both are missiles cast by human hand at a target. But the SWORDS, controlled by people though they were, were never given leave to open fire. When a Reaper lets loose a missile, there’s (usually) a pilot somewhere holding a joystick. All but the Guardium in our list of sentries above are fixed installations, and the Guardium carries only non-lethal weapons.

How long this state of affairs will persist is another matter. The idea of a robot soldier, whose remains can be brought home in a crate rather than a body bag, is very attractive to generals and politicians alike. Although our innate reluctance to put such deadly force fully into mechanical hands might delay the advent of Terminator-like warrior machines, it will not halt it, nor will the tricky science of it all. War and technological advances go hand in hand, and the multiple technical challenges that have kept robots from reality are being solved one after another.

Pitting non-living machines against live targets seems a gross violation of the rules of engagement, for should war not carry the tacit understanding that both sides are at risk? Any nation that can deploy an army of robot soldiers might think itself invincible, or be willing to take risks with its troops that it would not countenance were its ‘men’ of flesh and blood.

No doubt the advent of true robot soldiers will be couched in the usual euphemisms of deterrent and protection, but there is a risk that such weapons will make war more, not less, likely. Already, the Predator and Reaper drones prowling the air over Afghanistan have led to the reacceptance of assassination as a legitimate tool of war. What other moral slippage might more advanced robotic weapons herald?

As technology accelerates and our artefacts become better thinkers, these are questions we might soon have to contemplate. Something tells me we had better have the answers ready.