Killer Robots Will Save Human Lives

Inside the Future of Autonomous Warfare

By Jeremy Rabkin and John Yoo | September 28, 2017

In the movie The Terminator and its many sequels, a self-aware computer system wages genocide on mankind. In this dystopian future, Skynet first triggers a global nuclear war and then launches hunter-killers to prowl for the remaining humans. Drones eerily similar to today’s Predators and Reapers launch missiles from the skies. Advanced autonomous tanks patrol downtown Los Angeles and killer motorcycles cruise California’s highways. Robot soldiers resembling human skeletons launch ground assaults and assassinate human leaders. Even the seas contain automated ships and torpedoes. The movies focus on Arnold Schwarzenegger, both before and after his stay in the California governor’s mansion, who plays a mechanical assassin designed to look human.

Killer robots inhabit the world of science fiction no more. In its Middle Eastern wars, the United States depends heavily on unmanned aerial vehicles (UAVs), known colloquially as drones. At low cost, a Predator or Reaper UAV can hover over hostile territory for hours, conduct surveillance, and fire a guided missile on remote command. F-35 stealth fighters, by contrast, can stay on station for only a few hours, depend on ground personnel for live targeting information, and risk the life of the pilot. Not only can the U.S. Air Force purchase twenty Predators for the cost of a single F-35, but it can also operate them at a far lower cost per hour and keep them on station for far longer, without risking the lives of pilots who may be captured or killed.

Military strategists have judged the drone to represent a revolution in military affairs. UAVs combine real-time intelligence, precision targeting, and robotic endurance to project power over territory denied to ground forces. They target individual members of the enemy with much less destruction and harm to civilians than conventional bombing or artillery attacks. According to some estimates, the U.S. launched 389 drone strikes in Pakistan against terrorist leaders between 2004 and 2015, only eleven of them before January 2008. From 2006 to 2015, these strikes killed 2,789 members of the Taliban, al-Qaeda, and their allies, and 158 civilians. In World War II or Vietnam, air forces would have dropped thousands of bombs to eliminate key enemy leaders or command and control facilities that a few drone strikes can now destroy.

Autonomous warriors may reduce, rather than increase, errors in the use of force


And this is just the beginning. Military officials have designs on the drawing boards for robots in the air, at sea, and on land. Civilian technology gives a hint of the future. Google’s self-driving system has gone more than half a million miles without an accident. Humans in the United States, by contrast, drive about 3 trillion miles a year at a cost of 32,000 accidental deaths. Google’s car may still be learning the subtleties of driving and its millions of variables, but it does not suffer from fatigue, distraction, or poor judgment. Militaries need only marry the technology of self-driving cars to the firing systems of drones to deploy robot tanks far more cheaply than an M-1 Abrams. It is no coincidence that the self-driving cars of today had their start in competitions sponsored by the Defense Advanced Research Projects Agency, known as DARPA.

Military advances will occur in other realms as well. A small, unmanned vessel can become a seaborne IED in short order. Existing Unmanned Surface Vehicles (USVs) can carry out dangerous missions, such as reconnaissance or minesweeping. Autonomous submarines can sail faster, deeper, and quieter without crew compartments, large propulsion systems, or narrow depth and danger limits. The U.S. Navy has shown that unmanned vessels can deploy more weapons for longer periods and with greater accuracy than their aerial counterparts. Smaller, cheaper vessels can also deploy in swarms to overcome and destroy larger vessels.

An unmanned 11-meter rigid hulled inflatable boat (RHIB) from Naval Surface Warfare Center Carderock operates autonomously during an Office of Naval Research (ONR) sponsored Swarm demonstration held on the James River in Newport News, Va. (U.S. Navy photo by John F. Williams/Released)

Even without these coming advances, existing robots already provide a peek at the battlefield of the future. Satellite imagery, sophisticated electronic surveillance, drones, and precision-guided munitions allow American intelligence and military forces to strike enemy targets virtually anywhere in the world at any time. Robotic weapons can reach beyond the traditional battlefield to strike deep into enemy territory, with surgical precision, without risking the lives of their operators. Once U.S. intelligence, for example, locates a terrorist leader in a safe house or moving in a car, controllers in Virginia can order a drone in the area to strike in hours, if not minutes. Although the first such strike occurred as recently as 2002, when the CIA killed Abu Ali al-Harithi and five other al-Qaeda members riding in a car in Yemen, drones have assumed a paramount role in the U.S. war against terrorism. U.S. drones today strike enemy targets throughout Afghanistan, Iraq, Pakistan, Syria, Libya, and Yemen. These capabilities allow the United States to match the unconventional organization and tactics of terrorist groups without the extensive harm to civilians that might arise from pre-2002 bombing.

New warfighting technology naturally improves the effectiveness of military force – otherwise, nations would not adopt it. Crossbows made archery more deadly against armored knights; artillery allowed more destructive bombardment from a greater distance; modern rifles gave draftees the ability to kill at low cost with high accuracy. Robots’ falling costs, flexibility, and precision, together with the reduced risk of harm to combatants and non-combatants alike, make them an irresistible choice for the generals of today and tomorrow.

The humanitarian goals of the rules of warfare should encourage the broader use of unmanned weapons


Prominent government and academic critics argue that robots pose a severe threat to the laws of war, because they encourage the use of force, enable strikes on protected targets, and threaten a loss of human control. Security analyst Lawrence Korb claims, for example, that robots “will make people think, ‘Gee, warfare is easy.’” He worries that leaders will hold the impression that they can win a war with just “three men and a satellite phone.” Brookings Institution fellow P.W. Singer agrees, “As unmanned systems become more prevalent, we’ll become more likely to use force.”


Two Northrop Grumman MQ-4C Triton unmanned aerial vehicles are seen on the tarmac at a Northrop Grumman test facility in Palmdale, Calif. (U.S. Navy photo courtesy of Northrop Grumman by Chad Slattery/Released)


We believe that these alarmist critiques mistake the capabilities of robots and the purpose of the laws of war. Contemporary military robots, popularly known as drones or more technically as unmanned aerial vehicles, should pose little difficulty for the law or ethics of war. Remote operation does not automatically transform a weapon into an unjustified method of warfare. UAVs, for example, do not differ significantly from long-range artillery, aerial bombing, and ballistic missiles. All of these weapons inflict damage from a remote distance. What remains important is whether drones can effectively attack enemy targets while reducing overall harm to combatants and civilians alike. If robots can aim force with better accuracy so that battlefield casualties decline and can make sharper distinctions between combatants and civilians, we should welcome them, not fear them. The humanitarian goals of the rules of warfare should encourage the broader use of unmanned weapons, not seek to end it.

Critics claim that independent robots pose a more dire threat because they eliminate human decision-making from the “kill chain.” We believe these concerns miss the mark. Nations already use weapons that do not require a direct human decision to pull the trigger. Land mines, for example, kill only based on proximity to the explosive, not by any human identification of a specific target. Cruise missiles make decisions based on programmed flight patterns and can limit themselves to specific targets. Autonomous warriors may reduce, rather than increase, errors in the use of force, not just compared to current technologies but also to human soldiers. By taking humans out of the firing decision, independent robots controlled by computer programming or even artificial intelligence could elevate the humanity of combat. Rather than foreclose research and development of these weapons, as many urge, nations should weigh the improvements in accuracy and the decline in targeting mistakes against any risk that robots will run amok.

Jeremy Rabkin is Professor of Law at George Mason University and was, for over two decades, a professor in the Department of Government at Cornell University. Professor Rabkin serves on the Board of Directors of the U.S. Institute of Peace, the Board of Academic Advisers of the American Enterprise Institute, and the Board of Directors of the Center for Individual Rights.

John Yoo is Emanuel S. Heller Professor of Law at the University of California, Berkeley and a visiting scholar at the American Enterprise Institute. He served in the Bush administration Justice Department.