Killer Robots: New Reasons to Worry About Ethics

In war, it makes sense to create greater distance between the soldier and harm’s way. Think of the evolution from the cannons and rifles of old to today’s cruise missiles and drones. At the same time, humans are “off-loading” more and more of their decisions to computers. Together, these trends point toward a future with fully autonomous weapons: robotic systems that can select and engage targets—even humans—on their own, without human oversight.

But is it ethical to deploy such “killer robots”? In light of the recent stunning leaks about America’s drone program, there are already deep worries about robotic weapons. In theory, increasingly autonomous weapons could be nimbler and more precise than drones currently in use. But do they raise new moral dilemmas?

The academic community, including military ethicists and moral philosophers, has rapidly turned its attention to autonomous weapons—a debate that has been simmering for the last 10 years or so. I had a chance to sit down with three moral philosophers to discuss the ongoing moral controversy over autonomous weapons and the direction these arguments have taken.

Duncan Purves is an assistant professor and faculty fellow at New York University. Ryan Jenkins is an assistant professor of philosophy at California Polytechnic State University, San Luis Obispo. Bradley J. Strawser is an assistant professor of philosophy at the Naval Postgraduate School and a research associate at the Ethics and Law of Armed Conflict Center at Oxford University. All three have published independently on military ethics.

A recent paper of theirs offered two new arguments against autonomous weapons. Since deciphering academic papers can be challenging, this interview with the authors draws out the main ideas to help move the public discussion forward.

Q: What are autonomous weapons?

A: We’re talking about weapons that can make decisions for themselves, specifically lethal or potentially lethal decisions. So, imagine a weapon—say, a flying drone—that looks at a human being and says, “That’s a human,” but then goes on to determine that they’re not only a human, but a soldier, and a soldier who’s part of the ongoing hostilities, and is a legitimate target of attack. Then, most troublingly, that robot decides after all of that to use force—potentially lethal force—against that human.

The most important thing to keep in mind here is that there is no “human in the loop,” no human being with their finger on the trigger. Once these machines are deployed, it’s up to them whom they target and engage.

Q: That’s scary stuff. How close are we to having weapons like that?

A: Well, there are obviously a lot of technological hurdles to having a machine like that. We know machines can make decisions thousands of times more quickly than humans can, and we know the technology is advancing rapidly. This is one of the reasons why the militaries of the world are pursuing increased levels of autonomy.

We already have several weapons that can execute some functions autonomously. For example, some cruise missiles can plot their own path to a target, or even determine which payload to detonate depending on the kind of target they detect in the area.

There are clearly a lot of advantages to increased autonomy: faster information processing, coping with the “increased tempo of battle,” and, most importantly, keeping our own soldiers out of harm’s way.

We’re confident that if these weapons are developed, then, absent an international ban, they will see the battlefield. We say in our paper that the militaries of the world will find their advantages irresistible. So, there is a very real race underway between the advancing technology, on the one hand, and, on the other, an international coalition of scholars and lawyers pushing for a ban.

Q: What arguments have been offered against killer robots so far?

A: Some think that no matter how far our technology progresses, robots will never be able to fulfill their duties as soldiers, that they’ll never be able to truly deliberate and appreciate the weight of the decision to take a human life. This is one popular argument against autonomous weapons: robots cannot, in principle, deliberate and feel the force of their decision. If they can’t do that, then they shouldn’t be deployed.

Another popular argument—perhaps the most famous—is that if a robot should do something really terrible, there would be no one to hold responsible, no one to punish or court-martial. This lack of accountability—an accountability or responsibility gap—is disrespectful to our enemy and to the rules of war, since it amounts to going to war with a disregard for international law. It is tantamount, you might think, to pledging beforehand not to prosecute any of your soldiers who break the law. It’s that bad. And since that would be unconscionable, so too would be deploying killer robots.

Q: What’s wrong with these arguments?

A: Well, in the paper we call these arguments “contingent”: They only happen to be compelling because of the current state of technology. Imagine an autonomous weapon that is perfect: suppose it never makes mistakes, it always kills the right person, it always inflicts the minimum amount of harm or suffering necessary to complete a task. What’s wrong with deploying a weapon like this? There’s no one to hold responsible if it makes a mistake, but we know it doesn’t make mistakes, so who cares?

And it would be odd to pound your fist and say we shouldn’t deploy it because it can’t really deliberate. You mean to tell me that we should insist on using flawed humans—who will surely kill people unnecessarily, who shoot from the hip without thinking too deeply, who are susceptible to bias, error, fatigue, vengeance and the whole raft of human imperfections—just so we can rest easy that our soldiers have the ability to deliberate? That seems odd, since you’re condemning to death some people who wouldn’t otherwise die if we used a perfect autonomous weapon.

We sought an argument that could ground a stronger objection against autonomous weapons, one that wouldn’t melt away when the technology becomes reliable enough.

Q: So, how is your argument different?

A: Well, we give two arguments. The first starts by pointing out that many philosophers—and probably most ordinary people—believe that morality cannot be boiled down to a list of instructions. This view goes back to Socrates—that acting morally is a “craft” that takes experience, practice and nuance. And it requires something else—judgment or intuition or a moral sense—that is not expressible in words. If that’s right, then morality could never be captured in a set of requirements and just handed over to a machine to follow perfectly.

Q: But that sounds like a contingent argument, like the others. Couldn’t a computer learn to be perfect, even if morality is more than just a list of instructions? Machine learning is past the point where we can “feed” computers lists of instructions. Couldn’t a machine learn to imitate a human being reliably, including its moral behavior?

A: Yes, we consider that possibility, which is why we give a second argument just in case. There, we argue that no matter how sophisticated a machine becomes, it will never be able to act for the right reasons.

Q: What’s so important about acting for the right reasons?

A: Well, for one thing, it has a strong intuitive appeal. There’s a difference between someone who gives flowers to his crush in order to endear himself to her, and someone who gives the flowers to make his crush’s current boyfriend jealous. There’s a difference between someone who saves a drowning child out of pure selflessness, and someone who does it hoping to be rewarded handsomely. The difference in these cases is that one action is performed for the right reasons and the other is not, and that difference matters morally.

Relatedly, if we’re comfortable deploying machines that can’t act for good reasons, then we should be equally comfortable deploying soldiers we know to be psychopathic, even if they’re well-behaved. (That’s because a lot of recent research in moral psychology seems to show that psychopaths do not feel the force of moral reasons the way non-psychopathic people do.) Most people would think there’s a serious problem with deploying an army of even well-behaved psychopaths, and we think those people are right.

Secondly, there’s a long history in the military ethics tradition of people arguing that soldiers should fight in war only for the right reasons. Augustine says, for example, that soldiers shouldn’t fight for personal gain or spite, but should only be motivated by a desire to establish a just peace. And today we think it makes a tremendous difference why a state goes to war—this is why there is such contention over the US invasion of Iraq. Whether you think that was justified or not, or whether it has been “worth it” or not, a lot of the disagreement hinges on why the Bush administration took the country to war in the first place. There’s a huge difference between a humanitarian intervention and a resource grab, even if they look the same from the outside. Your motivations matter deeply in wartime.

Q: That makes sense. But when you criticize robots for not being able to act for the right reasons, or any reason at all, aren’t you anthropomorphizing them—holding them to a standard that shouldn’t apply to tools? We don’t criticize bullets or cruise missiles for not being able to act for the right reasons, for instance. Is this a double standard?

A: Naturally, some people will liken autonomous weapons to very smart cruise missiles, and this is something we address in the paper. We think the comparison is faulty. Think of it this way: cruise missiles and bullets do not make the decision that specific people should die. For every cruise missile and bullet, there is some human behind it who made that decision and who transmits their intention through the weapon. That human acts for reasons; the weapon does not decide anything.

Autonomous weapons are not like that—they make lethal decisions on their own. They are not simply transmitting the intention of some other human; they are taking over the role that has traditionally been filled by humans, namely, deciding which humans live and die. So, while it may be odd to talk about robots having intentions or having to act for reasons, it’s not strange to see them as soldiers. This is precisely what they are designed and deployed to be.

Q: So, what’s the point of all this?

A: We end by considering a possible future in which machines do become much better than us at making morally important decisions. Suppose machines become moral saints. Shouldn’t we, then, eagerly turn over all of our difficult decisions to machines? Not just whom to target in war, but whether to go to war at all, whom to marry, what occupation to choose, and so on?

After all, they would make these decisions better than we would—they would know what’s best for us and best for the world. Shouldn’t we outsource all of our autonomy to machines, if they’re so good at making decisions?

Of course, most people will find something troubling or repugnant in this. Even if we are fallible decision makers with flawed consciences, it could be that simply grappling with difficult moral decisions is one of the things that makes our lives valuable and meaningful.

I'm the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where I'm a philosophy professor.