Ian Kerr: A ban on killer robots is the ethical choice

It is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban.

Ian Kerr

Published on: July 31, 2015 | Last Updated: July 31, 2015 2:13 PM EDT

A file photo taken on April 23, 2013 shows a mock "killer robot" pictured in central London during the launching of the Campaign to Stop "Killer Robots," which calls for the ban of lethal robot weapons that would be able to select and attack targets without any human intervention. CARL COURT / AFP/Getty Images

This mentality — be part of the steamroller or be part of the road — and the existential risks that emerging technologies impose are precisely what more than 16,000 AI researchers, roboticists and others in related fields are now seeking to avoid. Like the many chemists and biologists who provided broad support for the prohibition of chemical and biological weapons, these AI researchers and roboticists don’t want to see anybody steamrolled by killer robots.

That’s right. Killer robots.

Killer robots are offensive autonomous weapons that can select and engage targets without any need for human intervention. In an open letter recently presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, experts describe the prospect of killer robots as “the third revolution in warfare, after gunpowder and nuclear arms.” The letter calls for “a ban on offensive autonomous weapons” that can be engaged without meaningful or effective human control.

The list of signatories calling for a ban on offensive killer robots is impressive. Anyone who consumes popular media surely knows by now that it includes the likes of Tesla and SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, physicist Stephen Hawking, and numerous highly influential academics such as Noam Chomsky and Daniel Dennett.

Unsurprisingly, the popular press has ignored a number of notable female signatories worthy of explicit mention (hat tip to Mary Wareham): Higgins Professor of Natural Sciences Barbara Grosz of Harvard University, IBM Watson design leader Kathryn McElroy, Martha E. Pollack of the University of Michigan, Carme Torras of the Robotics Institute at CSIC-UPC in Barcelona, Francesca Rossi of Padova University and Harvard University, Sheila McIlraith of the University of Toronto, Allison Okamura of Stanford University, Lucy Suchman of Lancaster University, Bonnie Webber of Edinburgh University, Mary-Anne Williams of the University of Technology Sydney, and Heather Roff of the University of Denver, to name a few.

As a technological concept, the killer robot represents a stark shift in military policy: a willful, intentional and unprecedented removal of humans from the kill decision loop. Just set the robots loose and let them do our dirty work.

For this reason and others, the United Nations has convened a series of meetings under its Convention on Certain Conventional Weapons, hoping to better understand killer robots and their social implications.

To date, the debate has mostly focused on three issues: How far off are we from developing advanced autonomous weapons? Could such technologies be made to comport with international humanitarian law? Could a ban be effective if some nations do not comply?

On the first issue, the open letter reveals the stunning assessment of many technologists that the robot revolution is “feasible within years, not decades, and the stakes are high.”

Of course, this is largely speculative and the actual timeline is surely longer once one layers on top of the technology the requirements of the second issue, that killer robots must comport with international humanitarian law. That is, machine systems operating without human intervention must be able to: successfully discriminate between combatants and non-combatants in the moment of conflict; morally assess every possible conflict in order to justify whether a particular use of force is proportional; and comprehend and assess military operations sufficiently well to be able to decide whether the use of force on a particular occasion is of military necessity.

To date, there is no obvious solution to these non-trivial technological challenges.

However, in my view, it is the stance taken on the third issue — whether it would be efficacious to ban killer robots in any event — that makes this open letter profound. This is what made me want to sign the letter.

Although engaged citizens sign petitions every day, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban. The ban is an important signifier. Even if it is self-serving insofar as it seeks to avoid “creating a major public backlash against AI that curtails its future societal benefits,” by recognizing that starting a military AI arms race is a bad idea, the letter quietly reframes the policy question of whether to ban killer robots as one of morality rather than efficacy. This is crucial, as it provokes a fundamental reconceptualization of the many strategic arguments that have been made for and against autonomous weapons.

When one considers the matter from the standpoint of morality rather than efficacy, it is no longer good enough to say, as careful thinkers like Evan Ackerman have said, that “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.”

We know that. But that is not the point.

Delegating life-or-death decisions to machines crosses a fundamental moral line — no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage killer robots goes to the core of our humanity.

The Supreme Court of Canada has had occasion to consider the role of efficacy in determining whether to uphold a ban in other contexts. I concur with Justice Charles Gonthier, who astutely opined:

“(T)he actual effect of bans … is increasingly negligible given technological advances which make the bans difficult to enforce. With all due respect, it is wrong to simply throw up our hands in the face of such difficulties. These difficulties simply demonstrate that we live in a rapidly changing global community where regulation in the public interest has not always been able to keep pace with change. Current national and international regulation may be inadequate, but fundamental principles have not changed nor have the value and appropriateness of taking preventive measures in highly exceptional cases.”

Killer robots are a highly exceptional case.

Rather than asking whether we want to be part of the steamroller or part of the road, the open letter challenges our research communities to pave alternative pathways. As the letter states: “AI has great potential to benefit humanity in many ways, and … the goal of the field should be to do so.”

In my view, perhaps the chief virtue of the open letter is its implicit recognition that scientific wisdom posits limits. This is something Einstein learned the hard way, prompting his subsequent humanitarian efforts with the Emergency Committee of Atomic Scientists. Another important scientist, Carl Sagan, articulated this insight with stunning, poetic clarity:

“It might be a familiar progression, transpiring on many worlds – a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.”

Recognizing the ethical wisdom of setting limits and living up to the demands of morality is difficult enough. Figuring out the practical means necessary to entrench those limits will be even tougher. But it is our obligation to try.

Ian Kerr holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, where he teaches a course called The Laws of Robotics and is co-author of the forthcoming book Robot Law, which will be published by Edward Elgar in December.
