There is nothing “new” in this “new report” apart from yet another synonym for “killer robot” to add to an already over-long list that includes lethal autonomous robot, lethal autonomous weapons system, unmanned weapons system, autonomous weapons system and autonomous weapon. There are myriad others. We now have “fully autonomous weapon” to add as well.

I’ll stick to the term lethal autonomous weapons system (LAWS) mainly because that is what the diplomats attending the Expert Meeting on the Convention on Certain Conventional Weapons used last year. And that is the term they are using this year.

LAWS is a sensible term, neither “emotive” (Heyns, 2013) nor an “insidious rhetorical trick” (Lokhorst & van den Hoven, 2011). It covers the complex distributed weapons systems, with multiple integrated components, that are actually fielded today. These are likely to evolve into “off the loop” LAWS and, in the absence of regulation, from that point into “beyond the loop” weapons systems that might have “machine learning” and “genetic algorithms” that “evolve” and “adapt” and might indeed turn into Skynet in due course.

Walking, talking, human-scale, titanium-skulled killer robots with beady red eyes are not actually fielded by anybody yet except by James Cameron in his Terminator flicks. But they are scarier, and the hope of the Scare Campaign is that fright will make right.

Indeed this kind of tabloid trash “argument” might get a headline, but to persuade an audience of diplomats, who are very bright and very sharp, the calibre of the argument needs to be far better than the vague and recycled confusions of Mind the Gap.

The report makes various points about “the lack of accountability for killer robots,” all of which have been made before. The two-word solution for the “problem” of “killer robot accountability” would be “strict liability,” as suggested by the Swedish delegation (among others) last year.

Scare campaigners please put that in your draft Protocol VI of the CCW.

Better still, how about actually drafting a Protocol VI and putting it out for discussion?

Clarify what exactly it is that you want.

Mind the Gap does have some mildly original confusion about the meaning of “autonomous” and some spectacular question begging to accompany the well-worn rhetorical tricks.

Line 1:

Fully autonomous weapons, also known as “killer robots,” raise serious moral and legal concerns because they would possess the ability to select and engage their targets without meaningful human control.

Whoa!

So we open with the customary “emotive” and “insidious” tabloid language “killer robots,” we use this recycled and as yet undefined term “meaningful human control” and we blithely assert that fully autonomous weapons (whatever that means) do not have meaningful human control (whatever that means). We beg and blur the decisive question right from the start.

Later in the paper “fully autonomous weapons” are defined as human “off the loop” as distinct from “in the loop” and “on the loop” weapons. This assumes that a strictly causal, human-programmed artefact making delegated decisions on the basis of objective sensor data according to human-defined policy norms is not in any sense under “meaningful human control.”
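To make the point concrete, here is a minimal sketch in Python of what such a strictly causal, human-programmed artefact amounts to. Every name and threshold here (Track, HOSTILE_SIGNATURES, MIN_ENGAGE_SPEED_MPS) is a hypothetical illustration, not any fielded system’s interface:

```python
# A minimal sketch of an "off the loop" decision artefact: the machine
# applies human-defined policy norms to objective sensor data. All names
# and thresholds below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Track:
    """Objective sensor data about a detected object."""
    signature: str      # e.g. a radar/IFF classification
    speed_mps: float    # measured speed in metres per second
    closing: bool       # is the object closing on the defended asset?

# Human-defined policy norms, written and reviewed before deployment.
HOSTILE_SIGNATURES = {"anti-ship-missile", "artillery-shell"}
MIN_ENGAGE_SPEED_MPS = 200.0

def engage(track: Track) -> bool:
    """Delegated decision: apply the human-authored norms to the data.

    Every branch below was put here by a human; the machine
    contributes no norm of its own.
    """
    if track.signature not in HOSTILE_SIGNATURES:
        return False
    if track.speed_mps < MIN_ENGAGE_SPEED_MPS:
        return False
    return track.closing
```

On this view the “decision” is traceable, line by line, to human-defined policy, which is why asserting that it sits outside “meaningful human control” begs the question.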

Much confusion is added by careless “personification” of machines. Consider this line:

On the one hand, while traditional weapons are tools in the hands of human beings, fully autonomous weapons, once deployed, would make their own determinations about the use of lethal force.

This language of “their own determinations” suggests there is some cognitive element in the programmed machine that is not a human-defined instruction. There is no “I” in the robot. It has no values on the basis of which it can make choices.

Line 2:

Many people question whether the decision to kill a human being should be left to a machine.

People in real wars have been leaving the decision to kill human beings to machines since 1864 and probably earlier. The Union lost several men to Confederate “torpedoes” (landmines) on Dec 13th, 1864 in the storming of Fort McAllister at the end of Sherman’s infamous March to the Sea. Militaries continue to delegate lethal decisions to machines by fielding anti-tank and anti-ship mines which remain lawful “off the loop” weapons.

Line 2 is actually a very fair question, worthy of deeper analysis which, alas, you will not find in Mind the Gap. How exactly a “decision” differs from, say, a “reaction” or a “choice” (as defined in the Summa Theologica) is a deep and interesting philosophical question.

Moving on.

Fully autonomous weapons are weapons systems that would select and engage targets without meaningful human control. They are also known as killer robots or lethal autonomous weapons systems. Because of their full autonomy, they would have no “human in the loop” to direct their use of force and thus would represent the step beyond current remote-controlled drones.

The tacit assumption here is that the human “in the loop” will guarantee better human rights outcomes. “Meaningful human control” gave us the Somme, the Holocaust and the Rwandan Genocide. Frankly, I am not automatically signed on to this assumed Nirvana of “meaningful human control.”

Meaningful legal control is far more reassuring. And if a programmed robot can be engineered to do this better than the amygdalas of 18- to 25-year-old males with testosterone and cortisol pulsing through their blood-brain interfaces, then I do not (as yet) see compelling reasons why such R&D possibilities should be “comprehensively and pre-emptively” banned, especially on the basis of a conceptually muddled scare campaign expressed in tabloid language.

The ban argument is based on several claims: that robots cannot technically comply with core principles of IHL; that robots cannot discriminate between combatant and non-combatant; that robots cannot make proportionality calculations; and that robots cannot be held responsible. There are also appeals to moral intuition: robots should not make the decision to kill humans; robots should not have the power of life and death over humans. Finally, there are proliferation and cultural concerns: lethal robots will make bad governments worse, and robots will exacerbate the decline and extinction of martial valour already started by drone warfare.

REGULATION: LAWS regulation may be modelled upon Protocol II of the CCW, which regulated anti-personnel mines, defined the conditions of military necessity under which they could be used, and provided explicit rules to protect civilians.

Regulation would explicitly affirm the applicability of IHL to LAWS. It would require that norms be encoded in robots to constrain their behaviour so that they act in strict accordance with IHL. The main argument against a ban is that lethal autonomy (e.g. Aegis, Patriot, C-RAM, Iron Dome) already exists, and such systems will evolve further to make faster decisions. Human cognition will not be able to compete with the speed of machine decision making (e.g. a future air war between peers). The defence of service personnel in the conduct of their military duties (and of the nation more broadly) will therefore require increasing use of autonomous weapons. Thus they should be regulated, not banned.
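As a sketch of what “encoding norms to constrain behaviour” might look like, consider the following, loosely in the spirit of Arkin’s “ethical governor.” The Engagement fields and the two predicates are hypothetical stand-ins; real discrimination and proportionality assessments are of course vastly harder than these toy checks:

```python
# A sketch of norms encoded as constraints on an engagement decision.
# Everything here is hypothetical and illustrative, not a real
# system's interface.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Engagement:
    target_is_combatant: bool          # output of a discrimination check
    expected_civilian_harm: float      # estimated incidental harm
    expected_military_advantage: float

# Each encoded norm is a veto: a human-authored predicate that must hold.
Norm = Callable[[Engagement], bool]

IHL_NORMS: List[Norm] = [
    lambda e: e.target_is_combatant,               # discrimination
    lambda e: e.expected_civilian_harm
              <= e.expected_military_advantage,    # crude proportionality
]

def governor_permits(e: Engagement) -> bool:
    """Permit the engagement only if every encoded norm holds."""
    return all(norm(e) for norm in IHL_NORMS)
```

The design choice is the important part: the norms act as vetoes over a proposed engagement, so the constrained system can only ever be more restrictive than the unconstrained one.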

STATUS QUO: IHL is a broad framework designed to deal with the evolution of weapons and warfare. The key principles of necessity, discrimination, proportionality and responsibility are almost universally accepted and broad in scope. Thus if a robot cannot discriminate or calculate proportionality, if responsibility cannot be assigned for its use, or if the military acts it performs are not necessary, its use is already illegal, and there is no need to ban what is already banned. This is the UK position as stated by Under-Secretary Alistair Burt in 2013. (NB. When following the above link, search for the section entitled Lethal Autonomous Robotics some way down the page.)

Given the breadth of scope of LAWS, it may be unworkable to have a treaty instrument that enters into the detail of Protocol II to protect civilians for every conceivable system capable of lethal autonomy. (See in particular the Technical Annex to Protocol II of the CCW which goes into very specific detail defining the requirements for lawful anti-personnel landmines. Anti-personnel landmines as regulated by Protocol II were lawful from 1980 to 1999 when the Ottawa Convention became binding IHL.)

DEFER: There is always the option to have more discussion or to defer a decision. In the meantime, there might be a temporary moratorium or reliance on existing IHL pending an eventual choice of ban, regulation or reliance on IHL.

DEFINITIONS: LAWS are commonly divided into three types. Some refer to a fourth type.

Type I (“in the loop”): Human must approve a kill decision. Human must act to confirm the kill.

Type II (“on the loop”): Human can disapprove a kill decision, but the robot will kill in case of human inaction.

Type III (“off the loop”): No human approval or veto at the moment of engagement; the robot applies human-defined rules to sensor data strictly as programmed.

Type IV (“beyond the loop”): The robot is “free” to overwrite, reject, vary or supplement the rules put into it. This overwriting would be done on the basis of human-level “autonomous” features such as “machine learning” and “genetic algorithms.” The robot has “adaptive” features that allow it to go “beyond” its programming in some sense.

AUTONOMOUS: Definitions of “autonomous” vary. Some roboticists define “autonomous” simply as “no human finger on the trigger”; others take “autonomous” to imply some “machine learning” capability such that the robot could “create its own moral reality” (Boella & van der Torre, 2008). Robots that are “autonomous” in this latter sense do not yet exist (connected to weapons), though they are being researched. Above they are characterized as Type IV “beyond the loop” LAWS. Responsibility for the acts of such robots is a major issue. The Campaign to Stop Killer Robots would like to see such machines comprehensively and pre-emptively banned.
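To see why responsibility for Type IV systems is such an issue, consider a toy mutate-and-select loop, a bare-bones stand-in for the “genetic algorithms” mentioned above. Everything here is hypothetical; the point is only that an “adaptive” parameter can drift away from the value a human actually reviewed:

```python
# Toy illustration of "beyond the loop" adaptation: a simple
# mutate-and-select loop drifts a threshold away from the value a
# human reviewed. Entirely hypothetical; no fielded weapon works
# this way.

import random

def mission_score(threshold: float) -> float:
    """Toy stand-in for whatever objective the machine optimises."""
    return -(threshold - 0.3) ** 2   # peaks at 0.3 in this toy example

def evolve(threshold: float, generations: int = 1000) -> float:
    """Bare-bones 'evolution': mutate a candidate, keep it if better."""
    best = threshold
    for _ in range(generations):
        candidate = best + random.gauss(0.0, 0.05)     # mutate
        if mission_score(candidate) > mission_score(best):
            best = candidate                           # select
    return best

# A human deploys the system with a reviewed threshold of 0.9; after
# "adaptation" it may be running with a value no human ever approved.
print(evolve(0.9))   # drifts towards ~0.3
```

Whatever one thinks of a ban, assigning responsibility for a kill decision made under the drifted, never-reviewed threshold is genuinely hard, and that is the serious question buried under the scare language.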

BAN/REGULATE: There is obviously much grey area between the ban and regulate positions. Some nations (e.g. Pakistan) are calling for a ban on remotely piloted drones, which are Type I human “in the loop” weapons that are partly “autonomous.” Most nations are cautious and are seeking better definitions, in order to clarify what exactly should be banned and/or regulated.

Prepared By

This briefing note was prepared by Sean Welsh, a PhD student in the Department of Philosophy at the University of Canterbury.

The working title of his doctoral dissertation is Moral Code: Programming the Ethical Robot. Prior to embarking on his PhD, Sean worked in software development for 17 years.

ABSTRACT: A recent meeting (May 2014) of the United Nations in Geneva regarding the Convention on Certain Conventional Weapons considered the many issues surrounding the use of lethal autonomous weapons systems from a variety of legal, ethical, operational, and technical perspectives. Over 80 nations were represented and engaged in the discussion. This talk reprises the issues the author broached regarding the role of lethal autonomous robotic systems in warfare, and how, if developed appropriately, they may have the ability to significantly reduce civilian casualties in the battlespace. This can lead to a moral imperative for their use due to the enhanced likelihood of reduced noncombatant deaths. Nonetheless, if the usage of this technology is not properly addressed or is hastily deployed, it can lead to possible dystopian futures. This talk will encourage others to think of ways to approach the issues of restraining lethal autonomous systems from illegal or immoral actions in the context of both International Humanitarian and Human Rights Law, whether through technology or legislation.

BIOGRAPHY: Ronald C. Arkin is Regents’ Professor and Associate Dean for Research in the College of Computing at Georgia Tech. He served as STINT visiting Professor at KTH in Stockholm, Sabbatical Chair at the Sony IDL in Tokyo, and in the Robotics and AI Group at LAAS/CNRS in Toulouse. Dr. Arkin’s research interests include behavior-based control and action-oriented perception for mobile robots and UAVs, deliberative / reactive architectures, robot survivability, multiagent robotics, biorobotics, human-robot interaction, robot ethics, and learning in autonomous systems. Prof. Arkin served on the Board of Governors of the IEEE Society on Social Implications of Technology, the IEEE Robotics and Automation Society (RAS) AdCom, and is a founding co-chair of IEEE RAS Technical Committee on Robot Ethics. He is a Distinguished Lecturer for the IEEE Society on Social Implications of Technology and a Fellow of the IEEE.