University of Sheffield – Autonomous Weapons Debate

This isn’t a debate to be shelved until we reach some distant sci-fi future. As we delegate more and more tasks to robots, it’s time to ask ourselves: Are we comfortable with giving them the power to decide when to kill?

In April 2015, a multilateral meeting is being held at the United Nations in Geneva. Representatives from the Campaign to Stop Killer Robots hope it will be another step on the path towards a ban on “lethal autonomous weapons systems”.

Noel Sharkey

One of the leading campaigners pushing for this ban is Noel Sharkey, a Professor of Artificial Intelligence and Robotics at the University of Sheffield.

An experienced robotics researcher, Professor Sharkey is now focused on the ethical application of robots in fields from health and education to the military. His expertise and campaigning have helped to shape the debate over the issue, and he has met with several military figures and governments to persuade them to take a stand against autonomous machines with the power to kill.

“I have a real problem with autonomous weapons systems,” he says. “I’m most concerned about the decision to kill being delegated to a machine, giving it the power to pull a trigger without human intervention.”

Professor Sharkey has spent more than 15 years researching robot learning and behaviour. During this time, he has worked with colleagues on technical solutions for problems such as how to locate robots quickly and reliably, and how to help them avoid obstacles. He started researching the ethics behind the use of robots in 2005, and began speaking out against Autonomous Weapons Systems in 2007.

Professor Philip Alston – the former UN Special Rapporteur on Extrajudicial Executions – says that “because of the depth of his technical and scientific expertise, his work is an indispensable reference point for those working on these issues”.

Professor Sharkey’s concern lies with the development of systems that can identify a target and act on it themselves, without a human hand in the process. As part of his fight against this development, he has appeared in print, on radio and television, and online over 1,000 times, in more than 50 countries.

He believes that these robots do not meet the standards for war set out in the Geneva and Hague conventions, as they will be unable to acceptably distinguish combatants from non-combatants, and the loss of life or damage will outweigh the military advantage gained.

He is also concerned that autonomous robots would be more vulnerable to malfunctions and cyber attacks, and that they would be more likely to strike illegitimate targets.

In a 2007 article in the Guardian newspaper, he said: “In reality, a robot could not pinpoint a weapon without pinpointing the person using it or even discriminate between weapons and non-weapons.

“I can imagine a little girl being zapped because she points her ice cream at a robot to share. Or a robot could be tricked into killing innocent civilians.”

With Dinka Dumicic (left) and Jody Williams (right)

Professor Sharkey has published several pieces in science and engineering journals, analysing whether robots have the cognitive capability to make lethal decisions. He has used his technical knowledge to explore questions of autonomy, and has argued that the failures that currently affect drones are likely to be even more pronounced with autonomous weapons systems. This work has been cited in academic publications more than 185 times.

His research led him to co-found the International Committee for Robot Arms Control in 2009. The ICRAC gathers experts in robot technology, ethics, international relations, arms control and human rights law to push for limits on the use of robot technology. Work with non-governmental organisations such as Human Rights Watch culminated in the launch of the Campaign to Stop Killer Robots in 2013. Human Rights Watch describes him as “one of the most effective spokespersons” in an international coalition of more than 40 NGOs, including Amnesty International, the International Peace Initiative and the Nobel Women’s Initiative.

Opponents of his line of argument have suggested that safeguards are preferable to a ban, and that the use of such machines could be restricted to non-human targets. However, Professor Sharkey counters that this approach is a “foot in the door” that can be opened further all too quickly.

Sharkey has already expressed concerns over Taranis, the British unmanned supersonic drone, which is pencilled in for introduction alongside manned combat aircraft after 2030.

“It might start off slowly, but if you look at aerial bombardment, people like Roosevelt initially tried to get treaties to stop that. Submarines were once completely against the rules of war as you were meant to show your colours, but they were so useful that eventually everyone adopted them.

“I’m convinced that if these things are in development, they will be used.”

As a result, he has been active in trying to influence policy-makers internationally. He has addressed senior military figures in 26 countries, and his research is referenced in officer training materials in the UK, US, France, and the Netherlands.

He has briefed UK parliamentarians on several occasions, including a cross-party briefing of around 20 MPs and peers in April 2013. He has also addressed groups in Germany and France, and was cited extensively in the 2013 European Parliament policy document on the use of unmanned robots in warfare.

Partly as a result of such efforts, states parties to the UN Convention on Conventional Weapons agreed last year to meet to discuss such lethal weapons. Their second meeting, this month, is a continuation of that discussion.

ICRAC at the UN

“A lot of people in the military support us”, says Professor Sharkey. “They don’t want to see these weapons, as there’s an accountability gap. Who is responsible if something goes wrong? Is it the commander, or the creator of the software?

“In fact, the UK is the only country that has taken a definitive stand on not banning them. The argument from our Ministry of Defence is that it has no intention of having autonomous weapons, but they will not support a moratorium or a ban.”

If the April 2015 talks progress well, the next step in the process would be the establishment of a group of governmental experts to discuss the best course of action.

“The process could last another four or five years. Maybe it could take longer, but we hope not. We’ll have to wait and see.

“As an academic it’s been fascinating. I wrote a lot of papers and gave a lot of talks and people were very interested, but I didn’t know what the best course of action was. As soon as Human Rights Watch and other major Non-Governmental Organisations got involved, they knew exactly what to do and put me in touch with ambassadors and into diplomatic circles.

“We’re now very much part of a large international civil society movement.”


About

There were 280 impact case studies submitted to the 2014 Research Excellence Framework (REF) sub-panel 11, Computer Science and Informatics, by 87 institutions. Over 80% of the case studies had some form of economic impact, including spin-out businesses created by universities; software tools and techniques developed by research projects, which have improved the efficiency of computing practitioners in both large and small organisations; and standard security and communication protocols in daily use by millions of users. The annual revenue generated by those spin-outs that included figures in the case studies was in excess of £170 million, and they had nearly 1,900 employees. The additional sales revenues attributed to the academic research in industries such as aerospace, telecommunications, computing and energy were about £400 million. Some of the impact has been in the form of public policy, for example in identifying security risks, informing healthcare decisions or shaping public debate on ethical issues. There has also been considerable social impact in terms of new healthcare procedures and treatments, as well as aids for disabled or elderly people.

The following figure indicates the main types of impact in the submitted case studies listed in the Appendix.

The sub-panel assessors, who included eight people from industry and government appointed only to assess impact, recommended about fifty case studies as being potentially suitable for publicising UK academic Computer Science impact. These were not the fifty highest-scoring case studies but were selected on the basis of potential interest to the general public. An initial set of twenty case studies was selected from these to be written up in a form that makes them more accessible to non-technical readers. The selection criteria included ease of understanding of the technology underpinning the impact; potential interest to the public; examples from a wide range of different types of impact, both social and economic; and evidence that excellent impact can be generated by a range of universities with both large and small submissions to REF, including post-92 universities, the Russell Group and other universities.

The 2014 REF was the first formal assessment of impact as part of the overall research assessment of UK academic institutions. The sub-panel assessors were very impressed by the extent to which UK academic research has had social and economic impact within the UK, and often worldwide. The range of impact case studies included:

Spin-out companies from universities, some of which had then been taken over by large international companies.

Software tools and techniques, either made available open-source or sometimes licensed to particular organisations, with impact in the automotive, aerospace, energy, media, gaming, healthcare, pharmaceutical, transport and retail industries, as well as in the computing industry itself.

Contributions to many different international standards, e.g. in telecommunications, the web, compilers and security.

Impact on government, healthcare and security policy as well as on public awareness about ethical and social issues.

The REF criteria stated that the research underpinning the impact must have taken place between 1 January 1993 and 31 December 2013 and be of a quality that is recognised internationally in terms of originality, significance and rigour (i.e. at least 2* quality in REF scoring), while the actual impact must have taken place between 1 January 2008 and 31 July 2013. The underpinning research described in the case studies ranged from the development of specific protocols, to formal methods used to reason about software design, to machine learning techniques. It was often of the highest quality, with publications in top conferences and journals.

The twenty case studies selected for this report were picked to reflect the range of those submitted, and include spin-out companies, software tools and techniques, and the commercialisation of open-source software, as well as a number of healthcare-related applications and aids for people with disabilities. Some case studies show impact on public policy, including issues relating to electronic payments, autonomous weapons systems and the evaluation of health information systems.

The working group managing the report included Jon Crowcroft, David Duce, Ursula Martin, David Robertson and Morris Sloman. John Hill wrote the impact case study texts in consultation with the relevant academics, and Naomi Atkinson was responsible for the design and layout of the report.