Open letter by leaders of leading robotics & artificial intelligence companies is launched at the world’s biggest artificial intelligence conference as UN delays meeting till later this year to discuss the robot arms race.

Media Release

An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries urges the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.

A key organiser of the letter, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, released it at the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, the world’s pre-eminent gathering of top experts in artificial intelligence (AI) and robotics. Walsh is a member of the IJCAI 2017 conference committee.

The open letter is the first time that AI and robotics companies have taken a joint stance on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Certain Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” it states, concluding with an urgent plea for the UN “to find a way to protect us all from these dangers.”

Signatories of the 2017 letter include:

Elon Musk, founder of Tesla, SpaceX and OpenAI (USA)

Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)

Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons. The 2015 letter was signed by thousands of researchers in AI and robotics working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple Co-founder Steve Wozniak and cognitive scientist Noam Chomsky, among others.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” said Walsh. “It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he added.

“Two years ago at this same conference, we released an open letter signed by thousands of researchers working in AI and robotics calling for such a ban. This helped push this issue up the agenda at the United Nations and begin formal talks. I am hopeful that this new letter, adding the support of the AI and robotics industry, will add urgency to the discussions at the UN that should have started today.”

“The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action,” said Ryan Gariepy, founder & CTO of Clearpath Robotics, who was the first to sign.

“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he added. “The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”

Yoshua Bengio, founder of Element AI and a leading ‘deep learning’ expert, said: “I signed the open letter because the use of AI in autonomous weapons hurts my sense of ethics, would be likely to lead to a very dangerous escalation, because it would hurt the further development of AI’s good applications, and because it is a matter that needs to be handled by the international community, similarly to what has been done in the past for some other morally wrong weapons (biological, chemical, nuclear).”

Stuart Russell, founder and Vice-President of Bayesian Logic, agreed: “Unless people want to see new weapons of mass destruction - in the form of vast swarms of lethal microdrones - spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”

DOWNLOADS AVAILABLE FOR MEDIA USE

Portraits: Photos of Toby Walsh with UNSW’s Baxter Collaborative Robot, made by Rethink Robotics (a U.S. company founded by Australian Rodney Brooks). Credit: Grant Turner/UNSW.

Killer robots: Images of autonomous weapon systems currently in use or being developed.

The 2017 Open Letter: An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries.

The International Joint Conference on Artificial Intelligence (IJCAI) is the world’s leading conference on artificial intelligence. It has been held every two years since 1969, and annually since 2015. It attracts around 2,000 of the best researchers working in AI from around the world. IJCAI 2017 is currently being held in Melbourne, Australia.

A news conference will be held at 11am on Monday 21 August 2017 to open the IJCAI 2017 conference in Banquet Room 201 of the Melbourne Exhibition and Conference Centre, where we will answer questions on the open letter and on the technical, legal and social challenges posed by autonomy, especially on the battlefield. Address: 1 Convention Centre Pl, South Wharf VIC 3006.

Two years ago, at IJCAI 2015, more than 1,000 AI researchers released an open letter calling for a ban on lethal autonomous weapons. Signatories to this letter have now grown to over 17,000.

As part of Melbourne’s Festival of Artificial Intelligence, there will be a public panel on Wednesday 23 August, 5.30 to 7.00pm, entitled ‘Killer robots: The end of war?’. The panel features Stuart Russell, Ugo Pagallo and Toby Walsh. This is part of AI Lounge, a conversation about artificial intelligence open to the public and media every night from 21 to 25 August 2017 (see http://tinyurl.com/ailounge).

Expert Reaction

These comments have been collated by the Science Media Centre to provide a variety of expert perspectives and reflect independent opinion on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.

Distinguished Professor Mary-Anne Williams is Director, Disruptive Innovation at the Office of the Provost at the University of Technology Sydney (UTS). She is also Founder and Director of Innovation and Enterprise Research Lab (The Magic Lab) and a Fellow at Stanford University.

From its earliest beginnings, human history is a tale of an arms race littered with conflicts aimed at achieving more power and control over resources.

In the near future, weaponised robots could be like the velociraptors in Jurassic Park, with agile mobility and lightning-fast reactions, able to hunt humans with high-precision sensors augmented with information from computer networks. Imagine a robot rigged as a suicide bomber, able to detect body heat or a heartbeat, that might be remotely controlled or able to make its own decisions about who and what to seek and destroy.

If we built a killer robot today, it would be dangerous in different ways - more like an unhappy, unstable toddler wielding an AK-47, wanting to kill "bad" people. Robots today have limited perception and mobility capabilities in real-world applications, but they are rapidly being enhanced with intelligence and autonomy.

There is no question robots can be developed at scale to efficiently seek and kill humans, possibly any human. The risk to human life is real, and so too is robots' vulnerability to hacking and to being used as sophisticated tools for espionage and terrorism.

I signed the killer robot ban in 2015 because state-sponsored killer robots are a terrifying prospect.

However, enforcing such a ban is highly problematic and it might create other problems; stopping countries such as Australia from developing defensive killer robots would leave us vulnerable to other countries and groups that ignore the ban.

Furthermore, today the potential loss of human life is a deterrent for conflict initiation and escalation, but when the main casualties are robots, the disincentives change dramatically and the likelihood of conflict increases.

So a ban on killer robots cannot be the only strategy. The nature of destructive weapons is changing; they are increasingly DIY. One can 3D print a gun, launch a bomb from an off-the-shelf drone, and turn ordinary cars into weapons.

Society and nations need much more than a killer robot ban.

Last updated: 21 Aug 2017 4:44pm

Professor Anthony Finn is Director of the Defence & Systems Institute, Associate Head of the School Engineering – Research, and an Affiliate Member of the Future Industries Institute at the University of South Australia

Lethal autonomous robots differ from existing ‘fire-and-forget’ weapons because, although both prosecute targets without human involvement once their programming paradigms are satisfied, for existing weapons the estimation of military advantage versus collateral damage is undertaken by humans, and humans remain accountable under international humanitarian law.

Decisions regarding the legitimacy of lethal autonomous robots thus hinge on whether they comply with international humanitarian law. If lethal autonomous robots are to comply with international humanitarian law, they cannot be indiscriminate: they must be constrained by paradigms that classify targets by signature or region.

Ideally, lethal autonomous robots would balance key principles of international humanitarian law - discrimination and proportionality - using sophisticated algorithms yet to be developed; although it remains an open question as to whether they would ever successfully achieve this all of the time: it challenges humans.

However, the standard of international humanitarian law is one of reasonableness, not perfection; and the declaration of a weapon as unlawful centres on its inability to be directed discriminately at lawful targets under any circumstances, combined with the suffering caused by the effect of the weapon.

This is not changed by the autonomy of an engagement.

Finally, lethal autonomous robots might well reduce collateral damage - just as the current arsenal of fire-and-forget weapons has. This negates the notion that lethal autonomous robots should be declared unlawful per se.

The key is to establish circumstances under which their use might be permitted and to develop practical legal frameworks that allocate responsibility for infringements.

Last updated: 21 Aug 2017 4:42pm

James Harland is an Associate Professor in Computational Logic in the School of Computer Science and IT at RMIT University in Melbourne, Australia

In the past, technology has often advanced much faster than legal and cultural frameworks, leading to technology-driven situations such as mutually assured destruction during the Cold War, and the proliferation of land mines.

I think we have a chance here to establish this kind of legal framework in advance of the technology for a change, and thus allow society to control technology rather than the other way around.

I have seen first-hand the appalling legacy of land mines in countries such as Vietnam (where RMIT has two campuses), where hundreds of people are killed or maimed each year from mines planted over 40 years ago.

Last updated: 21 Aug 2017 1:10pm

Dr. Michael Harre is a Lecturer in Complex Systems Group and PM Program in the Faculty of Engineering & Information Technologies at the University of Sydney

It is an excellent idea to consider the positives and the negatives of autonomous systems research and to ban research that is unethical.

An equally important question is the potential for non-military autonomous systems to be dangerous, such as trading bots in financial markets that put at risk billions of dollars.

Soon we will also have autonomous AIs that have a basic psychology, an awareness of the world similar to that of animals. These AIs may not be physically dangerous, but they may learn to be dangerous in other ways, just as Tay, Microsoft's chat-bot, learned to be anti-social on Twitter.

So what are our ethical responsibilities as researchers in these cases? These issues deserve a closer examination of what constitutes 'ethical' research.

Last updated: 21 Aug 2017 1:02pm

Multimedia:

Raytheon Phalanx Close-In Weapon System

Raytheon Phalanx Close-In Weapon System, which automatically searches, detects, evaluates, tracks, engages and destroys incoming missiles. The same technology could one day be used against living targets.

File Size: 3.3 MB

Attribution: Raytheon

Permission Category: Free to share (must credit)

Last Modified: 24 Oct 2017 6:33pm

Note: High resolution files are only available for download by registered journalists.

Prof Toby Walsh with a Baxter robot

Toby Walsh, Professor of Artificial Intelligence at the University of New South Wales, has led the international campaign at the United Nations to ban lethal autonomous weapons, or 'killer robots'.

File Size: 2.0 MB

Attribution: Grant Turner/UNSW

Permission Category: Free to share (must credit)

Last Modified: 24 Oct 2017 6:38pm

Prof Toby Walsh with a Baxter robot

Toby Walsh, Professor of Artificial Intelligence at the University of New South Wales, has led the international campaign at the United Nations to ban lethal autonomous weapons, or 'killer robots'.

File Size: 1.0 MB

Attribution: Grant Turner/UNSW

Permission Category: Free to share (must credit)

Last Modified: 24 Oct 2017 6:32pm

Qinetiq Modular Advanced Armed Robotic System

The Qinetiq MAARS (Modular Advanced Armed Robotic System), an unmanned ground vehicle for reconnaissance, surveillance and target acquisition in battle, currently under development.

File Size: 2.5 MB

Attribution: Qinetiq

Permission Category: Free to share (must credit)

Last Modified: 24 Oct 2017 6:38pm

MQ-9 Reaper drone

The MQ-9 Reaper drone is one of the lethal weapons whose operations in battle could easily be automated.

File Size: 1.1 MB

Attribution: US Department of Defense

Permission Category: Free to share (must credit)

Last Modified: 24 Oct 2017 6:30pm

Raytheon Phalanx Close-In Weapon System

The Raytheon Phalanx Close-In Weapon System used by the Australian Navy, which automatically identifies and destroys incoming missiles. While purely defensive, it's an example of an autonomous weapon that could be adapted for lethal use.

File Size: 683.1 KB

Attribution: Raytheon

Permission Category: Free to share (must credit)

Last Modified: 24 Oct 2017 6:40pm
