AI experts call for immediate action against autonomous weapons

An international group of artificial intelligence and robotics experts has signed an open letter to the United Nations calling for a halt to the use of autonomous weapons, which they say threaten a ‘third revolution in warfare’.

Elon Musk, founder of Tesla, SpaceX and OpenAI, and Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind, are among 116 signatories of the letter, which was issued at the International Joint Conference on Artificial Intelligence (IJCAI 2017), taking place in Melbourne this week.

“As companies building the technologies in artificial intelligence and robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” the letter states.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

In December last year, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on the use of autonomous weapons, and 19 have already called for an outright ban. Australia is not among those that have already stated their position.

A Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems was due to meet for the first time this week, but the event was cancelled “due to a small number of states failing to pay their financial contributions to the UN”, the organisers of the letter said.

The group is now due to meet for the first time in November.

“We entreat the high contracting parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilising effects of these technologies,” the letter continues.

“Lethal autonomous weapons threaten to become the third revolution in warfare. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” it states.

The letter follows an earlier one published in 2015 and endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky, among many others.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, organised both letters and said action to curb the threat of autonomous weapons was required immediately.

“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he said.

“Two years ago at this same conference, we released an open letter signed by thousands of researchers working in AI and robotics calling for such a ban. This helped push this issue up the agenda at the United Nations and begin formal talks. I am hopeful that this new letter, adding the support of the AI and robotics industry, will add urgency to the discussions at the UN that should have started today,” he added.

Autonomous weaponry is for the most part still at the prototype stage, although the technology is rapidly improving.

Several nations with advanced militaries, particularly the United States, China, Israel, South Korea, Russia, and the United Kingdom, are moving toward systems that would give greater combat autonomy to machines, according to the international coalition Campaign to Stop Killer Robots.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” said Walsh. “It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.”

Ryan Gariepy, founder and CTO of Clearpath Robotics and one of the first to sign the open letter, added:

“The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action.”

“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability. The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”

Yoshua Bengio, founder of Element AI and a leading deep learning expert, said: “I signed the open letter because the use of AI in autonomous weapons hurts my sense of ethics, would be likely to lead to a very dangerous escalation, because it would hurt the further development of AI’s good applications, and because it is a matter that needs to be handled by the international community, similarly to what has been done in the past for some other morally wrong weapons – biological, chemical, nuclear.”

Stuart Russell, founder and vice-president of Bayesian Logic, added: “Unless people want to see new weapons of mass destruction, in the form of vast swarms of lethal microdrones, spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”

Copyright 2019 IDG Communications. ABN 14 001 592 650. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of IDG Communications is prohibited.