Although the letter, first reported by the Guardian, notes that "we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so", it concludes that "this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow".

The UK says it is not developing lethal AI, but the potential to build such weapons already exists and is developing fast -- a recent report into the future of warfare commissioned by the US military predicts "swarms of robots" will be ubiquitous by 2050. In response, experts and high-profile figures like Musk have made repeated calls to limit the development of deadly AI, even as peaceful autonomy grows more central to virtually every other area of tech and industry. The Future of Life Institute announced in June it would use a $10m donation from Elon Musk to fund 37 projects aimed at keeping AI "beneficial", with $1.5m dedicated to a new research centre in the UK run by Oxford and Cambridge universities.

The latest letter starts by defining autonomous weapons as those which "select and engage targets without human intervention", including quadcopters able to search for and kill people, but not remotely piloted missiles or drones. It also lists the arguments usually made in favour of such machines -- such as reducing casualties among soldiers.

But for the academics and figures who signed the letter, AI weapons are potentially more dangerous than nuclear bombs. "Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.

"Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity."

The letter also notes specifically that most AI researchers do not want to "tarnish their field" by contributing to lethal AI and "by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits". "Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons."

Prompted in part by anti-autonomous weapons pressure groups, the United Nations debated the potential for a global ban on lethal autonomous weapons systems (Laws) earlier this year -- a ban the UK has officially opposed. The Foreign Office stated earlier this year that "at present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area".

Also earlier this year, Stuart Russell, professor of computer science at the University of California, Berkeley, wrote in the journal Nature that two programmes commissioned by the US Defence Advanced Research Projects Agency (Darpa) would -- if successful -- "foreshadow planned uses" of killer robots, and potentially contravene the Geneva Convention. "As flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases," Russell said. "They have a shorter range, yet they must be large enough to carry a lethal payload -- perhaps a one-gram shaped charge to puncture the human cranium."