A global arms race to make artificial-intelligence-based autonomous weapons is almost sure to occur unless nations can ban the development of such weapons, several scientists warn.

Billionaire entrepreneur Elon Musk, physicist Stephen Hawking and other tech luminaries have signed an open letter warning that a global arms race in artificial intelligence (AI) technology will begin unless the United Nations supports a ban on weapons that humans "have no meaningful control over."

The letter, issued by the Future of Life Institute, is being presented today (July 27) at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," the letter reads, referring to the automatic weapons.

The risks, the signatories say, could be far greater than those posed by nuclear weapons.

Rise of the machines

From self-driving cars to sex bots, more and more of humanity's fundamental tasks are being taken over by robots. The inevitable march of the machines has spurred both utopian and apocalyptic visions of the future. Rogue AI that threatens humanity has featured prominently in science fiction movies such as "The Matrix" and "2001: A Space Odyssey."

But increasingly, these fears aren't just being played out on the silver screen. Artificial-intelligence researchers themselves have voiced concerns over how innovations in the field are being developed. Autonomous AI weapons — such as drone aircraft that could seek out and kill people using a face-recognition algorithm — could arrive within years, not decades, the letter's authors argue.

And while drone fighters could limit battlefield casualties, these autonomous bots could also lower the threshold for initiating conflicts in the first place, the letter states.

In addition, such autonomous weapons could conceivably end up in the hands of almost every military power on Earth, because AI-based killing machines wouldn't require costly or hard-to-obtain materials. It wouldn't be long before assassins, terrorists and other bad actors could purchase them on the black market and use them for nefarious purposes, the scientists wrote.

"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity," the letter states.