Even if autonomous weapons were created for use in "legal" warfare, the letter warns that autonomous weapons could become "the Kalashnikovs of tomorrow" — hijacked by terrorists and used against civilians or in rogue assassinations.

"They're likely to be used not just in wars between countries, but the way Kalashnikovs are used now ... in civil wars," Russell told Tech Insider. "[Kalashnikovs are] used to terrorize populations by warlords and guerrillas. They're used by governments to oppress their own people."

Living in fear of terrorists or governments armed with autonomous, artificially intelligent weapons "would be a life for many human beings that is not something I would wish for anybody," Russell said.

The letter states that, unlike nuclear arms, lethal autonomous weapons systems, or LAWS, would "require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce."

But just how close are we to having usable autonomous weapons? According to Russell, affordable killer robots aren't a distant technology of the future. He wrote in a May 2015 issue of Nature that LAWS could be feasible in just a few years.

For example, Roff writes, Lockheed Martin's long-range anti-ship missile can pick its own target, though for now it is still fired by humans. "The question has been raised whether this particular weapon slides from semi-autonomous to fully autonomous, for it is unclear how (or by whom) the decision is made," she wrote.

According to the New York Times: "Britain, Israel, and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks, or ships without direct human control."

This open letter calling for a ban on autonomous weapons is one of the latest warnings issued by scientists and entrepreneurs about the existential threats superintelligent AI could pose to humanity.

In January, many of the signatories of today's letter signed a call for AI research to remain "robust and beneficial," published by the Future of Life Institute, an organization whose leadership includes prominent AI researchers, other scientists, and even Morgan Freeman.

Jared Adams, the director of media relations at the Defense Advanced Research Projects Agency, told Tech Insider in an email that the Department of Defense "explicitly precludes the use of lethal autonomous systems," citing a 2012 directive.

"The agency is not pursuing any research right now that would permit that kind of autonomy," Adams said. "Right now we're making what are essentially semiautonomous systems because there is a human in the loop."