The Next Generation Of Drones Will Decide For Themselves Whom To Kill

GlobalPost writes that in the near future, drones will be smarter and more “autonomous,” using algorithms to determine whom to terminate on the ground below. What could go wrong?

In all, a minimum of 2,800 people have died in no fewer than 375 US drone strikes in Pakistan, Yemen and Somalia since 2004, according to a count by the UK Bureau of Investigative Journalism. Many hundreds of those killed were probably innocent bystanders.

Standard procedure is for one crewman to control the drone’s sensors, potentially including daytime and night-vision video cameras and high-resolution radars. The robot does essentially nothing without direct human input. But if a host of government and private research initiatives pan out, the next generation of drones will be more powerful, autonomous and lethal … and their human operators less involved.

“In the future we’re going to see a lot more reasoning put on all these vehicles,” Cummings says. For a machine, “reasoning” means drawing useful conclusions from vast amounts of raw data — say, scanning a bustling village from high overhead and using software algorithms to determine who is an armed militant based on how they look, what they’re carrying and how they’re moving.
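To make concrete how brittle that kind of machine “reasoning” can be, here is a deliberately simplified, hypothetical sketch of feature-threshold classification: scoring a few observed attributes against fixed cut-offs. Every feature name, weight, and threshold below is invented for illustration; no real targeting system works from three booleans, and that is rather the point.

```python
# Hypothetical sketch: rule-based "reasoning" as arbitrary thresholds.
# All features, weights, and cut-offs are invented for illustration.

def threat_score(carrying_long_object: bool,
                 moving_toward_convoy: bool,
                 group_size: int) -> float:
    """Combine observed features into one score via arbitrary weights."""
    score = 0.0
    if carrying_long_object:   # a rifle? a shovel? the sensor can't tell
        score += 0.5
    if moving_toward_convoy:
        score += 0.3
    if group_size >= 3:        # why 3 and not 4? the threshold is a choice
        score += 0.2
    return score

THRESHOLD = 0.7  # an equally arbitrary cut-off

def classify(**features) -> str:
    return "militant" if threat_score(**features) >= THRESHOLD else "bystander"

# A lone farmer carrying a shovel toward a road crew crosses the line:
print(classify(carrying_long_object=True,
               moving_toward_convoy=True,
               group_size=1))  # prints "militant"
```

The sketch is not meant as an implementation, only to show that somewhere in any such pipeline, a human picked the weights and the threshold, and the classification inherits whatever arbitrariness went into those choices.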

Robots that possess the ability to reason might not need human beings to make so many decisions on their behalves. Drones have the potential to be more efficient without the burden of direct human control. “The ability to compute and then act at digital speed is another robotic advantage,” Peter Singer, an analyst with the Brookings Institution in Washington, DC, wrote in his seminal book on robot warfare, Wired for War.

alizardx

I remember a story a former General Dynamics electronics technician told me about intelligent initiative on the part of an air-to-surface missile: in a demonstration for some Pentagon officials, it ignored the tanks that were sitting waiting to be blown up . . . and blew up the bus the officials came in instead.

“What Can Go Wrong” indeed.

The other issue: for military violence to be effective as a method of making a population do one’s will, that population MUST know how to avoid it. Otherwise, if everybody thinks they’re a potential target for arbitrary military violence regardless of what they do or don’t do, a lot of people are going to see no downside to killing any member of that military organization they can find.

Algorithms are as arbitrary as it gets.

ishmael2009

“Algorithms are as arbitrary as it gets”

Absolutely right – with the added attraction of having deniability when it launches an attack on a civilian crowd. “There appears to have been a malfunction and we’re checking the algorithms”. So that’s alright then.

alizardx

Not if the civilian crowd is the congresscritters who voted money for the program. Though that’s a matter of viewpoint. How hard is it going to be to hack a not-too-bright AI, especially if contractors cheaped out on security?

I suspect that these devices are going to be ultimately banned by international treaty after a few extremely embarrassing accidents, some of which may not be all that accidental.

Anarchy Pony

The ultimate tool for imperialism, you can bet that if the Romans had this shit, we’d all be wearing togas.

Bender

Coming to your town soon…

crownofstorms

If humans can’t make Skyrim without bugs, they can’t make drones smart enough to make such decisions.