A step closer to Skynet? Pentagon wants fighting robots to talk to each other

In search of networks of autonomous robots that can talk to each other and deliver "interexchangeable payloads," is the Pentagon bringing us a step closer to Skynet? And what do the Terminator movies suggest about the limits of this technology?

Over at the Department of Defense, they've got lots of robots. Most of them aren't scary and glamorous like the lethal drones you read about all the time. Perhaps the most useful land-based bot is the Tanglefoot, a short, roving critter that sneaks up on improvised explosive devices, then graciously allows itself to be blown up for its trouble. Then there's the Autonomous Platform Demonstrator (APD), a nimble, 9.3-ton, unmanned ground vehicle that can turn on a dime and accelerate to a top speed of 50 mph.

The holy grail is to get to a capability something like this: Two unmanned aircraft are searching for a target over an area of, say, several square miles. One plane notices movement, but the enemy is operating in dense city streets, difficult for a high-altitude device to analyze, much less attack.

So the flying craft signals the relevant position via GPS coordinates to a cooperating APD on the ground. The unmanned car snakes around streets and buildings in pursuit of the goal. Meanwhile, the two planes keep trading information about the location of the target and sending it to the APD, which returns more detailed data about operations as they come in.
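The data flow described above can be sketched in a few lines. This is a purely illustrative toy, not any real military protocol; the message fields and function names are all hypothetical, and the "fusion" is just a confidence-weighted average of the aerial fixes:

```python
import json

def uav_report(uav_id, lat, lon, confidence):
    """A UAV packages a target fix as a message for teammates (hypothetical schema)."""
    return json.dumps({"sender": uav_id, "lat": lat, "lon": lon,
                       "confidence": confidence})

def fuse_fixes(fixes):
    """The ground vehicle fuses several aerial fixes into one
    confidence-weighted position estimate."""
    total = sum(f["confidence"] for f in fixes)
    lat = sum(f["lat"] * f["confidence"] for f in fixes) / total
    lon = sum(f["lon"] * f["confidence"] for f in fixes) / total
    return lat, lon

# Two UAVs report slightly different fixes; the APD weights them by confidence.
msgs = [uav_report("uav1", 33.1201, 44.3902, 0.9),
        uav_report("uav2", 33.1205, 44.3898, 0.6)]
lat, lon = fuse_fixes([json.loads(m) for m in msgs])
print(round(lat, 4), round(lon, 4))  # fused fix, weighted toward the 0.9 report
```

The real systems presumably do far more (track association, sensor models, secure transport), but the essential idea is the same: each vehicle contributes a fix, and the receiver combines them into a single estimate it can act on.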

"This system demonstrates not only the collaborative interoperability possible among dissimilar vehicles, but also the numerous sensing technologies that can be included onboard as interchangeable payloads," explained Lora Weiss of the Military Sensing and Analysis Center (SENSIAC) in a blog post.

Judgment Day?

All very interesting, but like most civilians, our main reference point for these developments is the movies, some of which have been anticipating "interchangeable payload" scenarios for years. The most famous of these is Terminator Salvation, the fourth offering in the Terminator series, which also predicts some of the possible limitations of these gadgets.

An "interexchangeable payload," courtesy of Skynet

As fans of this fictitious saga know, in 2004 the Skynet military system becomes self-aware, concludes that people pose a threat to its existence, and goes ballistic on the human race in an event known as "Judgment Day." It's been fun and games ever since, with Avatar's Sam Worthington replacing the former governor of California as the cyber-protagonist in the latest edition, and Batman's Christian Bale starring as the Hope of Mankind.

At one point in Salvation, Worthington and a couple of kids he's collected around Los Angeles manage to escape a very large Skynet robot. The Tyrannosaurus-like machine is expert at directing flaming blasts at hapless caravans of people, but not so proficient at following them once they get into fast-moving pickup trucks.

That's not an obstacle in this instance, however—the dino-bot just dispatches three sleek motorcycle APDs, which relentlessly pursue our heroes until they can be cornered on a bridge and the adolescents collected for the big Skynet human prisoner camp in San Francisco. Here's an example of interexchangeable payload with a vengeance.

But the movie also illustrates a big downside of this technology—it can be hijacked. Later on in the story, Bale grabs himself a loud, ostentatious boombox, waits for one of the cycle bots to show up, trips it with a low-hanging wire, and converts it into a vehicle for him to ride to the Skynet base up north.

Therein lies the rub—what's to stop miscreants from appropriating these vehicles, a distinct prospect once they run out of fuel? No doubt the Pentagon has already given this problem some thought.

As for the risk of these devices becoming self-aware and turning on us, hopefully that problem is a little further down the road.

A common problem in the history of artificial intelligence is our very limited idea of our own natural intelligence. Uniquely among animals, human beings have the ability to talk and think about things in terms of the symbols represented by words. That ability has led to the common illusion that those words make some direct contact with concrete reality. In fact, those words are just abstractions of the perceptual and operational system that we more or less share with other mammals. We are able to talk to each other about our own feelings and intentions because we share the same underlying physical system that generates those feelings and intentions. There is no way that any computer is going to suddenly acquire that ability, because no computer has a system that works anything like the mammalian mind.

But when the focus is on responding to patterns that are purely defined by external, nonliving, concrete reality, the expectations are different. Even in that area, success has come much more slowly than the AI leaders of the 1960s expected. Still, we have computers today that can drive cars for extended periods under all of the traffic conditions that humans normally deal with, a capability that at least rivals what humans can do with this kind of task. There is no prospect of a computer becoming anything more than the agent of human intentions. But, there is every prospect of a computer becoming more effective than a human being in the activity of direct combat.

Once it has this capability, who will it use it on? A bunch of guerrillas with rocket-propelled grenades in a mountainous landscape thousands of miles away?

Flying drones are useful everywhere. Small ground vehicles are useful in urban environments as scouts and heavy weapons platforms. We're a long way from a completely remote war, but vehicles like this can replace humans in a lot of the jobs that are too dangerous or boring for human soldiers to do properly.

Quote:

Can't see it ever being viable. Any autonomous ground vehicle would be a valuable prize, not only for locals to strip for materials, but to sell on the black market or to foreign governments.

We're willing to spend a lot of money to save soldiers' lives. Even disregarding the emotional impacts and loss of morale a dead soldier causes, each soldier is worth a few million in training and knowledge. That means that a whole lot of small robots costing thousands each can be written off or self-destructed and it would still be cheaper than losing a soldier. I can't see these robots being much use to anyone who captures them. Integrating complex systems and making them useful is a huge task, and without the C&C these things are nothing more than really expensive RC toys.
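The break-even arithmetic behind that argument is easy to make concrete. The dollar figures below are illustrative assumptions only, not official cost data:

```python
# Illustrative assumptions: a trained soldier represents roughly $2,000,000
# in training and accumulated knowledge; each small robot costs about $10,000.
soldier_value = 2_000_000
robot_cost = 10_000

# Number of robots that can be written off before losses exceed one soldier.
break_even = soldier_value // robot_cost
print(break_even)  # 200 expendable robots per soldier saved
```

Even if the per-robot cost were off by an order of magnitude, the trade still strongly favors expendable hardware over human casualties.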

My money is still on the econo-bot that Wall Street trades with. They live in a competitive environment, are constantly fine-tuning themselves, and work hard to deceive each other.

An economic doomsday might be even worse. But wouldn't that be an odd spin on Terminator? Wall Street of the future sends back one trader to impregnate Sarah Connor and create the quant that can outfox the bots?

But, there is every prospect of a computer becoming more effective than a human being in the activity of direct combat.

I think we're still a long way away from a computer deciding when to pull the trigger, at least without a human designating the target. There are just too many chances for horrible accidents leaving it entirely to a computer.

I, Robot, last story in the collection.

Of course, with 3 Laws Safe, they don't create a Doomsday scenario at all.

m bear wrote:

Quote:

Therein lies the rub—what's to stop miscreants from appropriating these vehicles, a distinct prospect once they run out of fuel? No doubt the Pentagon has already given this problem some thought.

Self-destruct systems.

Bingo. But even that is tricky; you wouldn't want to kill surrounding civilians, and you need to make sure the self-destruct mechanism truly destroys every useful bit (pun not intended, but acknowledged).

First, I would be more worried about our current-day scenario of humans controlling autonomous drones than any kind of Skynet situation. A commander able to control scores of drones can distance himself from the people on the receiving end, much like a corporate exec can distance himself from customers, treating them like expendable cattle.

If robots really wanted to rule the world, they wouldn't do so with some archaic means like ... robots. An advanced AI would merely release various biological warfare agents into the air & water, letting them do the dirty work. Or find some way to sterilize us. It wouldn't be something we could fight against with our primitive "smash" instinct. It would be swift, logical and calculated.

Second, you're thinking too large scale with autonomous drones. If cyberpunk has taught us anything, it's that we'll have autonomous insect robots running around, blending into the environment. Some would do surveillance, and others would assassinate. People on the lam would naturally be wary of planes and other vehicular drones. But a small cockroach in a rat-infested part of town? They'd ignore that. They'd run to their safe haven, and we'd let them, as our bug-sized drones infesting the streets monitored them the whole way. Then, when they thought they were safe and sound and went to sleep, one of our bug drones would sneak in and poison the person. It would use a toxin that would either be untraceable (i.e., mimic a natural occurrence, like a heart attack), or that of a naturally toxic creature, like a black widow. So, when folks find the target dead, they would blame it on some will of God or act of nature instead of black ops.

It doesn't even have to be something complicated or (overly) dangerous: just fry the circuit boards and heat any ammo until it explodes.

Quote:

I have more faith in machines than brain-washed, sociopathic meat puppets directed by evil, old men far away in a safe, comfortable place, who kill journalists, those delivering aid, and children.

There aren't enough for this. War sucks, people die. Compared to the wars of the past, what the US wages is clean and friendly. Consider that 150 years ago it was still standard procedure to literally rape and pillage your way across the countryside. We at least make an effort to not fire on civilians, even when the enemy specifically hides among them with the intention of causing innocent deaths.

If there is a real risk of autonomous units becoming a threat to us, why the hell are we pouring so much money into developing them in the first place? The world, it would seem, is running out of wisdom, fast.

On the one hand, removing the prospect of death from our soldiers should allow them to make calm and clear decisions in (remote-controlled) battle. It might stop the sort of friendly fire incidents that occur because some jet-jockey is high on amphetamines after putting in double the normal flight-hours or some marine battalion is getting shot to hell and returns fire in the wrong direction.

On the other hand, giving people lots of power with little personal risk or accountability is generally a recipe for disaster. It could easily end up being a 'banality of evil' situation, where people in offices are just doing their job, following rules and never stop to notice that their actions are killing the wrong people.

There's always a weak point. Or am I being too optimistic? AIs will be hugely influenced by their hugely fallible human creators (for at least the first few generations...)

Eagle Eye was an interesting thought experiment even if it wasn't an amazing film. Likewise, The Matrix Reloaded was painful to watch, but had interesting ideas about the purpose and intention of machines and their interaction with humans and each other.

Plus, some weapon systems must be autonomously triggered if they are to be useful. Humans would be too slow to select targets. Not to mention, giving currently dumb weapons a chance to turn themselves off might be one potential benefit of AI.

Yes, but you can *reason* with another human being. One cannot reason with, nor expect mercy or compassion from, a machine. It does not change its mind to disobey orders, whatever its orders are. It is the simplest-minded soldier, one that knows nothing outside of its mission parameters, and it believes what it is told completely. It has no higher-level understanding; it's a primitive psychopath that has been armed with the latest weapons and armoured to withstand attack.

I'd be more concerned about the enemy capturing one of the interconnected robots and using it as a beachhead into the network, or reverse-engineering its IFF (identification friend-or-foe) system. I assume the military has thought about this issue before due to jet fighters, etc., but if there are a lot of these robots, the number of attack surfaces greatly increases.
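One standard mitigation for a captured node is per-message authentication plus a revocation list, so a compromised unit's traffic can be rejected outright. A minimal sketch, assuming HMAC-signed messages and a shared key registry (the unit names, keys, and message format are all invented for illustration, not any real military protocol):

```python
import hmac
import hashlib

# Hypothetical key registry and revocation list for a small robot network.
KEYS = {"apd-7": b"secret-key-apd7", "uav-3": b"secret-key-uav3"}
REVOKED = set()

def sign(sender, payload):
    """Sender tags its payload with an HMAC-SHA256 over its own key."""
    tag = hmac.new(KEYS[sender], payload, hashlib.sha256).hexdigest()
    return sender, payload, tag

def accept(sender, payload, tag):
    """Receiver rejects unknown or revoked senders, then verifies the tag
    in constant time to avoid timing side channels."""
    if sender not in KEYS or sender in REVOKED:
        return False
    expected = hmac.new(KEYS[sender], payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg = sign("apd-7", b"target at grid 1234")
print(accept(*msg))   # True: valid signed message from a trusted unit
REVOKED.add("apd-7")  # unit reported captured
print(accept(*msg))   # False: traffic from the revoked sender is ignored
```

This only narrows the window, of course: it does nothing against physical disassembly of the captured unit, which is why the self-destruct suggestions elsewhere in the thread keep coming up.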

Yes, but you can *reason* with another human being. One cannot reason with, nor expect mercy or compassion from, a machine. It does not change its mind to disobey orders, whatever its orders are. It is the simplest-minded soldier, one that knows nothing outside of its mission parameters, and it believes what it is told completely. It has no higher-level understanding; it's a primitive psychopath that has been armed with the latest weapons and armoured to withstand attack.

There's not much reasoning that goes on in the battlefield between foes. I assume you mean in a more civil environment?

If there is a real risk of autonomous units becoming a threat to us, why the hell are we pouring so much money into developing them in the first place? The world, it would seem, is running out of wisdom, fast.

There's not. We can't create anything even approximating sentient AI when we try, so the idea that a robot running relatively simple logic loops would get ideas of its own is laughable. As to why: because robots are cheaper than (well-trained) humans, don't generate unrest at home when they "die", and can do a lot of things better than humans.

There will undoubtedly be tragic accidents, but we already have accidents, blue on blue incidents, and civilian casualties with humans doing everything. Ultimately technology should help us to minimize that kind of thing.

You can't reason with soldiers. The distances are too great, and it's not their place. A soldier on the ground doesn't have any more authority or ability to reason with the enemy than a robot would. There are three options. Fight, surrender, or run away.

The need for the US to stay current and develop new concepts for military campaigns is key to our future as a world power. But we must not forget that for every high-tech rover out there, there is a low-tech way of rendering it useless. Have we learned nothing from Terminator Salvation? Man will always beat machine.

As fans of this fictitious saga know, in 2004 the Skynet military system becomes self-aware, concludes that people pose a threat to its existence, and goes ballistic on the human race in an event known as "Judgment Day."

As fans of this fictitious saga know, the date of Judgment Day isn't a constant due to repeated time travel changing the timeline.

Matthew Lasar / Matt writes for Ars Technica about media and technology history, intellectual property, the FCC, and the Internet in general. He teaches United States history and politics at the University of California at Santa Cruz.