When I interviewed Wired for War author PW Singer last March, he told me that the preconditions for a successful Terminator-type uprising are not in place. As computer development accelerates, however, those preconditions become far more plausible.


So, what are the preconditions, according to Singer?

1. The AI or robot has to have some sense of self-preservation and ambition, to want power or fear the loss of power.


2. The robots have to have eliminated any dependence on humans.

3. Humans have to have omitted failsafe controls, leaving no way to turn the robots or AI off.

4. The robots need to gain these advantages in a way that takes humans by surprise.


At the moment, says Singer, these conditions do not exist. "In the Terminator movies, Skynet gets super intelligence, figures the humans are going to eventually shut it down, and thinks, 'I better strike first.'" However, in today's army, "we're building robots specifically to go off and get killed." He adds, "No one is building them to have a survival instinct—they're actually building them to have the exact opposite."

As for human dependence, robots may do more and more of humanity's dirty work, but they still need the meatbags to handle their dirty laundry. "The Global Hawk drone may be able to take off on its own, fly on its own, but it still needs someone to put that gasoline in there." Still, it's not hard to see how this precondition could eventually be overcome.


The failsafe discussion is surprisingly two-sided. "It seems rather odd that people who grew up watching Terminator in the movie theaters wouldn't think, 'Hmm, maybe we should have a turn-off switch on there.'" On the other hand, "brilliant AI could just figure a way around it." Besides, "we don't want to make the failsafe all that easy, because we don't want a robot that comes up to Bin Laden that he can just shut off by reaching around the back and hitting the switch."

We of course assume that robots will never gain the element of surprise. "You don't get super-intelligent robots without first having semi-super-intelligent robots, and so on. At each one of these stages, someone would push back." The scary thing is, Singer does acknowledge that the exponential growth of super-smart machines may indeed catch us by surprise eventually. "By the end it's happening too quickly for people to see."

No matter which preconditions we deliberately prevent, there is a point on every futurist's timeline where computers become "smarter" than humans in sheer brain capability, and no matter what happens up to that point, the game then changes completely. "In the Terminator movies, Skynet both tricks and coerces people into doing its bidding." How do we stop that from happening?

"Some people say, 'Let's just not work on these systems. If there are so many things coming out of this that are potentially dangerous, why don't we just stop?'" says Singer. "We could do that, as long as we also stop war, capitalism and the human instinct for science and invention." [More from my interview with PW Singer]