Elon likens advanced AI to “summoning the demon,” comparing the consequences to unleashing an uncontrollable beast capable of destroying us. It’s good to talk about it now, particularly coming from someone as innovative as Elon, but he’s speaking in generalities. We’re already relying on technology that can get out of control. It’s not that AI will decide humans are a threat, like Skynet, but it’s dangerous enough to shut down power grids, nuclear reactors, and the stock market if the programming breaks.

As the US military increasingly relies on drones and drone technology to fight, we’re entering an era where the next step is giving these machines the power to kill autonomously. Yet it’s not just the obvious control over weapon systems that could potentially turn on us, or accidentally kill based on faulty programming, but the very systems that keep our modern way of life humming. Putting our energy and resources in the hands of an unpredictable beast can be just as catastrophic as a defense system gone mad. The problem is, as with all scientific progress, we’ll dive in first and figure out how to swim later. And we may only learn our lesson after entire cities go dark, a reactor melts down, or a modern economy crashes.

Skynet is the fictional AI from the Terminator movies that, upon becoming sentient, decides that humans are a threat to its existence and, having been put in control of the vast military resources of the United States, launches a full-scale nuclear attack on Russia, knowing it will retaliate in kind.

The devastating war is only the beginning, however. Military-grade machines stalk the burned cities hunting for survivors. In the ’80s and ’90s, James Cameron’s vision was terrifying and plausible, but the technology involved in his doomsday scenario was still far-fetched. The real world simply wasn’t advanced enough. Someday, yes. But when?

The countdown has begun. The DoD is aggressively pursuing advanced technology to replace an inevitably shrinking US ground force. The modern US military is facing dire budget constraints and an ever-changing battlefield that requires smaller, faster, and more nimble forces. The heavy reliance on drones to fight our wars overseas is just a prelude of what’s to come: reliance on machines to do our fighting in the air, on the ground, and at sea.

As I work on the sequel to A Cold Black Wave, I’m exploring a post-apocalyptic world dominated by machines and their inevitable integration with human beings. The idea of a weaponized machine patrolling an occupied city is now an imminent reality, one actively being pursued by the world’s largest military. It brings to light a slew of questions, including those of accountability and morality. If a machine mistakenly blows up a school bus full of kids, who is held responsible? The contractor who built it? The US military as a whole? US drones are controlled by “operators,” who are rarely, if ever, mentioned when a drone strike kills innocent civilians. It’s always the drone. The “drone” accidentally killed people. The drone only acts by the decision of a human, but we’re already conditioned not to care about the consequences or who was responsible.

Meanwhile, DARPA is also looking into using actual soldiers as “surrogates” for bipedal machines, akin to the movie Avatar (after which its project is named). It’s possible that the Geneva Conventions will eventually be amended to ban autonomous “combat” machines in particular, so that a human remains responsible for any associated deaths.

Otherwise, I can see the headlines now: “Glitch in software causes errant missile strike on school bus.”