The political theorist Fredric Jameson once observed that "it has become easier to imagine the end of the world than the end of capitalism." But what if predatory capitalism finally destroys life on earth? That's the question posed by science fiction writer Ted Chiang, who argues that in "superintelligent AI," Silicon Valley capitalists have "unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own."

In a new essay for BuzzFeed, part of a series about the forces shaping our lives in 2017, the acclaimed author of Stories of Your Life and Others (the collection behind the film Arrival) deconstructs our fear of artificial intelligence, specifically that of tech titans like Tesla founder Elon Musk. For Musk, the real threat is not a malevolent computer program rising up against its creators, like Skynet in the Terminator films, so much as AI destroying humanity by accident. In a recent interview with Vanity Fair, Musk imagined a mechanized strawberry picker wiping out the species simply as a means of maximizing its production.

"This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why?" Chiang wonders. "Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies."


In Musk's hypothetical, the destruction of human civilization follows the logic of the free market.

"Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share?" Chiang continues. "[The] strawberry-picking AI does what every tech startup wishes it could do—grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly."

Ultimately, Chiang argues, the catastrophe Musk and others foretell has already arrived in the form of "no-holds-barred capitalism."

"We are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations," Chiang writes. "Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what 'good' means with 'whatever the market decides.'"

For Chiang, the operative word is insight. Our capacity for self-reflection, or the "recognition of one's own condition," is what separates humans from the Googles, Facebooks, and Amazons. And it is this very deficiency that makes these monopolies uniquely dangerous.

"We need for the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior," he concludes. "Just as a superintelligent AI ought to realize that covering the planet in strawberry fields isn’t actually in its or anyone else’s best interests, companies in Silicon Valley need to realize that increasing market share isn’t a good reason to ignore all other considerations."