Thursday, September 21, 2017

What we didn't get

I recently wrote a fairly well-received Twitter thread about how the cyberpunk sci-fi of the 1980s and early 1990s accurately predicted a lot about our current world. Our modern society is totally wired and connected, but also totally unequal - "the future is here, it's just not evenly distributed", as Gibson was fond of saying. Hackers, cyberwarfare, and online psyops are a regular part of our political and economic life. Billionaires build spaceships and collaborate with the government to spy on the populace, while working-class people live out of shipping crates and drink poison water. Hobbyists are into body modifications and genetic engineering, while labs are researching artificial body parts and brain-computer interfaces. The jetpack is real, but there's only one of it, and it's owned by a rich guy. Artificial intelligences trade stocks and can beat humans at Go, deaf people can hear, libertarians and criminals funnel billions of dollars around the world with untraceable private crypto-money. A meme virus almost as crazy as the one in Snow Crash swept an insane man to the presidency of the United States, and in Texas you can carry a sword on the street like a street samurai in Neuromancer. There are even artificial pop stars and murderous cyborg super-athletes.

We are, roughly, living in the world the cyberpunks envisioned.

This isn't the first time a generation of science fiction writers has managed to envision the future with disturbing accuracy. The early industrial age saw sci-fi writers predict many inventions that would eventually become reality, from air and space travel to submarines, tanks, television, helicopters, videoconferencing, X-rays, radar, robots, and even the atom bomb. There were quite a few misses, as well - no one is going back in time or journeying to the center of the Earth. But overall, early industrial sci-fi writers got the later Industrial Revolution pretty right. And their social predictions were pretty accurate, too - they anticipated consumer societies and high-tech large-scale warfare.

But there have also been eras of sci-fi that mostly got it wrong. Most famously, the mid-20th century was full of visions of starships, interplanetary exploration and colonization, android servitors and flying cars, planet-busting laser cannons, energy too cheap to meter. So far we don't have any of that. As Peter Thiel - one of our modern cyberpunk arch-villains - so memorably put it, "We wanted flying cars, instead we got 140 characters."

What happened? Why did mid-20th-century sci-fi whiff so badly? Why didn't we get the Star Trek future, or the Jetsons future, or the Asimov future?

Two things happened. First, we ran out of theoretical physics. Second, we ran out of energy.

If you watch Star Trek or Star Wars, or read any of the innumerable space operas of the mid-20th century, they all depend on a bunch of fancy physics. Faster-than-light travel, artificial gravity, force fields of various kinds. In 1960, that sort of prediction might have made sense. Humanity had just experienced one of the most amazing sequences of physics advancements ever. In the space of a few short decades, humankind discovered relativity and quantum mechanics, invented the nuclear bomb and nuclear power, and created the X-ray, the laser, superconductors, radar and the space program. The early 20th century was really a physics bonanza, driven in large part by advances in fundamental theory. And in the 1950s and 1960s, those advances still seemed to be going strong, with the development of quantum field theories.

Then it all came to a halt. After the Standard Model was completed in the 1970s, there were no big breakthroughs in fundamental physics. There was a brief period of excitement in the 80s and 90s, when it seemed like string theory was going to unify quantum mechanics and gravity, and propel us into a new era to match the time of Einstein and Bohr and Dirac. But by the 2000s, people were writing pop books about how string theory had failed. Meanwhile, the largest, most expensive particle collider ever built has merely confirmed the theories of the 1970s, leaving little direction for where to go next. Physicists have certainly invented some more cool stuff (quantum teleportation! quantum computers!), but there have been no theoretical breakthroughs that would allow us to cruise from star to star or harness the force of gravity.

The second thing that happened was that we stopped getting better sources of energy. Here is a brief, roughly chronological list of energy sources harnessed by humankind, with their specific energies (usable potential energy per unit mass) listed in units of MJ/kg. Remember that more specific energy (or, alternatively, more energy density) means more energy that you can carry around in your pocket, your car, or your spaceship.

Protein: 16.8
Sugars: 17.0
Fat: 37
Wood: 16.2
Gunpowder: 3.0
Coal: 24.0 - 35.0
TNT: 4.6
Diesel: 48
Kerosene: 42.8
Gasoline: 46.4
Methane: 55.5
Uranium: 80,620,000
Deuterium: 87,900,000
Lithium-ion battery: 0.36 - 0.875

This doesn't tell the whole story, of course, since availability and recoverability are key - to get the energy of protein, you have to kill a deer and eat it, or grow some soybeans, while deposits of coal, gas, and uranium can be dug up out of the ground. Transportability is also important (natural gas is hard to carry around in a car).

But this sequence does show one basic fact: In the industrial age, we got better at carrying energy around with us. And then, at the dawn of the nuclear age, it looked like we were about to get MUCH better at carrying energy around with us. One kilogram of uranium has almost two million times as much energy in it as a kilogram of gasoline. If you could carry that around in a pocket battery, you really might be able to blow up buildings with a handheld laser gun. If you could put that in a spaceship, you might be able to zip to other planets in a couple of days. If you could put that in a car, you can bet that car would fly. You could probably even use it to make a deflector shield.
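The "almost two million times" figure can be checked directly from the specific energies in the list above (a quick sketch; the dictionary keys are just labels chosen here, and the battery figure uses the upper end of the listed range):

```python
# Specific energies from the list above, in MJ/kg.
specific_energy = {
    "gasoline": 46.4,
    "uranium": 80_620_000,
    "deuterium": 87_900_000,
    "li_ion_battery": 0.875,  # upper end of the listed range
}

# How many times more energy per kilogram does uranium carry than gasoline?
ratio = specific_energy["uranium"] / specific_energy["gasoline"]
print(f"{ratio:,.0f}")  # roughly 1.7 million -- "almost two million times"
```

The same arithmetic also shows how far batteries have to go: even a good lithium-ion cell stores tens of times less energy per kilogram than gasoline.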

But you can't carry uranium around in your pocket or your car, because it's too dangerous. First of all, if there were enough uranium to go critical, you'd have a nuclear weapon in your garage. Second, uranium is a horrible deadly poison that can wreak havoc on the environment. No one is going to let you have that. (Incidentally, this is also probably why you don't have a flying car yet - it has too much energy. The people who decide whether to allow flying cars realize that some people would choose to crash those high-energy objects into buildings. Regular cars are dangerous enough!)

Now, you can put uranium on your submarine. And you can put it in your spaceship, though actually channeling the power into propulsion is still a problem that needs some work. But overall, the toxicity of uranium, and the ease with which fission turns into a meltdown, has prevented widespread application of nuclear power. That also holds to some degree for nuclear electricity.

As for fusion power, we never managed to invent that, except for bombs.

So the reason we didn't get the 1960s sci-fi future was twofold. A large part of it was apparently impossible (FTL travel, artificial gravity). And a lot of the stuff that was possible, but relied on very high energy density fuels, was too unsafe for general use. We might still get our androids, and someday in the very far future we might have nuclear-powered spaceships whisking us to Mars or Europa or zero-G habitats somewhere. But you can't have your flying car or your pocket laser cannon, because frankly, you're probably just too much of a jerk to use them responsibly.

So that brings us to another question: What about the most recent era of science fiction? Starting in the mid-to-late 1990s, until maybe around 2010, sci-fi once again embraced some very far-out future stuff. Typical elements (some of which, to be fair, had been occasionally included in the earlier cyberpunk canon) included:

- Strong, human-level (or superhuman) artificial intelligence
- A technological Singularity driven by self-improving AI
- Personality upload
- Molecular nanotechnology

These haven't happened yet, but it's only been a couple of decades since this sort of futurism became popular. Will we eventually get these things?

Unlike faster-than-light travel and artificial gravity, we have no theory telling us that we can't have strong AI or a Singularity or personality upload. (Well, some people have conjectures as to reasons we couldn't, but these aren't solidly proven theories like General Relativity.) But we also don't really have any idea how to start making these things. What we call AI isn't yet a general intelligence, and we have no idea if any general intelligence can be self-improving (or would want to be!). Personality upload requires an understanding of the brain we just don't have. We're inching closer to true nanotech, but it still seems far off.

So there's a possibility that the starry-eyed Singularitan sci-fi of the 00s will simply never come to pass. Like the future of starships and phasers, it might become a sort of pop retrofuture - fodder for fun Hollywood movies, but no longer the kind of thing anyone thinks will really happen. Meanwhile, technological progress might move on in another direction - biotech? - and another savvy generation of Jules Vernes and William Gibsons might emerge to predict where that goes.

Which raises a final question: Is sci-fi least accurate when technological progress is fastest?

Think about it: The biggest sci-fi miss of all time came at the peak of progress, right around World War II. If the Singularitan sci-fi boom turns out to have also been a whiff, it'll line up pretty nicely with the productivity acceleration of the 1990s and 00s. Maybe when a certain kind of technology - energy-intensive transportation and weapons technology, or processing-intensive computing technology - is increasing spectacularly quickly, sci-fi authors get caught up in the rush of that trend, and project it out to infinity and beyond. But maybe it's the authors at the very beginning of a tech boom, before progress in a particular area really kicks into high gear, who are able to see more clearly where the boom will take us. (Of course, demonstrating that empirically would involve controlling for the obvious survivorship bias.)

We'll never know. Nor is this important in any way that I can tell, except for sci-fi fans. But it's certainly fun to think about.

27 comments:

First and biggest (so far): we didn't imagine the internet and web as anything like they are now -- a general utility used by all. (Paul Baran did, though not in SF, and no one read him.) And this was even true in the early 1990s, just before the dam burst.

We didn't imagine pocket or wrist sized computers running Unix (and if we had we'd have imagined them very wrong). No stories from even a few years ago have everyone on the bus or spaceship with their noses buried in the internet.

We mostly didn't imagine sequencing everyone's genome, CRISPR or more generally the rapid advance of genomics. These trends make universal genomic tweakage almost inevitable. We STILL don't (and can't?) imagine where this will take us in 20 years.

Also, arguably, we didn't imagine AI like we have today. SF imagines AI as "people" or at least "Siri". But we've got it embedded everywhere, organizing our photo collection, finding routes for us, and mostly other mundane things that in SF stories (until very recently) are still up to humans.

There *still* aren't any SF books I'm aware of that imagine a society with pervasive embedded AI 20 years better than today. A simple example: everyone would always be operating in the context of all public knowledge about everyone they see.

Which should make us wonder: what is coming in the near future that we don't even know we aren't imagining?

"There *still* aren't any SF books I'm aware of that imagine a society with pervasive embedded AI 20 years better than today. A simple example: everyone would always be operating in the context of all public knowledge about everyone they see."

I know we like to call these "Artificial Intelligence," but is it more accurate to call them "Advanced Algorithms"? Under the hood most, if not all, of these are statistical in nature. Unless you believe that human-style intelligence boils down to some sort of correlation generation, then I think we are still waiting to see the advent of Real AI (c).

That's because there are various different types of AI. Organisers and route finders are neural networks that gradually learn to optimise over time through reinforcement/supervised/unsupervised learning (and whatever newer techniques are being developed currently), whereas advanced algorithms are usually hardcoded solutions that need to be manually changed to accommodate new inputs. Literally speaking, these systems are AI, but to be more specific they're artificial narrow intelligence; they are capable of change but they usually (well, most NNs) specialise in one task.

Also, AI HAS to be statistical in nature because that's just how computers work; if we introduce biology into the mix that's called a cyborg, my mate. If we boil down the workings of the brain, neural synapses are merely the translation of statistics across neurons and the different parts of our neural anatomy.

Popular culture has a lot to answer for in how the general public imagines AI. While "rule-based" stochastic systems like JARVIS from Iron Man, replicants from Blade Runner (eh, to be honest, those are more androids, but I guess they're still ASI because of their logical mind? Or purely biological? Someone who has seen the film recently, please correct me, details of that are a bit fuzzy), and Sonny from I, Robot are fun to imagine, they're the other type of AI we're striving for: artificial super intelligence (though, spoiler alert, we're not quite there yet).

TLDR: These photo collection organisers and route finders /are/ AI (and my dog image classifier sitting on my hard drive at the moment is offended at being called otherwise), but they're ANI and you're imagining ASI.
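The commenters' distinction -- a hardcoded rule that must be edited by hand versus a narrow model that adapts itself to data -- can be illustrated with a toy sketch (a hypothetical example, not from the thread; the function names are made up):

```python
# A hardcoded "advanced algorithm": the threshold is chosen by a human,
# and changing the task means editing the code.
def hardcoded_rule(x):
    return "big" if x > 5.0 else "small"

# A minimal statistical learner: fit one centroid per label from
# labeled examples. Narrow and task-specific, but data-driven.
def train_centroids(examples):
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

# Predict by assigning the label whose learned centroid is nearest.
def predict(centroids, x):
    return min(centroids, key=lambda label: abs(centroids[label] - x))

data = [(1.0, "small"), (2.0, "small"), (8.0, "big"), (9.0, "big")]
model = train_centroids(data)
print(predict(model, 1.5))  # lands near the "small" centroid
print(predict(model, 8.5))  # lands near the "big" centroid
```

Retraining on different data changes the model's behavior without touching the code, which is the sense in which these narrow systems "learn" while remaining nothing like a general intelligence.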

Right about 1970 a bunch of people looked at that spaceflight-laden science fiction-ey future and decided they didn't want any damned part of it. Preserving our fragile environment was far more important. Ending war and feeding the world had a higher priority. We needed a War On Cancer rather than moon colonies. We had to balance the federal budget!

And so on. I sometimes suspect the fact that the Apollo moon landings coincided with the climax of the Vietnam War was a factor -- Apollo had been billed as this big symbolic conflict with communism, and by 1969 most Americans were pretty damned tired of fighting communists.

Anyhow, the 1969-1972 period saw some changes in America. The Federal government drastically cut back on R&D for one thing, and switched much of what funding remained from physics to medicine and biology. (Possibly an inspired choice, given that we'd be finding out about AIDS a decade later.) Manned space programs got cut back a lot -- so much so that we haven't moved humans beyond low Earth orbit for almost half a century.

It wasn't the difficulty of packaging energy that made this inevitable. It was politics.

The real reason manned expansion into space halted is because it was pretty much pointless. The space race was fundamentally a prestige contest between the USA and USSR, and when the United States won, there was no point to do more. Scientific research could be done with much less expensive unmanned probes.

Space colonization is massively expensive and unprofitable; until that changes, there will be no expansion of humans into space.

Endogenous, maybe. What if grander science fiction helps to accelerate technological progress, because it brings us the vision of a more exciting future? Hard to test this, but it's possible that grim, boring science fiction is correlated with pessimism about the future, with lower economic growth the result.

"we have no idea if any general intelligence can be self-improving (or would want to be!)"

I possess general intelligence (because I'm human, in case you weren't sure), and I would like to make myself smarter. I'm certainly not the only one, if the popularity of brain-training games and nootropics is any indication (though I should add that I am very skeptical of both of those, and regard the possibility of a general intelligence that can self-improve as an open question).

Also: does anyone know why existing light aircraft don't count as "flying cars"? Because you can't drive them on highways? I mean, some of them you could, in theory, but it's probably illegal and you wouldn't want to anyway.

Is nuclear power unexpectedly dangerous? It's much safer than classic SF predicted. We don't see three-eyed mutants walking around Hiroshima; we don't see nuclear explosions at reactors (cf. "Blowups Happen" by Robert Heinlein); we've seen only two major meltdowns (three if you count Kyshtym); we didn't see cancer epidemics as a result of the bomb testing in the 1950s; we're no longer faced with an insane great power armed with thousands of nukes (we might be faced with an insane medium power armed with dozens, but that won't produce an On the Beach scenario). Even radioactivity looks less dangerous than we thought.

The only two unexpected anti-nuke phenomena are: 1) the anti-technology movement; and 2) the fact that large capital expenditures were unprofitable in an inflationary environment. Even the anti-technology movement was predicted ("Trends" by Isaac Asimov); the only unexpected feature was the identity of the idiots involved.

Seems to me the biggest use of technical resources over the next century or so will probably be maintaining the status quo. We've got a bunch of technology that works in the conditions we created it in, but as a side effect we have caused climatic change that is going to result in storms the like of which we have never seen, happening in far more places than they ever did before. Between that and increasing aridity in other parts of the world, we're going to need to invest a huge amount of time and effort into hardening our infrastructure and technology against the world we are creating. In terms of 90s sci-fi, I think Peter F. Hamilton called this in his Night's Dawn trilogy, where everyone on Earth ended up living in huge arcologies to protect against the storms outside.

@Ben Moxon Along with sea level rise there also will be massive migration of affected people and attendant stressing of resources. There will be another global war if famine and migration becomes too intense.

I think the best marriage of AI and post-Singularity society is Iain M. Banks's Culture novels. The novels themselves (especially the early ones) are more concerned with the edges of that society and its interactions with pre-Singularity societies, and are an interesting hybrid of old and new SF. The backdrop of the Culture is an interesting conceptual liberal-libertarian view of what society might look like with both sentient AIs and perpetual abundance. The really interesting thing is that it is essentially a planned economy, because there is no need to ration unlimited resources, and it is in some ways the realization of the communist dream. The limits come from the central planners who actually run everything, in the form of the AIs.

FWIW, the "Star Trek" future only appears in the 22nd century with warp drive and first contact with Vulcan. In the Star Trek timeline, the 21st century is pretty horrible. See the DS9 episode "Past Tense," set in 2024, for example. Ironically their mid-90s idea of what technology we'd have in 2024 is worse than what we have in 2017, but the politics aren't far off.

It's interesting that you say this: "no one is going back in time or journeying to the center of the Earth." Perhaps not as our 21st-century selves. On the other hand, science, sensing, the ability to identify the smallest of iotas, genetics, etc. etc., are creating detailed information about the past -- see the new work on Stonehenge, and the ice man, for example -- and about the earth -- see volcanology, our planets, and so on -- and you can easily feel we are traveling to the far past and throughout the earth and our solar system.