Long before Alan Moore asked "Who watches the Watchmen?" Radium Age (1904-33) science fiction writers worried whether supermen would rescue us ordinary mortals — or try to dominate us.

This item first appeared on Gawker’s sci-fi blog io9.com, on Feb. 16, 2009.

Dreamed up by American and European SF writers in the late 19th and early 20th centuries — at a time when Lamarckian evolutionary philosophy, which posits a tendency for organisms to become more perfect as they evolve (because such change is needed or wanted, e.g., by “life”), remained popular — many of the first fictional supermen were portrayed by their creators as examples of a more perfect species towards which humankind has supposedly long aimed. Radium Age superman was, that is to say, homo superior, an evolved human whose superiority was mental, physical, or both. [Note: Bergson’s Creative Evolution, which suggested that life evolves in the direction of a greater capacity for action and freedom, was surely another important influence on these authors; but unlike Lamarck, and like Darwin, Bergson’s model of evolution didn’t imagine any predestined endpoint, e.g., a perfect being or absolute freedom.]

Aye, there's the rub: for, as Nietzsche has Zarathustra predict, "Just as the ape to man is a laughingstock or a painful embarrassment, man shall be just that to [superman]." Olaf Stapledon, Philip Wylie, George Bernard Shaw, and other pre-Golden Age SF authors agreed that the superman — whose values and worldview the rest of us can't share, or even comprehend — would seem cold, inhuman, alien. Even, or especially, when he or she is trying to help us.

Well-known Golden Age superman fiction includes: Van Vogt’s The World of Null-A, Sturgeon’s More Than Human, Shiras’s Children of the Atom, and PKD’s The Golden Man. I also enjoy: Andrew Marvell’s Minimum Man, Stapledon’s Sirius and A Man Divided, Jack Williamson’s Darker Than You Think and Dragon’s Island, Jerome Bixby’s “It’s a GOOD Life” (basis of a famous Twilight Zone episode), Poul Anderson’s Brain Wave, Alfred Bester’s The Demolished Man, and Heinlein’s Stranger in a Strange Land and Friday.

The influence of Radium Age supermen remains a powerful one. Consider Adrian Veidt, in the Watchmen movie adaptation. Unlike most superheroes we’ll see on the big screen in the next year or so — e.g., Wolverine, Wonder Woman, Captain America, not to mention Superman himself — Ozymandias, as Veidt was known in his costumed adventurer days, isn’t merely a mutant, a godling, a scientific experiment, or an alien visiting a planet where he’s uniquely able to kick ass. Instead, he’s an Übermensch — a self-overcoming individual, that is to say, who has not only mastered his perfect body but (to quote Peter Cannon… Thunderbolt, the Charlton comic that inspired Moore’s Ozymandias) “harnessed the unused portions of the brain.” Fortunately or unfortunately for humankind (that’s the issue), he has also revalued our human, all too human values.

“I think you could see that it was an evil thing to do and maybe a patronizing thing to do,” Watchmen illustrator and co-creator Dave Gibbons said, when asked about Ozymandias’s catastrophic scheme to save the world. He continued:

I think that probably is one of the worst of his sins, that it’s kind of looking down on the rest of humanity, scorning the rest of humanity. I think for that reason that [Veidt’s former comrade, the masked vigilante] Rorschach, by persisting in his single-minded devotion to what he sees as the truth is … actually painted in a very human manner. At the end of it, your loyalty lies very much with this very flawed, psychopathic human being who knows his faults, who knows the faults of the rest of humanity, rather than somebody like Adrian, who considers himself to be above humanity and who has taken a rather cold and calculating view of everything.

Ozymandias may sound like Tom Cruise (who was interested in the role). What he’s utterly unlike, however, is your typical comic-book superhero — who, despite his or her superhuman abilities, tends to reflect and personify our own, human values. That’s because Superman was an invention of SF’s gung-ho, can-do Golden Age. His Radium Age literary precursors, however, were a different story altogether.

Here's a list — in no particular order — of the 10 most influential and problematic supermen and women from 1904-33. There's a more complete list at the end.

1) HUMPTY, in Olaf Stapledon's Last Men in London (London: Methuen, 1932). Stapledon writes insightfully about homo superior — he's credited with coining the term — in three of the four novels for which he's remembered. I've already written, in this series, about Last and First Men and Odd John, so that leaves Humpty. He's a young "supernormal," a London teenager in whom there is "some promise of a higher type." According to Stapledon, all "submerged supermen" are adolescent misfits: they don't take themselves seriously, they don't want to get ahead, they despise athletics, they're puzzled and bored by religion and patriotism, they don't regard sexuality as shameful, and they remain idealistic long after childhood. They're Lost-Generation-style idlers, in other words. But with really big heads. The novel's protagonist is Paul, a teacher telepathically possessed by a member of an evolved human species (the Last Men) living in the distant future; in a Good Will Hunting-like coda, the brilliant Humpty outlines a plan to found a new human species that will control the world and eliminate or domesticate the "subhuman hordes"… then succumbs to despair.

2) THE WONDER, in J.D. Beresford's The Hampdenshire Wonder (Sidgwick and Jackson, London, 1911). Victor Stott is a giant-headed "supernormal" child mutated in the womb by his parents' desire to have a son born without habits. After surveying science, philosophy, history, literature, and religion, the Wonder says, "So elementary… inchoate… a disjunctive… patchwork." His adult interlocutors are shattered by his statements about the nature of the universe and human progress; his philosophy begins with rejecting "the interposing and utterly false concepts of space and time," and ends with the notion that life and all matter are merely "a disease of the ether." Unable to live without illusions, everyone rejects the Wonder's disenchanting insights; he also makes an enemy of the local clergyman, who may murder him. "He was entirely alone among aliens who were unable to comprehend him, aliens who could not flatter him, whose opinions were valueless to him." Scholars call this the first SF novel of real importance about intelligence; it's the ancestor of Clarke's Childhood's End and Van Vogt's Slan.

3) RALPH, in Hugo Gernsback's Ralph 124C 41+: A Romance of the Year 2660 (Stratford Co.: Boston, 1923. Serialized in Modern Electrics, 1911-12.) Ralph One-to-foresee-for-an-other (get it?) is a great American scientist, and a superior type; the "plus" at the end of his name proves it. But what kind of big-headed superman story is this, anyway? The polar fleece-wearing citizens of solar-powered, geothermally heated New York don't fear or resent Ralph; in fact, they've erected a glass-and-steelonium luxury tower for him in Union Square. Ralph isn't an evil genius; he's something of a bore, and so is his techno-utopian society. Except maybe when his girlfriend is kidnapped by a Martian! Despite the wooden prose and juvenile adventure, this Edisonade is worth a read because of all the technology it accurately predicts: e.g., fluorescent lights, microfilm, radar, television. Also, I suspect that we're on the cusp of seeing the Hypnobioscope, which lets you read the newspapers in your sleep, hit the Apple Store.

4) ZOZIM and ZOO, in George Bernard Shaw's Back to Methuselah: A Metabiological Pentateuch (Constable: London, 1921). Shortly after WWI, the secret of Creative Evolution — in Shaw's formulation, the process by which an organism can will its own entelechy, or self-potentiation — is discovered. By the year 3100 the long-lived elite have developed perfect physiques and advanced mentalities. Zozim and Zoo are Boomer-like superhumans (he's nearly 100, she's 50, but they act like young adults) who assist a sham oracle in overawing the short-livers seeking its advice. It's all part of a plot to colonize and supersede ordinary humans! But like the proto-Yippies they are, Z and Z are upfront about the put-on. Zoo: "[Zozim] has to dress-up in a Druid's robe, and put on a wig and a long false beard, to impress you silly people…. I have no patience with such mummery; but you expect it from us; so I suppose it must be kept up." It's Wild in the Streets meets Highlander. Fun fact: RFK was quoting Shaw's play when he said, "You see things; and you say, 'Why?' But I dream things that never were; and I say, 'Why not?'"

5) THE AMPHIBIAN, in S. Fowler Wright's The Amphibians (Merton Press: London, 1925). Half a million years in the future, a nameless, time-traveling protagonist discovers that the Earth's dominant intelligent species are the Dwellers (humanoid giants with equally giant intellects, self-destructively devoted to science) and the furry Amphibians (not merely mentally and physically more evolved than modern man, they're also morally superior). Though they regard the time traveler as a primitive (hello, Planet of the Apes), one of the Amphibians accompanies him across a Divine Comedy-like landscape of incredible horrors and warfare between monstrous species. It's an allegorical adventure, in which conflicting philosophies of life and morality are debated: the Spock-like Amphibian is dispassionate and sees things from a transcendental perspective, while the Kirk-like Primitive is emotional and impulsive. Fun fact: Everett F. Bleiler calls The World Below (The Amphibians + its sequel, published together in '29) "undoubtedly the major work of science fiction between the early Wells and the moderns."

6) BORK/DE SOTO, in John Taine's Seeds of Life (1951; serialized in Amazing Stories Quarterly, Fall 1931). "We do not like the thought of being relegated to a minor place in the evolutionary scheme; we half expect that a race of geniuses would treat us cruelly, as we treat dumb animals," writes Peter Nicholls. "Taine's Seeds of Life is a prototype of this kind of story." Maybe not, but it's a ripping yarn that may have influenced everything from Flowers for Algernon to Spider-Man. Neils Bork, a pathetic lab technician, attempts suicide via X-rays and is transformed into a supermind in the body of a swarthy Adonis; he renames himself De Soto. ("De Soto was but a partial, accidental anticipation of the more sophisticated and yet more natural race into which time and the secular flux of chance are slowly transforming our kind.") He invents wireless energy transfer devices, secretly planning to use them to bombard humankind with "dysgenic" rays that will devolve unborn children. Then De Soto's own evolution reverses itself: "I never used to think, but saw the inevitable consequences of any pattern of circumstances – no matter how complicated – immediately, like a photograph of the future." He repents of his superioristic ways, and is killed by his own reptilian offspring. Fun fact: John Taine was the pseudonym of CIT mathematician Eric Temple Bell.

7) HUGO, in Philip Wylie's Gladiator (Knopf: New York and London, 1930). Wylie is best known as coauthor of When Worlds Collide, and as a crank(y) essayist obsessed with Soviet nukes and "Mom-ism." The only thing you ever hear about this Radium Age SF classic is that it's "thought to be the book from which 'Superman' was derived." Make no mistake: Siegel & Shuster lifted everything except Superman's cape from Gladiator. Thanks to an experiment by his scientist father (who studies grasshoppers and ants), Hugo Danner is nearly invulnerable, runs faster than a train, leaps higher than trees, and hurls boulders like baseballs. Also, his father gives him Nietzschean advice only, e.g.: "The stronger, the greater, you are, the harder life is for you." So… Hugo creates a fortress of solitude in Colorado, drops out of school, wanders the planet, then joins the French Foreign Legion at the outbreak of WWI ("He felt himself almost the Messiah of war…. He was like a being of steel"). Later, he adopts a secret identity, moves to Metropolis (make that Manhattan), and vows to become "an invisible agent of right — right as best I can see it." Despairing, however, of flawed mortals and their politics — I'd probably call him an anarcho-monarchist — Hugo heads to the Yucatan to start a colony of superbeings, "the new Titans." But then he changes his mind and curses God on a mountaintop instead.

8) EARANI, in Erle Cox's Out of the Silence (E.A. Vidler: Melbourne, 1925; serialized in The [Melbourne] Argus, 1919). Alan Dundas stumbles upon the subterranean repository of a long-vanished, fantastically advanced civilization. He awakens Earani from a state of suspended animation. Her intelligence and abilities are as astounding as her beauty. (Hello, Leeloo in The Fifth Element.) In fact, Earani was the end result of her civilization's worldwide eugenics program, which she intends to put back into practice, as soon as she conquers the planet. ("This world of yours is full of pain and misery…. Is any price too great that buys a perfect and wholesome humanity?" Earani demands of Australia's prime minister. "You hold that to carry out my mission would be a crime. I hold that to fail in doing so would be a crime.") Alan, who falls in love with Earani, is undisturbed by her plan to wipe out the "colored races" and inferior whites; one is not sure what the author himself thinks about this subject. In the end, Earani is backstabbed (literally) by another woman.

9) POLLARD, in Edmond Hamilton's "The Man Who Evolved" (Wonder Stories, April 1931). Dr. John Pollard is a biologist trying to crack one of the two great mysteries of evolution: "What is the cause of evolutionary change?" Having determined that the answer is "the cosmic rays," and having built a cosmic ray-gathering contraption, Pollard invites two friends (one of whom is the narrator) to witness as he investigates the second question: "What is the future course of man's evolution going to be?" Pollard first evolves himself into a superman, with a godlike physique and immense intellectual power; then into a shriveled body supporting an enormous head; then a huge head with almost no body, which plans to "master without a struggle this man-swarming planet, and make it a huge laboratory in which to pursue the experiments that please me"; then into a brain with tentacles, a Dr. Manhattan-like being whose perspective is so cosmic that it no longer cares to dominate the world. ("The only emotion, if such it is, that remains to me still is intellectual curiosity…") Alas, after evolving himself one more time, Pollard devolves back into simple protoplasm. Fun fact: Isaac Asimov described this as "the first science fiction short story… that impressed me so much it stayed in my mind permanently."

10) HERVE, in Noëlle Roger’s Le nouvel Adam (1924; translated as The New Adam, London: Stanley Paul & Co. Ltd., 1926). When Herve Silenrieux, a hapless medical student working for Dr. Flecheyre at Paris’s Institut Pasteur, attempts suicide by shooting himself in the head, Flecheyre implants in him an experimental combination of glands that he believes will stimulate the brain — and in so doing, create an evolved human. Indeed, Herve becomes incredibly brilliant… but wholly logical and unpleasant. For example, he no longer hesitates to kill patients for experimental data. Dismissed from the hospital, Herve develops a death ray, which he tests on villagers in the French countryside; after that, he detonates lumps of lead with the force of an atomic bomb, setting off a series of earthquakes. Attempting to prevent Herve from wiping out the human race, Flecheyre confronts him; Herve’s forcefield detonates the lead bullets in Flecheyre’s pistol, and both men die. Fun fact: Noëlle Roger is the pseudonym of Hélène Dufour Pittard, a Swiss-Canadian journalist.

ALSO OF INTEREST

“Some individuals, it is true, are more special. This is natural selection. It begins as a single individual, born or hatched like every other member of their species, anonymous, seemingly ordinary. Except they’re not. They carry inside them the genetic code that will take their species to the next evolutionary rung. It’s destiny.” — fictional geneticist Mohinder Suresh (Sendhil Ramamurthy), expressing an un-Darwinian view of evolution in the first episode of Heroes, 9/23/06.

* H.G. Wells, The Food of the Gods and How it Came to Earth (1904)
* Alfred William Lawson, Born Again: A Novel (1904)
* Tyman Currio (John Russell Coryell?), Weird and Wonderful Story of Another World (1905-06)
* E. Nesbit (as E. Bland), “The Third Drug” (1908)
* M.P. Shiel, The Isle of Lies (1909)
* J.D. Beresford, The Hampdenshire Wonder (1911)
* James L. Ford, “The Highbrow” (1911), a Smart Set story which can be read as a Radium Age sf story about a mutant species.
* William Greene, “The Savage Strain” (1911)
* Hugo Gernsback, Ralph 124C 41+: A Romance of the Year 2660 (1911-12)

Wonderful stuff. This “radium age” (great term) stuff is truly odd. It really does seem as if WWI is some kind of horizon dividing us from some earlier worldview. I just finished Greg Bear’s City at the End of Time, which is kind of an extended homage to William Hope Hodgson’s The Night Land, and the modern treatment of something that feels resolutely pre-modern was all kinds of jarring. I’d love to hear your thoughts on Hodgson.

PS: Alfred Jarry, on the other hand, seems kind of post-pre-modern. The Supermale is a fascinating muddle, kind of a surprise if you only know Ubu and Pataphysics. It reminded me more of Nathanael West than anything. It would have made an interesting entry to this list–miles from the (naive?) moralizing you see in some of these.

PPS: love the Gladiator cover. “The moral crisis that made a man out of Mac!” Chicks dig supermen!

Rod — it’s so true that Jarry’s “Supermale” seems like it postdates the Radium Age, when in fact it predates it. (NB: Much of what Jarry wrote was influenced by Bergson.) I don’t think we’ve caught up to Jarry yet; he’s one of Hilobrow’s patron saints. We’ll be reading from the final chapter of “Supermale” during our forthcoming podcast.

I mentioned Hodgson’s The Night Land in a post for io9 about the greatest apocalypses of pre-Golden Age SF: http://tinyurl.com/6cdkqq

An io9 reader made a good criticism of my squib on Hodgson; I agree with these points. However, in my defense, if I’d addressed any of these issues in my short item, I wouldn’t have had space to describe the plot. Here’s the comment:

“Oh man, THE NIGHT LAND is so much stranger and better than your description here makes it appear. For one, it’s written in a bizarre pseudo-medieval style, dictated from the future via time-traveling telepathy. Also, this novel is a direct inspiration for Lovecraft’s nihilistic, hopeless vision of how the universe works. We are helpless, surrounded by incredibly powerful entities who, at best, don’t care about us and at worst want us to die horribly. The Watchers, especially, (mile-high creatures who move so slowly their blinking can only be tracked by centuries of observation) is very pre-Lovecraftian. The book is weird, and repetitive, but really worth a read for fans of horror or science fiction.”

One other thing about “Night Land” — the description of how its protagonist checks in with the inhabitants of the Last Redoubt telepathically, sensing great disturbances… seems like this must have been an important source of George Lucas’s inspiration for “the Force.” Plus, the guy is armed with a lightsaber-type weapon.

Like most memes from sf of this period, the “we don’t use 100% of our brain” meme probably first came from popular scientific and psychological discoveries of the era. Maybe from William James, who wrote in “The Energies of Men” (1908) that “We are making use of only a small part of our possible mental and physical resources.”

From Wikipedia: one possible origin is the reserve-energy theories of Harvard psychologists William James and Boris Sidis, who in the 1890s tested the theory in the accelerated raising of child prodigy William Sidis to effect an adulthood IQ of 250-300; thus William James told audiences that people only meet a fraction of their full mental potential — which is a plausible claim.

Our present century may not be quite as perilous for the human race as an ice age in the aftermath of a super-volcano eruption, but the next few decades will pose enormous hurdles that go beyond the climate crisis. The end of the fossil-fuel era, the fragility of the global food web, growing population density, and the spread of pandemics, as well as the emergence of radically transformative bio- and nano­technologies—each of these threatens us with broad disruption or even devastation. And as good as our brains have become at planning ahead, we’re still biased toward looking for near-term, simple threats. Subtle, long-term risks, particularly those involving complex, global processes, remain devilishly hard for us to manage.

But here’s an optimistic scenario for you: if the next several decades are as bad as some of us fear they could be, we can respond, and survive, the way our species has done time and again: by getting smarter. But this time, we don’t have to rely solely on natural evolutionary processes to boost our intelligence. We can do it ourselves.

Most people don’t realize that this process is already under way. In fact, it’s happening all around us, across the full spectrum of how we understand intelligence. It’s visible in the hive mind of the Internet, in the powerful tools for simulation and visualization that are jump-starting new scientific disciplines, and in the development of drugs that some people (myself included) have discovered let them study harder, focus better, and stay awake longer with full clarity. So far, these augmentations have largely been outside of our bodies, but they’re very much part of who we are today: they’re physically separate from us, but we and they are becoming cognitively inseparable. And advances over the next few decades, driven by breakthroughs in genetic engineering and artificial intelligence, will make today’s technologies seem primitive. The nascent jargon of the field describes this as “intelligence augmentation.” I prefer to think of it as “You+.”

Scientists refer to the 12,000 years or so since the last ice age as the Holocene epoch. It encompasses the rise of human civilization and our co-evolution with tools and technologies that allow us to grapple with our physical environment. But if intelligence augmentation has the kind of impact I expect, we may soon have to start thinking of ourselves as living in an entirely new era. The focus of our technological evolution would be less on how we manage and adapt to our physical world, and more on how we manage and adapt to the immense amount of knowledge we’ve created. We can call it the Nöocene epoch, from Pierre Teilhard de Chardin’s concept of the Nöosphere, a collective consciousness created by the deepening interaction of human minds. As that epoch draws closer, the world is becoming a very different place.

…

With every technological step forward, though, has come anxiety about the possibility that technology harms our natural ability to think. These anxieties were given eloquent expression in these pages by Nicholas Carr, whose essay “Is Google Making Us Stupid?” (July/August 2008 Atlantic) argued that the information-dense, hyperlink-rich, spastically churning Internet medium is effectively rewiring our brains, making it harder for us to engage in deep, relaxed contemplation.

Carr’s fears about the impact of wall-to-wall connectivity on the human intellect echo cyber-theorist Linda Stone’s description of “continuous partial attention,” the modern phenomenon of having multiple activities and connections under way simultaneously. We’re becoming so accustomed to interruption that we’re starting to find focusing difficult, even when we’ve achieved a bit of quiet. It’s an induced form of ADD—a “continuous partial attention-deficit disorder,” if you will.

There’s also just more information out there—because unlike with previous information media, with the Internet, creating material is nearly as easy as consuming it. And it’s easy to mistake more voices for more noise. In reality, though, the proliferation of diverse voices may actually improve our overall ability to think. In Everything Bad Is Good for You, Steven Johnson argues that the increasing complexity and range of media we engage with have, over the past century, made us smarter, rather than dumber, by providing a form of cognitive calisthenics. Even pulp-television shows and video games have become extraordinarily dense with detail, filled with subtle references to broader subjects, and more open to interactive engagement. They reward the capacity to make connections and to see patterns—precisely the kinds of skills we need for managing an information glut.

Scientists describe these skills as our “fluid intelligence”—the ability to find meaning in confusion and to solve new problems, independent of acquired knowledge. Fluid intelligence doesn’t look much like the capacity to memorize and recite facts, the skills that people have traditionally associated with brainpower. But building it up may improve the capacity to think deeply that Carr and others fear we’re losing for good. And we shouldn’t let the stresses associated with a transition to a new era blind us to that era’s astonishing potential. We swim in an ocean of data, accessible from nearly anywhere, generated by billions of devices. We’re only beginning to explore what we can do with this knowledge-at-a-touch.

Moreover, the technology-induced ADD that’s associated with this new world may be a short-term problem. The trouble isn’t that we have too much information at our fingertips, but that our tools for managing it are still in their infancy. Worries about “information overload” predate the rise of the Web (Alvin Toffler coined the phrase in 1970), and many of the technologies that Carr worries about were developed precisely to help us get some control over a flood of data and ideas. Google isn’t the problem; it’s the beginning of a solution.

***

Yet in one sense, the age of the cyborg and the super-genius has already arrived. It just involves external information and communication devices instead of implants and genetic modification. The bioethicist James Hughes of Trinity College refers to all of this as “exo­cortical technology,” but you can just think of it as “stuff you already own.” Increasingly, we buttress our cognitive functions with our computing systems, no matter that the connections are mediated by simple typing and pointing. These tools enable our brains to do things that would once have been almost unimaginable:

• cross-connected scheduling systems allow anyone to assemble, with a few clicks, a complex, multimodal travel itinerary that would have taken a human travel agent days to create.

If that last example sounds prosaic, it simply reflects how embedded these kinds of augmentation have become. Not much more than a decade ago, such a tool was outrageously impressive—and it destroyed the travel-agent industry.

That industry won’t be the last one to go. Any occupation requiring pattern-matching and the ability to find obscure connections will quickly morph from the domain of experts to that of ordinary people whose intelligence has been augmented by cheap digital tools. Humans won’t be taken out of the loop—in fact, many, many more humans will have the capacity to do something that was once limited to a hermetic priesthood. Intelligence augmentation decreases the need for specialization and increases participatory complexity.

As the digital systems we rely upon become faster, more sophisticated, and (with the usual hiccups) more capable, we’re becoming more sophisticated and capable too. It’s a form of co-evolution: we learn to adapt our thinking and expectations to these digital systems, even as the system designs become more complex and powerful to meet more of our needs—and eventually come to adapt to us.

***

But imagine if social tools like Twitter had a way to learn what kinds of messages you pay attention to, and which ones you discard. Over time, the messages that you don’t really care about might start to fade in the display, while the ones that you do want to see could get brighter. Such attention filters—or focus assistants—are likely to become important parts of how we handle our daily lives. We’ll move from a world of “continuous partial attention” to one we might call “continuous augmented awareness.”
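The fade-and-brighten behavior described above can be reduced to a very small learning loop. Here's a toy sketch of such an attention filter — every name, the exponential-moving-average weighting, and the brightness mapping are invented for illustration, and no real Twitter interface is assumed:

```python
# Toy "attention filter": learn a per-sender engagement score from user
# behavior, then map it to a display brightness (opacity). Illustrative
# only; ALPHA and the 0.2-1.0 brightness range are arbitrary choices.

ALPHA = 0.2  # how quickly the filter adapts to new behavior

def update_score(scores, sender, engaged):
    """Exponential moving average of engagement (1 = read, 0 = discarded)."""
    prev = scores.get(sender, 0.5)  # unknown senders start neutral
    scores[sender] = (1 - ALPHA) * prev + ALPHA * (1.0 if engaged else 0.0)

def brightness(scores, sender):
    """Map a sender's score to a display opacity between 0.2 and 1.0."""
    return 0.2 + 0.8 * scores.get(sender, 0.5)

scores = {}
# The user keeps reading alice's messages and ignoring bob's...
for _ in range(10):
    update_score(scores, "alice", engaged=True)
    update_score(scores, "bob", engaged=False)

# ...so alice's messages brighten toward full opacity while bob's fade.
assert brightness(scores, "alice") > brightness(scores, "bob")
```

The moving average is the simplest possible "memory": recent behavior counts more than old behavior, so a sender you start reading again gradually brightens back up.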

As processor power increases, tools like Twitter may be able to draw on the complex simulations and massive data sets that have unleashed a revolution in science. They could become individualized systems that augment our capacity for planning and foresight, letting us play “what-if” with our life choices: where to live, what to study, maybe even where to go for dinner. Initially crude and clumsy, such a system would get better with more data and more experience; just as important, we’d get better at asking questions. These systems, perhaps linked to the cameras and microphones in our mobile devices, would eventually be able to pay attention to what we’re doing, and to our habits and language quirks, and learn to interpret our sometimes ambiguous desires. With enough time and complexity, they would be able to make useful suggestions without explicit prompting.

And such systems won’t be working for us alone. Intelligence has a strong social component; for example, we already provide crude cooperative information-filtering for each other. In time, our interactions through the use of such intimate technologies could dovetail with our use of collaborative knowledge systems (such as Wikipedia), to help us not just to build better data sets, but to filter them with greater precision. As our capacity to provide that filter gets faster and richer, it increasingly becomes something akin to collaborative intuition—in which everyone is effectively augmenting everyone else.
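That "crude cooperative information-filtering" can itself be sketched in a few lines: score items a reader hasn't seen by how much their other picks overlap with each fellow reader's. The function and all the sample data below are invented for illustration:

```python
# Toy "collaborative intuition": recommend items to a reader based on
# overlap with other readers' picks. Names and data are made up.

def recommend(picks, reader):
    """Rank unseen items by how many like-minded readers picked them."""
    mine = picks[reader]
    scores = {}
    for other, theirs in picks.items():
        if other == reader:
            continue
        overlap = len(mine & theirs)  # shared picks = similarity weight
        for item in theirs - mine:    # only items the reader hasn't seen
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

picks = {
    "you": {"wiki:Holocene", "wiki:Noosphere"},
    "ana": {"wiki:Holocene", "wiki:Noosphere", "wiki:Teilhard"},
    "ben": {"wiki:Cooking"},
}
# ana shares two picks with you, ben shares none — so ana's extra
# pick outranks ben's.
assert recommend(picks, "you")[0] == "wiki:Teilhard"
```

Real systems weight similarity far more carefully, but the principle is the same: each person's filtering effort quietly sharpens everyone else's.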

***

So what’s life like in a world of brain doping, intuition networks, and the occasional artificial mind?

Banal.

Not from our present perspective, of course. For us, now, looking a generation ahead might seem surreal and dizzying. But remember: people living in, say, 2030 will have lived every moment from now until then—we won’t jump into the future. For someone going from 2009 to 2030 day by day, most of these changes wouldn’t be jarring; instead, they’d be incremental, almost overdetermined, and the occasional surprises would quickly blend into the flow of inevitability.

By 2030, then, we’ll likely have grown accustomed to (and perhaps even complacent about) a world where sophisticated foresight, detailed analysis and insight, and augmented awareness are commonplace. We’ll have developed a better capacity to manage both partial attention and laser-like focus, and be able to slip between the two with ease—perhaps by popping the right pill, or eating the right snack. Sometimes, our augmentation assistants will handle basic interactions on our behalf; that’s okay, though, because we’ll increasingly see those assistants as extensions of ourselves.

The amount of data we’ll have at our fingertips will be staggering, but we’ll finally have gotten over the notion that accumulated information alone is a hallmark of intelligence. The power of all of this knowledge will come from its ability to inform difficult decisions, and to support complex analysis. Most professions will likely use simulation and modeling in their day-to-day work, from political decisions to hairstyle options. In a world of augmented intelligence, we will have a far greater appreciation of the consequences of our actions.

This doesn’t mean we’ll all come to the same conclusions. We’ll still clash with each other’s emotions, desires, and beliefs. If anything, our arguments will be more intense, buttressed not just by strongly held opinions but by intricate reasoning. People in 2030 will look back aghast at how ridiculously unsubtle the political and cultural disputes of our present were, just as we might today snicker at simplistic advertising from a generation ago.

Conversely, the debates of the 2030s would be remarkable for us to behold. Nuance and multiple layers will characterize even casual disputes; our digital assistants will be there to catch any references we might miss. And all of this will be everyday, banal reality. Today, it sounds mind-boggling; by then, it won’t even merit comment.