Big Data. The very concept seems to demand, indeed require, that massive pronouncements and claims of Herculean proportions should follow. Such a concept must inevitably overwhelm previous trends and satisfy even the most unbelievable expectations. But what in truth is the story that proponents of Big Data are (loudly) proclaiming? And is it a fad that's here today, only to be gone tomorrow? Or does it indicate a more deeply embedded belief system, part of a living myth for our time?

To find an answer to this question, I turned to the latest book on the subject, appropriately entitled Big Data, with one of those absolutely headline-grabbing subtitles that is designed to boggle the mind (and presumably make the casual observer pick up the book and, hopefully, buy it): A Revolution That Will Transform How We Live, Work, and Think. OK, I thought, so what kind of a transformation are we talking about here?

First let me say that the authors come well credentialed. Viktor Mayer-Schönberger teaches at the Oxford Internet Institute at Oxford University and, we are told, is the author of eight books and countless articles. He is a "widely recognized authority" on big data. His co-author, Kenneth Cukier, hails from the upper echelons of journalism: he's the data editor for The Economist and has written for other prominent publications as well, including Foreign Affairs.

This was a good place to start, I thought, to learn about the story of big data and the kind of changes—oops, I mean transformations—that it was inevitably going to produce in our world. The major transformation the authors predict is that soon computer systems will be replacing, or at the very least augmenting, human judgment in countless areas of our lives. The chief reason for this is the enormous amount of data that has recently become available.
Digital technology now gives us easy and cheap access to large amounts of information, frequently collecting it passively, invisibly, and automatically. The result is a major change in the general mindset. People are looking at data to find patterns and correlations rather than setting up hypotheses to prove causality: "The ideal of identifying causal mechanisms is a self-congratulatory illusion; big data overturns this. Yet again we are at a historical impasse where 'god is dead.' That is to say, the certainties that we believed in are once again changing. But this time they are being replaced, ironically, by better evidence."

So there you have it. God is dead, yet again. Only this time the god is the god of the scientific method, of causality. Out with the "why," in with the "what." If Google can identify an outbreak of the H1N1 flu and specify particular areas with significantly large numbers of infections, the authors ask, is there any reason we should worry about why this is occurring in such places, when we already know the what: there's an outbreak of flu, and it is especially heavy in these locations?

We have, my friends, slid into the gentle valley of the "Good Enough." Correlation is good enough for now. It's fast, it's cheap, it's here, let's use it. We'll get around to the why later, maybe, if it's not too complicated and expensive to find out. And here are some of the examples the authors use as proof of the good enough of correlations: "After all, Amazon can recommend the ideal book. Google can rank the most relevant website, Facebook knows our likes, and LinkedIn divines whom we know." Such exaggerated attribution of insight and intuition to computer algorithms is so common these days that it's seldom even called out. That's the transformation, according to the authors, that we have to look forward to.
And behind their predictions lies a sense that the movement toward reliance on the results of big data to understand our world is not just inevitable, but that the data itself, the vast invisible presence in our modern lives, also contains within itself a power and energy of incalculable value and ever-improving predictive powers. They call it "big-data consciousness": "Seeing the world as information, as oceans of data that can be explored at ever greater breadth and depth, offers us a perspective on reality that we did not have before. It is a mental outlook that may penetrate all areas of life. Today we are a numerate society because we presume that the world is understandable with numbers and math. . . . Tomorrow, subsequent generations may have a 'big-data consciousness'—the presumption that there is a quantitative component to all that we do, and that data is indispensable for society to learn from."

And the heroes of this transformation? They are the people who can wield this data well—who can write the algorithms that will move us beyond our superstitions and preconceptions to new insights into the world in which we live. These are the new Galileos of our day because they will be confronting existing institutions and ways of thinking. In a clever turn of what I like to call "The Grandiose Analogy," the authors compare the use of statistics by Billy Beane of Moneyball fame to Galileo's pioneering observations using a telescope to support Copernicus's theory that the Earth was not the center of the universe: "Beane was challenging the dogma of the dugout, just as Galileo's heliocentric views had affronted the authority of the Catholic Church." It's another attempt to elevate by association the comparatively banal practice of putting a winning baseball team together on a shoestring to the level of the world-shattering scientific observation that the earth, and by extension mankind, is not at the center of God's universe after all.

If you can ignore the hyperboles in this book, however (and given the number of them, this is no small challenge), you can come to see the reality of what big data actually is and what kinds of contributions its use might make to our lives. The scientific method isn't going away. The march of science to discover and explain through its best hypotheses at any given time will continue. In fact, the patterns and correlations unearthed by big-data methods may form the basis for new hypotheses and bring us even closer to understanding the "why" of many things to come.

Nonetheless, within some contexts, big data can produce actionable information. In marketing, for example, Amazon can use the knowledge that people who read Civil War histories may also like a particular subset of mystery writers to boost sales through its customer recommendation algorithms. Google's ability to detect flu outbreaks also produces actionable information. The NIH and other medical institutions can take actions based on such findings to make vaccines plentiful in certain areas, produce more vaccines if feasible, prepare hospitals and medical offices for the spike in needs, and publish other public health guidelines.

Still, there are some real problems with heralding the quantification of everything into digitally manipulable form as the answer to myriad issues. The supposition fails to take into account any fundamental issues except those obvious ones involving privacy and surveillance. First of all, there are the insurmountable problems that complex algorithms create. That very complexity produces higher and higher risks for errors in the writing and executing of the code. That same complexity makes it very difficult to judge whether the results reflect reality. The very fact that such algorithms may challenge our intuition makes it difficult to validate their results without an understanding of the "why," or even a sense of the assumptions and content of the algorithms themselves. Statistics can be powerful tools, but there was also a wonderful book called How To Lie with Statistics that came out nearly sixty years ago and is no doubt still relevant today. The authors of Big Data claim that knowledge and experience may not be so important in the big data world: "When you are stuffed silly with data, you can tap that instead, and to greater effect. Thus those who can analyze big data may see past superstitions and conventional thinking not because they're smart, but because they have the data."
The authors also suggest that a special team of "algorithmists" could oversee all the algorithms to ensure that they do not invade the privacy of individuals or cross other boundaries. I'm afraid Mayer-Schönberger and Cukier really ought to talk to the SEC about Wall Street and its algorithms to see how well that's been working out!

Finally, the proponents of big data want to discount intuition, common sense, experience, knowledge, insight, and even serendipity and ingenuity, never mind wisdom. In their quest to elevate the digitalization of everything, they neglect those very qualities, qualities which cannot be digitized. As Einstein once famously reminded us: "Not everything that can be counted counts, and not everything that counts can be counted."

There’s a semi-apocryphal story about Norbert Wiener, the brilliant, visionary MIT mathematician. It is said that he used to walk around the halls of the campus with his eyes closed and a finger on the wall to ensure that he did not lose his way. One day he was traveling what is fondly known as the “Infinite Corridor,” which stretches 825 feet from the main lobby of MIT’s central building, west to east, through five major buildings housing classrooms and offices. One of the classrooms in session happened to have its door open, and Norbert Wiener simply entered the classroom, walked completely around its perimeter, and went out the door again as he made his way toward his destination—to the silent amazement (and amusement) of the professor as well as his students.

Recently the New York Times published an excerpt from a long-lost article that Norbert Wiener wrote in 1949. Originally solicited by the oddball Sunday Times editor Lester Markel, it was mysteriously either lost by Markel or abandoned by Wiener, or both. In any event, a researcher recently found the article among Wiener’s papers at the MIT archives. In the piece Wiener speculated about “what the ultimate machine age is likely to be.” He expounded on future automated systems well beyond what then existed, and on smart computers and smart gauges that would integrate one machine with another in various manufacturing processes. Although he did not foresee the economic shift in the value of information versus manufacturing, the revolution he did envision was profound and his predictions dire: “These new machines have a great capacity for upsetting the present basis of industry, and for reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price. If we combine our machine-potentials of a factory with the valuation of human beings on which our present factory system is based, we are in for an industrial revolution of unmitigated cruelty. . . . Moreover if we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us. In short, it is only a humanity which is capable of awe, which will also be capable of controlling the new potentials which we are opening for ourselves.
We can be humble and live a good life with the aid of the machines, or we can be arrogant and die.”

Would that our writers and our thinkers and our leaders of corporations today, instead of blithely hailing the onslaught of robots and marveling at increased productivity and the brilliance of our technology, had some of the compassion and wisdom that Wiener possessed in 1949.

Why is digital technology so exciting? Why is all this fast change that technology brings always so inevitable and wonderful? There’s an eerily similar narrative that so many books follow these days as they recount the breakthroughs in technology and how they are changing our culture. Written by highly credentialed and respected scholars and technology writers, they all seem to begin by announcing a revolution that is taking place, a level of change that will dramatically (and of course rapidly) affect how we live, how we think, how we interact, or sometimes all of the above. The books present many studies and discuss their ramifications to support their theses, all of which are pretty rational and often good food for thought. So far, so good.

The problem with these books really appears toward the end of the works. There the authors somehow feel compelled to predict at length how some particular aspect of digital technology will inevitably transform various aspects of our lives in drastic ways, some of which may enhance our lives and some of which may simply make things more complex, or more artificial, or more alienated than they already are. In most instances, one is left thinking that writing the book got the authors so enmeshed in their own material that they ventured over the edge and into some great unknown by the end of the work. Maybe all these authors/editors need to do is just lop off the last 40 pages.

To show the typical trajectory of such books, let’s take a look at Infinite Reality: Avatars, Eternal Life, New Worlds, and the Dawn of the Virtual Revolution by Jeremy Bailenson, founding director of Stanford’s Virtual Human Interaction Lab, and Jim Blascovich, Distinguished Professor of Psychology at UC Santa Barbara.

First there’s the revolutionary thesis: “We sit on the cusp of a new world fraught with astonishing possibility, potential, and peril as people shift from face-to-face to virtual interaction. If by ‘virtual’ one means ‘online,’ then nearly a third of the world’s population is doing so already.”

Then they announce the intent of the book: “In this book, we provide an account of how virtual reality is changing human nature, societies, and cultures as we know them. Our goal is to familiarize readers with the pros and cons of the brave new world in which we live. We explore notions of consciousness, perception, neuroscience, media technology, social interaction, and culture writ large, as they pertain to virtual reality—and vice versa.”

Blascovich and Bailenson are genuinely excited because they see a new frontier of opportunity for the behavioral sciences based on all the data that is becoming available through online social interactions. While the environment may be virtual, the behavior of the individuals is “real.” And virtual interactions can change individual behavior in the real world as well as people’s sense of themselves. Virtual experiences change how people make important decisions and how they react viscerally to real-life situations. Examples of studies and analyses of various data sets take up the majority of the volume and are meant to convince the reader that there is indeed a revolution underway.

Some of their narratives and analyses are quite interesting and could have some meaningful implications for understanding how and why humans behave in the ways that they do and how therapists can use virtual reality software of various sorts, including explicitly targeted exercises and games, to help people change negative behavior patterns or improve their abilities or performance in some areas.

It’s in the final portion of the book, however, that the real problem arises. And it occurs in an area where so many technology writers get into hot water: predicting the future. The authors paint a series of semi-plausible scenarios for various activities, from market research to legal trial evidence to surgical training, as well as physical therapy, airplane-pilot training, even virtual vacations. Some of these things are well on the way to becoming part of our lives (although I myself will probably resist virtual tourism to my dying day).

Then Bailenson and Blascovich take another small leap into the more distant future that really puts them over the edge. First they discuss the “promise” of avatars, which they say will become “perceptually indistinguishable” from their real counterparts. People will be able to interact with their avatars automatically, without the need for hardware or even voice commands, so that they may be unaware that they are actually incorporating an avatar into their body. It will be an experience akin to wearing contact lenses. The authors also posit that avatars will “walk among us,” so that we might not even realize that the figure approaching us is actually an avatar. This kind of speculation seems to seduce the authors into a discussion of how avatars will change close relationships (which inevitably in such works always seems to devolve into a discussion of virtual sex). Equally predictable and mundane are the applications of virtual reality to the domains of religion and education. A couple of examples are enough to illustrate how silly the claims about the future can get.

On religion, the authors write that having a virtual reality experience of Moses’ parting of the Red Sea, including the ability to smell the flopping fishes, would deepen our understanding of the miracle. Frankly, I prefer the Charlton Heston version myself. (And in any event, the land is dry when the Israelites walk through the path, according to the Bible.)

On education, things are even worse when you consider that one of the authors is a psychologist. The authors envision a “virtual tutoring system that will combine virtual reality, nanotech, and artificial intelligence” capabilities and provide the most complete educational experience anyone could hope for. The salient feature of this transition from “physical to digital” learning environments seems to be the elimination of the notably dull textbook in favor of options such as movies and virtual reality programs. This means the children wouldn’t have to read anything at all—a great improvement, as any psychologist will tell you, for the development of the brain and for the learning process itself.

With improvements such as these one can only hope that the virtual revolution is just that—virtual, that is, not for real.

Steven Johnson wants us to believe. In his new book, Future Perfect: The Case for Progress in the Networked Age, Johnson asks us to place our faith in the power of peer-to-peer networks to lead the crusade of progress and solve many of today’s problems. He calls himself and any others who profess this faith “peer progressives,” which he defines as people who live with “the conviction that Wikipedia is just the beginning, that we can learn from its success to build new systems that solve problems in education, governance, local communities, and countless other regions of human experience.”

Johnson claims he is not a “net utopian.” He says he’s not someone who believes any problem can be solved by “just throwing Facebook at it.” And his book offers many sober warnings: peer progressives don’t believe that everything can be solved through loose, informal, decentralized organizations of individuals. The Internet isn’t a panacea, he states, but it is a powerful role model for solving problems. At bottom, however, the book and the author himself really seem to be of two minds on the topic—because Johnson really does want us to believe in peer progressivism. His book concludes with what amounts to a call to arms: “We know a whole world of pressing social problems can be improved by peer networks, digital or analog, local or global, animated by those core values of participation, equality, and diversity. That is a future worth looking forward to. Now is the time to invent it.” I’d say that Johnson’s a bit more of a utopian than he’d like to admit.

Future Perfect, like so many other books that either evangelize or renounce the glories of our Internet age, tells stories that reinterpret narratives of our time. The reinterpretations are designed to line up contemporary narratives with a particular author’s own conceptual framework. You have to watch out for these often masterful verbal tricks, however, because distortions may and do occur in this process of reinterpretation.

One would not think, for example, that peer-to-peer networks had anything to do with Chesley Sullenberger’s spectacular landing of an Airbus A320 on the Hudson in January 2009. Hailed as a national hero for saving the lives of all the passengers and crew after the plane collided with a flock of Canada geese, Sullenberger was feted from coast to coast for weeks and even landed himself an invitation to President Obama’s inauguration. But Steven Johnson sees Sullenberger’s story from a different perspective. It is not just the story of an experienced and highly skillful pilot making fast decisions and executing expert actions to avert a fatal crash. The feat was not his accomplishment alone by any means. Instead it was really a feat of collective intelligence and foresight on the part of many groups and organizations. Here’s how Johnson retells the story: “There is no denying Sullenberger’s achievement that day, but the fact is, he was supported by a long history of decisions made by thousands of people over the preceding decades, all of which set up the conditions that made that perfect landing possible. A lesser pilot could have still failed catastrophically in that situation, but as good as Sullenberger was, he was not working alone. . . . The plane survived because a dense network of human intelligence had built a plane designed to withstand exactly this kind of failure.
It was an individual triumph, to be sure, but it was also, crucially, a triumph of collectively shared ideas, corporate innovation, state-funded research, and government regulation.”

Somehow the heroic Chesley Sullenberger gets lost in this hymn to peer progressivism. What Johnson is trying to say is that progress comes from collective efforts, or, in terms of one of the buzzwords of the day, “collective intelligence.” And it’s not that he’s technically wrong in the way that he weaves the narrative. But it is a distinctly different interpretation of the whole incident and the mythology that grew up around “The Miracle on the Hudson.”

It is said that if all you have is a hammer, everything will look like a nail. In Future Perfect, all Johnson has for a role model is the Internet, and as a result everything tends to look like a peer-to-peer network, and all progress is, we are led to believe, the result of “collective intelligence.” Such is the new narrative. But one must ask: in our effort to see everything as connected to everything else, are we losing sight of individual achievement? Or are those of us who value individual achievement just clinging to some outmoded mythology that must make way for all things produced via “collective intelligence”? Perhaps we should consult the wisdom of the crowd about this . . .

The nonstop chatter about the power of the Internet has reached a level of cacophony that is hard to dismiss or even ignore. All sorts of people are fascinated with its ever-growing size, with its ubiquity, with its endless variety, and, of course, with its “promise.” Depending on whom you listen to, the Web promises specific transformative powers, from the spread of democracy to the end of history, from equal access to all information to a repository of the sum of human knowledge that is utterly unfathomable in its size and breadth. The Web’s decentralized structure itself has even become a model for all sorts of peer-to-peer networks, from which “collective intelligence” will emerge to solve many of the world’s thorniest problems. It’s enough to make you think that surely the cultural equivalent of the second coming is at hand.

The Web has indeed captured the general imagination, but what is actually emerging is a common living mythology for our time. A major component of any mythology is power of epic proportions, recounted in larger-than-life stories. Today’s ever-expanding Internet of website nodes offers a story of the power of technology and may well provide a symbol that helps us understand the experience of what it is like to live today.

In ancient times, humans constructed mythological symbols out of their physical environment and their way of life. In hunting and herding societies, animals played key roles in the myths and rituals. In agrarian societies, the planting cycle provided the focus for myths. So it is not surprising that, in a society woven through with various digital technologies that have changed the way we live and work—as well as the way we think and interact with others—technology should find a central place in the myths of our day.

And by myths here I mean living, vital myths, which are neither true nor false but through their symbols speak directly to what it means to experience life at a particular time in history. The famous mythographer Joseph Campbell (1904-1987) spent his life studying, writing, and teaching others about the world of mythologies. Whereas dreams are private myths, Campbell would say, myths are public dreams. Mythologies are really stories that contain archetypal symbols, symbols that have been used in countless inflections throughout the mythologies of the world.

Campbell found remarkably similar and detailed stories of deaths and resurrections, virgin births, heroes’ journeys, and many other images and narratives. Like Jung and many others, Campbell emphasized that these similarities existed and resonated with so many people throughout the ages because myths originate in the unconscious. They are biologically grounded in the psyche, which Campbell defined as “the inward experience of the human body, which is essentially the same in all human beings, with the same organs, the same instincts, the same impulses, the same conflicts, the same fears.”

So one key to the Web’s powerful attraction for many people, as they approach it from different angles with varying interpretations and emphases, is that the web, which is sometimes called a net, is itself an archetypal symbol, one that recurs in other cultures and mythologies as a metaphor for, among other things, interconnectedness. One classic mythical symbol is the Hindu “Net of Indra,” or “Net of Gems.” The Net of Indra is an infinite net that contains a gem at every crossing of one thread with another. Each gem reflects all the other gems. Everything is interrelated, and everything that occurs does so in relation to everything else.

Campbell sees a similar insight in the nineteenth-century philosopher Arthur Schopenhauer’s idea about the shape of an individual’s life. Schopenhauer observed that toward the end of your life you can look back and see a consistent order, a plan, to it. People you seem to have met by chance become important agents in the structure of your life. And you too have served, unintentionally, a similar role in the lives of others, so that one gathers a larger vision of the unfolding of life “like one big symphony, with everything unconsciously structuring everything else.” Schopenhauer wrote that it is as if a single dreamer were dreaming a dream in which all the characters dream as well, so that everything links to everything else. James Joyce developed a similar theme in his final work, Finnegans Wake.

Campbell also told an American Indian story in which the web again plays a central role in conveying the idea of interconnection. An American Indian chief, Chief Seattle, wrote to the President of the United States in 1852 in response to an inquiry from the government about buying tribal lands to accommodate new influxes of immigrants from Europe. The basic theme was one of the interdependence of all of nature: “But how can you buy or sell the sky? The land? The idea is strange to us. If we do not own the freshness of the air and the sparkle of the water, how can you buy them? . . . The earth does not belong to man, man belongs to the earth. . . . Man did not weave the web of life, he is merely a strand in it. Whatever he does to the web, he does to himself.”

Power, light, vastness, interconnection, transcendence beyond the visible—all these characteristics of the web converge in these various images. And this may explain why our electronic web—with its pulsing light, expanding toward some unknown and unseen space, connecting countless people, institutions, and sources of information through an endless array of light-emitting nodes—captures the imagination of so many today as they dream the dream of life in the here and now.

As Joseph Campbell always maintained, myths may evoke mystery and awe, their symbols leading us forward, pointing to clues of the spiritual potentialities of human life. While Campbell said it was impossible to predict what the next mythology might be, any more than it is possible to predict what one might dream on any given night, he did believe that any new myth would have to take into account the planet as a whole and include the machines of our modern life. This is the focus of our mythology: the story of the progress of technology, with the computer engineers as our magicians and the web as the source of all knowledge, both our Delphic oracle and the symbol of the interconnected nodes of the human race.

When something is digitized, whether it is some text, an image, a video, or a series of sounds, it is broken down into a language made up of just ones and zeros, the universal language known as the binary code of electronic communications. Each letter becomes a series of digits. Every image first becomes a series of pixels, each of which is then translated into a series of digits. In the end the whole audio-visual world can be reduced to an endless series of ones and zeros, and we are swept down a rabbit hole where everything becomes “content,” separated from its forms and often from its context as well. This is the world in which mash-ups are considered high art, and it is also the world in which data, information, and knowledge are jumbled together, morphing into undifferentiated instantiations of the same “content.”

Digitization is the great leveler of meaning and value in our time. It can make entities seem both discrete and connected at the same time. If I search on Google for “paradise,” the first thing that appears will be an advertisement for the Paradise Rock Club on Commonwealth Avenue in Boston (since I live in the environs), followed by bakeries, a small town in Michigan, pictures of tropical islands, and innumerable stores and restaurants that have adopted the popular name. Occasional Wikipedia entries are scattered about, alluding to another world.
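The leveling described above begins with a purely mechanical step. As a minimal sketch (in Python, with illustrative function names of my own invention), here is how a letter and a grayscale pixel alike collapse into the same undifferentiated stream of ones and zeros:

```python
def text_to_bits(text: str) -> str:
    # Each character becomes one or more bytes (UTF-8),
    # and each byte becomes eight binary digits.
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def pixels_to_bits(pixels) -> str:
    # Each grayscale pixel value (0-255) likewise becomes
    # eight binary digits.
    return "".join(f"{p:08b}" for p in pixels)

# The letter "A" and a two-pixel image end up in the same currency of bits.
print(text_to_bits("A"))         # 01000001
print(pixels_to_bits([0, 255]))  # 0000000011111111
```

Once everything is bits, a Dante canto and a rock-club advertisement really are, at this level, the same kind of thing; the distinctions of meaning and value that matter here live entirely above this layer.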

It is only in the middle of the fifth page (does anyone ever go that deeply into a search?) that I finally come across what I was really after: information about Dante’s epic poem Paradise. What’s more, except occasionally for the first entry, all the results appear in the same format, accompanied by descriptions of roughly equal length. Rock clubs, tropical islands, and world-class masterpieces—all appear of equal weight when sorted by search engines such as Google or Bing. (Admittedly, Dante’s work appears closer to the top if one searches on “paradiso.”) In the world of “Content” (and let’s not forget “Big Data”), life does indeed seem to be getting, as Alice might observe, “curiouser and curiouser,” by the day.

“A functioning mythological symbol I have defined as ‘an energy-evoking and -directing sign.’ . . . Their messages are addressed not to the brain, to be interpreted there and passed on; but directly to the nerves, the glands, the blood, and the sympathetic nervous system. . . . The living mythological symbol . . . is an image that hits one where it counts. [It] talks directly to the feeling system and immediately elicits a response, after which the brain may come along with its interesting comments.” (Joseph Campbell, Myths To Live By)

The Sistine Chapel, T.S. Eliot, Manet, Thomas Mann, Bernini, Shakespeare—pick your own favorites. But anyone who has deeply experienced a great work of art understands the power of symbols to move us and even change the way we feel, the way we think, the way we live. Like many others, Joseph Campbell found a lot of similarities between how living mythological symbols work and how artistic symbols affect the individual. He considered both myths and art products of the imagination, producing symbols that arise from the unconscious and communicate their emotive power to the unconscious minds of others. Although he did not believe that our society (in the latter decades of the twentieth century) had any powerful common mythology (things were changing too fast, he thought, for one to develop), I’m not sure he would believe the same to be true today. Now it seems as though the very intense pace of change has itself become a common part of the story of our lives. Images such as the web itself and the multiprocessor have captured the imagination of many.
Is it possible that we are in fact developing a new mythology for our time? In order to see whether functional mythological symbols are in fact developing at all today, it seemed best first to understand fully what Campbell meant by a functioning myth and then to see whether today’s leading neuroscientists had discovered any new insights into the process through which a symbol affects the individual mind. Nobel laureate Eric Kandel’s latest book, The Age of Insight: The Quest to Understand the Unconscious in Art, Mind, and Brain, seemed a promising place to start.

Kandel chose to study the Viennese expressionist artists at the turn of the last century in order to explore the interplay of art, mind, and the unconscious. He sees a direct correlation between earlier art forms that tried to arouse religious emotions and more modern art that attempts to tap into the unconscious realm of emotions by using primitive (or universal) archetypes in a similar way. Kandel notes that since the earliest cave paintings thirty thousand years ago, virtually all groups of human beings have created images. So even though story-telling and the visual and musical arts do not seem to be necessary for survival, they have been a persistent part of human life from the start. Art, Kandel argues, creates a process that can produce “an Aha! moment, the sudden recognition that we have seen . . . the truth underlying both the beauty and the ugliness depicted by the artist.” “A great work of art,” he continues, “enables us to experience a deep pleasure that is at once unconscious yet capable of triggering conscious feelings.”

Kandel admits that the real challenges of understanding how the mind works in biological terms remain the crucial goals for neuroscientists in the twenty-first century. The current science of mind cannot explain much about the aesthetic experience, although scientists do distinguish between how the unconscious mind responds to an image, which they say produces “emotions,” and how the conscious mind responds, which they say generates “feelings.” This distinction dovetails with how Campbell described the experience of a functioning mythological symbol, which is initially experienced viscerally and then perhaps later interpreted linguistically by the conscious brain and placed within a cultural and historical context.
The similarities in the processes of experiencing myths and art also help us see why Campbell contended that in the future it would be the artists who would create the myths and keep them alive: “The function of the artist,” Campbell told Bill Moyers during their series of conversations known as The Power of Myth, “is the mythologization of the environment and the world,” for it is the artist “whose ears are open to the song of the universe.”

“Our problem today is that we are not well acquainted with the literature of the spirit. We’re interested in the news of the day and the problems of the hour.” Joseph Campbell, The Power of Myth (1988)

Some of the best cultural observers of the late twentieth century discerned the initial impact of digital computers on our society and tried to remind anyone who would listen of the dangers. Their thoughts help us remember what’s central to living a full human life in this world full of shiny, wonderful gadgets and always, always, the next new thing. Joseph Campbell’s life spanned a good part of the twentieth century. Born in 1904, this renowned expert in world mythology lived through two world wars, the Depression, the dropping of the atomic bomb, Vietnam, the domestic mess of the sixties, and the relentless encroachment of machines, first the mechanical ones and then the electronic ones, before his death in 1987. Late in life he spent many hours in interviews with Bill Moyers, the cream of which eventually became The Power of Myth, a highly popular, deeply interesting set of interchanges gleaned from those conversations that aired on PBS soon after Campbell’s death. Subsequently, the entire set of conversations appeared in book form under the same title.

Although Campbell believed that we live in a demythologized world, he found that students around the country were attracted to his lectures in large numbers, mostly, he speculated, because mythology provided messages unlike what ordinary course work at colleges and universities offered in his day. Myths are “stories about the wisdom of life. . . . What we’re learning in our schools is not the wisdom of life. We’re learning technologies, we’re getting information. There’s a reluctance on the part of faculties to indicate the life values of their subjects.”

One major reason for this was increasing specialization, something that has only intensified in the twenty-first century. Campbell pointed out that specialization necessarily limits the field in which one considers any problem and tends to eliminate the life values, especially the human and cultural aspects of any specific issue. Generalists, on the other hand, have the advantage of a broader perspective and the ability to make more complex associations and perhaps gain deeper insights as well. They can take something learned in one specialty and relate it to something learned in a different specialty. By doing so, they can discover similar patterns, contradictions, or discontinuities that aren’t apparent to one who specializes in a narrow field.

Growing specialization and a greater focus on the literal, factual level of life, “the news of the day and the problems of the hour,” have only become more commonplace since the 1980s. Information technologies, with their data gluts, information overloads, knowledge “management,” and, most recently, big data, have put an enormous emphasis on the technologies themselves and have changed the pursuit of knowledge into a process of learning how to access the information one might need at some point in the future. As a result, the continuum of data, information, knowledge, and wisdom has become jumbled, its meanings confused. Some now describe knowledge as “actionable information.” Others, emphasizing dramatic changes in the state of knowledge due to the Internet, claim that the nature of knowledge has changed fundamentally. Knowledge now resides in networks, they maintain; it can’t possibly reside in an individual’s head.
In fact, knowledge is probably, in David Weinberger’s words, “too big to know.” As for wisdom, many today seem to equate it with the consensus of a crowd or, even worse, the dynamics of the marketplace.

Like Joseph Campbell, the journalist and medical researcher Norman Cousins lived through the bulk of the twentieth century and observed the onslaught of technology with similar ambivalence and prescience. The essential problem of man in a computerized age, he wrote in “The Poet and the Computer” (1990), isn’t any different than it was in previous times. “That problem is not solely how to be more productive, more comfortable, more content, but how to be more sensitive, more sensible, more proportionate, more alive. The computer makes possible a phenomenal leap in human proficiency . . . But the question persists and indeed grows whether the computer makes it easier or harder for human beings to know who they really are, to identify their real problems, to respond more fully to beauty, to place adequate value on life, and to make their world safer than it now is.” Computers as electronic brains can help enormously in vital research of many sorts, Cousins wrote. “But they can’t eliminate the foolishness and decay that come from the unexamined life. Nor do they connect a man to the things he has to be connected to—the reality of pain in others; the possibilities of creative growth in himself; the memory of the race; and the rights of the next generation.” These things matter, Cousins went on to say, because in the computer age “there may be a tendency to mistake data for wisdom, just as there is a tendency to confuse logic with values, and intelligence with insight.” All of which makes this bright and shiny present and that enchanting next new thing seem quite ephemeral, even trivial, in comparison to the really exciting journey of life and the challenge of how to live it fully in the midst of—and perhaps in spite of—all our digital machines.

One day, Carl Jung wrote in his memoir, he suddenly realized that, although he had written extensively about myths and personal transformations, he did not know what myth he himself was living by: “I took it upon myself to get to know ‘my’ myth, and I regarded this as the task of tasks.” In What Technology Wants, former Wired editor and technology writer Kevin Kelly takes on a similar job. He went on his own quest, spending seven years reading and talking to others about what he considers the central personal challenge of our time: how to understand the “essence” of modern technology and find the appropriate personal relationship to it. What Kelly actually discovered was his own myth, the story that he, like many others, grapples with today about the technology that pervades our lives and how to live with it. Kelly calls the multitude of technologies that surround us and interact with each other the “technium.” For Kelly, the technium has a life of its own. Because of the countless feedback loops and complex interactions that exist in and between various technologies today, the technium, he claims, has become a sentient, autonomous entity. It represents “the greater, global, massively interconnected system of technology vibrating around us.” More than a set of technologies, the technium has become “a self-reinforcing system of creation,” from which new perspectives, relationships, and influences “emerge.”

What should we make of such large claims? To put this theory into perspective, I like to place it within the context of Joseph Campbell’s work on the history of world mythologies. He observed that myths are archetypal stories about the common experiences human beings share. As the stories accumulate, they become a symbolic system that expresses the human condition of a certain time. The images of any given system are drawn from the immediate environment. Thus when a people roam the land in a hunting culture, as the American Plains Indians did, they create myths and rituals concerning the animals; for the Plains Indians, these centered on the buffalo. In an agrarian culture, the myths center on the earth, on seeds, on planting, growing, and harvesting as symbols of birth, life, death, and renewal. Kelly, finding our modern world permeated with machines and their technologies, focuses on the story of those technologies and our relationship to them.

Campbell observed that even in the 1980s machines were finding their way into our mythology. He pointed out that Star Wars explores the problem of whether the machine is going to dominate humanity or serve it. In fact, Campbell praised Star Wars as a story of mythic proportion that said “technology is not going to save us. Our computers, our tools, our machines are not enough. We have to rely on our intuition, our true being.” This is the message Obi-Wan Kenobi gives Luke when he tells him to turn off his computer and use the force he has within. Campbell believed we needed new myths for modern times, and he thought it would have to be the poets and visionaries who would devise them by listening to the song of the universe and creating new metaphors to express it. “Humanity,” as Campbell often reminded his readers and students, “comes not from the machine but from the heart.”

With his vision of the technium, Kevin Kelly offers a different interpretation of our current state of affairs. Through his own quest, he says, he has learned to listen to the machines of technology for enlightenment. “Seeing our world through technology’s eyes has, for me, illuminated its larger purpose.” Technology, he finds, is a much larger force than we had previously imagined. It is as large as nature itself and our response to it should be similar to how people have traditionally responded to nature. While in the past people have looked to nature for enlightenment, now they should look to the technium: “We can see more of God in a cell phone than in a tree frog,” Kelly submits. What’s more, Kelly argues, humans have less and less influence over the collective force of technologies, whose power he traces back to the beginning of the universe: “It follows its own momentum begun at the big bang.” In positing the technium and describing what technology “wants,” Kelly is in effect forging a new myth for our age: Technology is a unifying, evolving entity ever increasing in its power and reach. “Technology is stitching together all the minds of the living, wrapping the planet in a vibrating cloak of electronic nerves, entire continents of machines conversing with one another, the whole aggregation watching itself through a million cameras posted daily. How can this not stir that organ in us that is sensitive to something larger than ourselves?”

Joseph Campbell observed that all living myths, myths, that is, that speak to the common human condition at a certain period of time, have one thing in common: They assume some kind of unity that transcends the reality of what we observe in our lives, a unity that connects all: In the transcendent reality, “everything links and accords with everything else.” Kelly’s quest and his illumination are yet another example of humanity’s quest to envision something larger than ourselves. Even if we actually don’t call it something sacred, the attitude of worship nonetheless remains. It certainly emerges very strongly in What Technology Wants.

In the latest Harper’s Magazine the physicist and novelist Alan Lightman has an essay on “Our Place in the Universe: Face to Face with the Infinite.” The problem as he sees it is that our science is discovering a larger and larger cosmos. Professional astronomers such as Garth Illingworth at the University of California at Santa Cruz have used images from the Hubble Space Telescope to view galaxies so far away that their light has been traveling over 13 billion years to reach us. The distance from Earth adds up to about 100,000,000,000,000,000,000,000 miles. Lightman questions whether we as human beings can actually comprehend such enormous expanses of distance and time. “Science has vastly expanded the scale of our cosmos,” he writes, “but our emotional reality is still limited by what we can touch with our bodies in the time span of our lives.” He wonders whether Illingworth and other astronomers can feel connected to this huge cosmic terrain: “Or are such things instead digitized abstractions, silent and untouchable, akin to us only in their makeup of atoms and molecules?” In other words, can we only comprehend and feel we are a part of the same reality with the cosmos when we reduce it all through physics to basic particles?

Lightman quantifies our existence within the cosmos in another way, one that makes us seem like a random, insignificant detail in the general cosmological scheme of things: the totality of living matter on Earth—everything from human beings to the scum floating on a pond—accounts for 0.00000001 percent of the mass of the planet, and, based on the best current research on the potential for life-sustaining environments elsewhere in the universe, living matter amounts to about 0.000000000000001 percent of the mass of the universe. A very small number indeed. Beyond insignificant, in fact. Yet such reductionism doesn’t actually help our understanding of our place in the cosmos. It’s more relevant, Lightman implies, to look at our personal experience. The physicist both begins and ends his essay by remembering an experience of “infinity” he once had while sailing on the Aegean Sea. He and his wife found themselves in a place where they could look all around and see neither land nor any other boats. Just water and sky. It was then that he realized some sense of infinity: “a sensation I had not experienced before, accompanied by feelings of awe, fear, sublimity, disorientation, alienation, and disbelief.” And with that moment of insight he understood more about what it means to be human in this vast universe than all the numbers and digital images of faraway galaxies could ever convey to him. So perhaps we can only be truly at home in the universe not by intellectualizing or analyzing it, but by settling right in and fully experiencing it.