Recent appearances on the quiz show Jeopardy! by the IBM supercomputer dubbed Watson have brought questions of artificial vs. human intelligence to the fore in publications as diverse as the Economist and Entertainment Weekly. The silicon beast's ability to parse natural language full of idiom and metaphor (and to phrase its answers in the form of questions) has been truly impressive. But Watson's feats would not have been possible without near-instantaneous access to a vast trove of human knowledge -- stored in its own memory, since contest rules kept it off the Internet. Yet these days that's the part we find less awe-inspiring; for many of us, being able to unearth the answer to nearly any factual question in seconds is something we take for granted. Among its many goals, James Gleick's The Information: A History, a Theory, a Flood aims to explain how we got here.

Gleick has explored the ways scientific and technological advances have altered our collective view of the world in best-sellers such as Chaos: Making a New Science and Faster: The Acceleration of Just About Everything. The Information, though, feels like the book he's been working toward all along -- a history-explaining, paradigm-altering look at the evolution of the human capacity to process data.

The book's first half or so proceeds as a fairly straightforward examination, starting pretty much at the beginning: "The alphabet was invented only once." There are frequent reminders that our current technological revolution isn't the first; early users of written language "found it awkward to keep tenses straight, like voicemail novices ... circa 1980." And there are plenty of cocktail-party nuggets to be gleaned: the first English dictionary, compiled in 1604, did not contain the word "dictionary."

Gleick expertly draws out neglected names and stories from history, assembling what could pass for a squad of mostly unsung info-heroes. Among them are John Napier, the inventor of the logarithm; Charles Babbage, designer of the first mechanical computer; Ada Lovelace, the woman who wrote the first computer program; and Samuel Morse, who invented not the telegraph but the method of encoding its messages. Of course, Alan Turing, Kurt Gödel and Jimmy Wales (the founder of Wikipedia) pop up, too.

If this squad had a leader, it would be Claude Shannon, who, by coining the term "bit" and outlining a mathematical theory of communication in 1948, is, to Gleick, more responsible than any other individual for the information-drenched environment we find ourselves in today. And he knew it, too, saying in 1956, "I think that this present century in a sense will see a great upsurge and development of this whole information business ... collecting ... transmitting ... and, perhaps most important of all, the business of processing it."

Beneath these factoids and stirring accounts of discovery and invention, though, lies a deeper story, one that reinforces the degree to which the tools we use to communicate about existence determine how we perceive it. Spoken language is an abstraction of reality, and writing takes it to another level. Any form of recording or storing information is a kind of time-travel, as the voicemail anecdote above suggests.

Methods of sending information had a similar effect, once they became quick enough. The immediacy of telegraphic (i.e. near-instantaneous) communication forced another recalibration of the sense of time -- folks used to learning of distant events only ex post facto had to adjust to the fact that they were now aware of the current conditions in far-off places, "that this is a fact that now is, and not one that has been," in the words of a contemporary account. And now we can watch Egyptian protesters live on television, or check out a streaming TMZ feed of Lindsay Lohan's latest court appearance.

Gleick's skill as an explicator of counterintuitive concepts makes the chapters on logic, the stuff even most philosophy majors slept through in class, brim with tension. It's quite possible, one realizes, that the most revolutionary statement of the 20th century was Bertrand Russell's paradox, which seems like mere sophistry, but is in fact a consciousness-altering revelation: "S is the set of all sets that are not members of themselves." Is S a member of itself? If it is, then by its own definition it isn't; if it isn't, then it is. Think about it (and for more detail, check out the excellent graphic novel Logicomix).

Things get both more fascinating and more far-out once the 20th century's two key scientific discoveries come into play: quantum mechanics and the structure of DNA. The former, through Heisenberg's uncertainty principle, decrees that we can never simultaneously possess all the information about a given subatomic particle -- pin down its position, and its momentum blurs; the latter reveals that the entire specific structure of any biological entity, ourselves included, is the result of information transmitted on a genetic level.

The equivalence between the bit as the smallest unit of data and as the smallest unit of existence grows clearer and clearer, until Gleick's contention that it's bits all the way down makes a certain sense. If we view the entire universe as literally composed of information, whether in Borges' metaphorical Library of Babel or Google's servers, is that an epistemological cop-out, a self-fulfilling prophecy privileging perception over a somehow deeper reality? Or is it an empowering recognition of the ways in which our minds, our machines, and the very world around us are more closely related than we could have dreamt?