If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.

True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.


The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.

Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.

The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.

So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”

Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant than ever. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.

The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is, “Remember everything.”

So here's a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What's the exact recipe for your favorite homemade pastry?

Our own brains are brilliant at storing and retrieving information that’s viscerally important to us, like the smile of someone we love or the smell of a food that made us sick, explains Maureen Ritchey, a postdoctoral researcher at U.C.–Davis who specializes in the neuroscience of memory. But they’re prone to bungle abstract details like the title of a book we wanted to read or the errand we were supposed to run on the way home from work.

Notepads, calendars, and Rolodexes were once the preferred tools to fill those gaps. But those technologies were deeply flawed. “Whoever designed the address book totally didn’t think through the cognitive science of how remembering people works,” says Phil Libin, Evernote’s CEO. “The brain does not remember people alphabetically based on their name.” His solution: Evernote Hello, an app that stores photos of people’s faces and records of where and when you met them along with their contact information. If you can’t recall their name later on, you can ask Evernote to instead show you the faces of all the people you met at that conference last month in Indianapolis.

Other Evernote apps let you take notes via text, audio, video, or Web clipping and look them up later by title, date, location, or full text search, so you don't have to remember their titles or file them into categories. A browser widget called Evernote Web Clipper supplements traditional Web search by scanning your personal notes simultaneously when you look something up on Google. Evernote Food specializes in food photos and recipes.

Evernote, of course, is just one example of a cloud-based mobile app that gives you new ways to retrieve useful information that a Google search wouldn't find. SoundHound or Shazam can "listen" to a few seconds of a song playing on the radio and tell you the name of the band, the album, and all the lyrics. Checkmark can sense when you drive past the post office and shoot you an alert to drop off that package you've been carrying around in your trunk.

So where were you on that February night three years ago? If you use a modern email program like Gmail, there's a good chance you can piece it together by calling up your emails from that date. Which of your friends could you crash with or call up for a drink when you visit New York this summer? That's what Facebook's new Graph Search is for. See? Your memory is better than you think.

That might sound like a lot of apps to keep track of, even if they are all just a swipe and a click away on your mobile phone. And while information retrieval is getting easier, you still have to remember to enter the information in the first place. Those are exactly the problems that future generations of mobile software will attempt to solve.

Microsoft’s MyLifeBits project was a radical early attempt at seamless and comprehensive information storage, inspired by Vannevar Bush’s 1945 vision of the memex. The memex, Bush wrote in a stunningly prescient article in the Atlantic Monthly, “is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.” Bill Gates embraced an updated version of this prediction in his 1996 book The Road Ahead. After reading both works, Microsoft researcher Gordon Bell began trying in the late 1990s to record, scan, and index everything he read, wrote, watched, and heard. The task has grown less cumbersome over the years, especially with the addition of the SenseCam, a digital camera that hangs from his neck and snaps photos automatically throughout the day.

Google today is working on technologies that may come even closer to Bush’s ideal. Again, from his Atlantic Monthly article:

The camera hound of the future wears on his forehead a lump a little larger than a walnut. … The cord which trips its shutter may reach down a man's sleeve within easy reach of his fingers. A quick squeeze, and the picture is taken. On a pair of ordinary glasses is a square of fine lines near the top of one lens, where it is out of the way of ordinary vision. When an object appears in that square, it is lined up for its picture. As the scientist of the future moves about the laboratory or the field, every time he looks at something worthy of the record, he trips the shutter and in it goes, without even an audible click. …

As for the information-retrieval side of the equation, in December Google hired the artificial-intelligence guru (and extreme techno-optimist) Ray Kurzweil to head an engineering team that’s rumored to be working on personal-assistant technologies that would intuit and deliver the information you need before you even have to ask. For instance, if Google Glass noticed that you were approaching your bus stop around your usual commute hour, it could let you know automatically that your bus was running late and that you’d be better off taking the subway.

Whether that’s worth all the trouble is not yet clear. Ritchey, the U.C.–Davis neuroscientist, points out that the human brain is already adept at forming and acting on intentions. Trying to train a computer to guess our thoughts might result in a lot of false alarms and needless alerts.

There are also, of course, pitfalls to having devices that are smart and powerful enough to aid our minds in all sorts of ways.

One is the fear that the same Internet that makes us smarter in relatively superficial ways may also be making us stupid on a deeper level. The writer Nicholas Carr worries that the information age is leading inexorably to an age of ADHD—that a parade of tweets and hyperlinks is training our brains to expect constant stimulation and thus rendering us incapable of reading a book, let alone sustaining the type of profound contemplation that leads to real wisdom.

There may be some truth in that, though brain scans suggest that searching Google actually stimulates more parts of the brain than reading a book. And it’s worth keeping in mind Carr’s own observation that Socrates once bemoaned the rise of the written word on similar grounds. Similarly, 15th-century techno-skeptics fretted that the printing press would weaken people’s minds.

Chalmers points out that this type of reasoning depends on the notion that the human mind is coterminous with the brain. Sure, the rise of literature probably eroded our brains’ capacity to remember epic poems verse by verse. Long before that, Chalmers says, the advent of oral language might well have reshaped our cortexes to the detriment of some primitive sensory capacities or modes of introspection. “Maybe the Nicholas Carr of the day said, ‘Hey, language is making us stupider,’ ” Chalmers jokes.

If you view the brain as the control room of the extended mind, it’s clear that language has, on balance, made us much more intelligent. There’s no guarantee that the Internet will do the same, but for an example of how it is already spawning antidotes to the danger Carr articulated, try saving his article to Instapaper so you can peruse it at your leisure.

The second major pitfall may be more pressing. In order for a technology to count as a genuine cognitive enhancement, ready and reliable access (as in mobile Internet service) is one criterion. But another is trust. The more useful services like Google, Facebook, Instagram, and Evernote become, the greater the risks should they ever betray us. Betrayal could come in the form of system failure: Imagine the chaos if everyone grew to rely on Google Glass and self-driving cars only to have them all go dark or malfunction at the same time. Or it could come in the form of privacy invasions, a specter that already looms over some of the Internet companies that know the most about our personal lives. Just last week, hackers stole the email addresses, user names, and encrypted passwords of 50 million Evernote users. Facebook has never suffered a massive breach, but it erodes its users’ privacy by design, often for the benefit of corporate advertisers.

Yet today’s technological mind extensions also hold the promise of freeing our conscious brains to spend more time on higher-order tasks rather than committing facts and experiences to memory. As Albert Einstein once said when he failed to remember the speed of sound, “[I do not] carry such information in my mind since it is readily available in books.” Just as calculators enable math students to focus on theorems and proofs, ubiquitous access to the contents of Wikipedia and the Web at large could allow us to devote more cognitive space to thinking critically and building bridges between ideas. In other words, far better than turning us into Rain Man, the Internet could make us all a little more like Einstein.