Google and the internet are changing the way our brains work, no doubt about it. With the internet at our fingertips, why bother to remember trivial facts when Wikipedia is just a click or two away? In the latest issue of The Atlantic, Nicholas Carr makes a convincing argument about the various ways our obsession with cyberspace is altering the way we think, then tries to tell us that's a bad thing. Here's why he's wrong.

Carr's argument is a subtle one so I suggest reading the whole feature. But let me take a shot at a one-sentence distillation: The internet is giving us a form of ADHD when it comes to reading, and we should be scared of that.*

I don't entirely disagree with the first part of that thought, but the second doesn't make a whole lot of sense. In Carr's own words, humans have been developing technologies that change the way we think throughout our history:

As we use what the sociologist Daniel Bell has called our "intellectual technologies" – the tools that extend our mental rather than our physical capacities – we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock "disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences." The "abstract framework of divided time" became "the point of reference for both action and thought."

The clock's methodical ticking helped bring into being the scientific mind and scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments "remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality." In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.

Now I hate alarm clocks as much as the next guy, and it's true: we do live our lives by minutes and hours more than by the cycles of the sun, moon, tides, or whatever. But is Carr really trying to say that the advent of the 9-to-5 job cancels out the advances of all of science, math, and our understanding of the universe? That's pushing it.

And so is this passage on how Google will one day turn into HAL 9000:

Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. "The ultimate search engine is something as smart as people – or smarter," Page said in a speech a few years back. "For us, working on search is a way to work on artificial intelligence." In a 2004 interview with Newsweek, Brin said, "Certainly if you had all the world's information directly attached to your brain, or an artificial brain that was smarter than your brain, you'd be better off." Last year, Page told a convention of scientists that Google is "really trying to build artificial intelligence and to do it on a large scale."

...their easy assumption that we'd all "be better off" if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google's world, the world we enter when we go online, there's little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.

Google's dominance of the digital world is admittedly a little unnerving, but HAL? C'mon now. In fact, Carr's article opens and closes with references to the murderous (and fictional) computer, making it pretty clear what he thinks about the role artificial intelligence will play in our non-fiction future.

In the end, Carr's article isn't entirely ham-handed — but his analysis is. He looks back on Socrates, who (as Plato records in the Phaedrus) warned that the invention of writing would be the death of memory. Later, other critics feared the printing press would ruin the pursuit of knowledge. Looking back, those sentiments seem shortsighted, and with good reason. They're actually evidence against Carr's case: if printing presses are any indication of how these things go, the internet will facilitate an intellectual revolution the likes of which no one could predict in the early going.

But Carr still argues that the internet is going to ruin the human mind. Who knows, maybe he just couldn't resist the opportunity to compare himself to Socrates. Regardless, both Carr and the ancient Greek were wrong on this one: their arguments are little more than over-intellectualized bellyaching, a version of the classic "kids these days" speech. But instead of moaning about modern youth, the refrain is more like "technology these days."

*I realize the paradox here — if Carr's right, no one's going to go read the whole feature. You probably won't even read this whole post. You'll scan the headline, maybe a paragraph or two, then go flitting off to the next item. I've got more faith in io9ers, though.