Over at the LA Times, Maria Bustillos has a harsh review of Nicholas Carr’s new book, The Glass Cage: Automation and Us. Referring to Carr as one “of the Information Age's chief scaredy-cats,” Bustillos characterizes his latest endeavor, an explanation of problems with automation, as expanding “the field of his paranoia to computers in general.”

Sure, Carr’s last book, The Shallows: What the Internet Is Doing to Our Brains, stirred up lots of debate about whether Google is making us stupid. Some said no and decided he’s too pessimistic—a negative judgment that’s absolutely appropriate to reach. You certainly can have good reasons for believing that Carr’s conclusions aren’t supported by all of the research he musters. But ‘paranoia’ has connotations of irrationality and delusion. It’s an unfair association when applied to Carr. It’s particularly troubling because versions of the rhetoric are routinely applied to technology critics to unduly strip their skepticism of legitimacy.

One recurring strategy to invalidate a technology critic’s observations is to frame an issue in terms of overly simplistic comparisons. Then, all you need to do is allege the critic is blind to obvious advantages and makes a mountain out of a molehill by dramatizing small problems. Versions of this move are especially pronounced in discussions about safety and security. Why worry at all about drones when they keep our troops out of harm’s way? Why think twice about autonomous cars when humans are so easily distracted behind the wheel and are prone to making mistakes that result in injuries, including fatalities? Privacy?!? Who dares place a value on it when there are so many bad people out there trying to harm us. Hello…ever hear of terrorists?


Fortunately, sharp thinkers are around to explain why these comparisons omit crucial considerations and make it too easy to dismiss reasonable concerns. If you’re not pausing to consider what’s lost if drones create material conditions that lead to moral hazards, whether sufficient oversight will ensure robotic cars can make appropriate moral decisions, and whether banging the security drum too loudly unfairly stacks the deck against privacy, then you’re looking past significant issues.

Bustillos commits a version of this framing fallacy when she writes:

“Inattentive doctors may rely excessively on computerized diagnostic tools to the detriment of patients. Sure! — but it does not seem to occur to Carr that the existence of inattentive doctors predates the invention of diagnostic software. The real question is whether a new automated system produces results superior to the old ones.”

But why is this the “real” question? Why is the only legitimate way to look at the pros and cons of automating medical practice through the lens of whether some results are better? What about, as Paul Harvey used to say, the rest of the story?

Look, in the abstract it sure does sound like a great idea to computerize increasing amounts of medical information. If done right, you’d expect to see noteworthy gains in efficiency, like quicker note-taking and easier—maybe even life-saving—access to records and tests; this, in turn, should help lower costs and free up resources that can be used to provide better patient care. Indeed, it was so easy to associate medical automation with streamlining inefficient activities and creating new and improved digital-age operations that the endeavor attracted heavy investment and optimistic praise.

But as Carr points out, the endeavor hasn’t quite gone as planned. He argues that it’s been a financial bust—at least when viewed in light of the original expectations. That’s an important thing to think about. Why hasn’t reality lived up to the hype, especially given the vast resources that have been used? The issue is even more pressing if automation has contributed to the degradation of essential activities.

Here are a few of the problems that Carr emphasizes.

First, automation perversely led some patients to receive inflated medical bills. Costs for minor procedures that were once “subsumed into more general charges” became easy to itemize and routinely add as billable expenses. Although this can be seen as fixing a flaw in the system, Carr says it potentially introduces a new conflict of interest. “The fact that the doctor often ends up making more money by following the software’s lead,” Carr writes, “provides a further incentive to defer to the system’s judgment.”

Second, evidence suggests that when automation makes it easier for physicians to obtain diagnostic images, their behavior can change. Because doctors can order tests with minimal effort, Carr contends, they sometimes become more inclined to order them, including unnecessary ones.

Third, automation provides doctors with “boilerplate text” that they can cut-and-paste into their notes. This ability to rapidly use generalized descriptions nudges physicians to provide “less personalized care,” especially when compared to previous practices of “writing and dictation…that forced them to slow down and ‘consider what they wanted to say’.”

Fourth, the new digital format presents physicians with information in a manner that, comparatively speaking, disinclines them to review far back into patient histories. Carr writes: “Although flipping back through the pages of a traditional medical chart may seem archaic and inefficient these days, it can provide a doctor with a quick but meaningful sense of a patient’s health history spanning many years. The more rigid way that computers present information actually tends to foreclose the long view.”

Fifth, the presence of automated tools makes it harder for physicians to be meaningfully present to patients. The attention they need to give to computers leads to multi-tasking, with so much shifting going on that the technology can act like a “third party” in the exam room. While Bustillos is right that multi-tasking isn’t a new problem, the fact remains that some of the current technological sources are more disruptive than we’ve appreciated. And, as debates about laptops in the classroom suggest, they might require more dramatic solutions—or at least more proactive ones—than are regularly enacted.

In short, by identifying some of the ways that automated technology transformatively mediates medical practice, Carr, like a good technology critic, helps us appreciate why so-called gains in “superior results” can come at the steep price of hard-to-see tradeoffs that are no less potent for being subtle and nuanced.

Another common mistake that Bustillos perpetuates is the superhero fallacy. Disguised as a humble appeal to populist sensibilities and the wisdom of the masses, the superhero fallacy depicts the average person as master of his or her own domain. But in reality, we often choose under such tight constraints that what passes as our behavior isn’t fully our own.

Bustillos writes:

“Carr rails against the reliance on GPS systems that causes us to become lost more easily, against drawing software that makes drawing by hand a relative chore, and in general against a blind and stupid worship of technology. Fair enough. But anyone who's ever used autocorrect or suddenly been advised by a car to drive thousands of miles to Vermont — that is to say, nearly every working American, by this time — already knows that it is foolish to rely too heavily on technology and has already been acting on that knowledge for a long while.”

Do we all really know that it’s foolish to rely heavily on technologies like autocorrect, or its less powerful predecessor, the spell checker? Higher education doesn’t seem to think so. When we professors ask our students to compose papers at home, we take it for granted that they’ll use good word-processing programs. In fact, we’ll often say that points will be deducted for errors like misspellings, and we presume that spelling mistakes arise because students rushed through an assignment and didn’t take advantage of their computer’s capacity to spot errors. In short, we hold students accountable for not allowing themselves to be dependent enough on technological assistance.

Now, you could ask: Is such dependence really a problem? Seen through the lens of the framing fallacy, clearly not. After all, students can submit papers that appear to have the virtue of being better proofread. But what happens when folks who grow accustomed to writing in this environment of ever-available digital assistance have to write handwritten notes? Speaking only for myself—but I’m sure many of you out there can relate—my spelling abilities have plummeted over the years. In fact, I’m so embarrassed about how bad my spelling has become that I’ll quickly use my phone to look up words before writing them down by hand.

But surely, the superhero fallacy suggests, if this degradation in skill were truly a problem, I’d recognize the foolishness and make the wise decision to course-correct. But in a world of professional and social expectations, individuals often don’t have this luxury. I work in a publish-or-perish profession that prioritizes speed and volume. My inbox—which also has a spell checker—is overflowing every day with notes from students and colleagues. I simply don’t have the time to pay close attention to words that get highlighted, or the luxury to disable autocorrect to slow things down.

Yes, some technology critics can go too far down the dystopian rabbit hole. But it’s a mistake and social disservice to use fallacious reasoning to dismiss the good ones as paranoid. Without technology critics, our conversations about technology would be too rosy to lead to the good decisions Bustillos presupposes we’re making all the time.

Note

As a philosopher of technology, I’m generally sympathetic to Carr’s approach to thinking about technology. In that connection, we discussed automation while he was writing The Glass Cage, and he thanks me in the Acknowledgements.