BLOGS

In June, a writer named Jonah Lehrer got busted for recycling material on a blog at the New Yorker. Lehrer, who specialized in writing about the brain, had been writing a blog called The Frontal Cortex for six years at that point; having just been appointed a staff writer at the New Yorker, he moved it to their web site, where he promptly cut and pasted material from old posts, as well as from magazine and newspaper pieces.

At the time, I just thought he was squandering a marvelous opportunity. When I was asked to comment on the situation, I wrote that some of the things Lehrer had done were uncool, while some were fairly harmless. But Lehrer himself acknowledged that what he had done was stupid, lazy, and wrong. So I figured he’d gotten the sort of school detention that wakes you up and keeps you from getting expelled.

Four months later, I’m struck by how wrong I was.

I’m quoted in the latest of a long string of articles about Lehrer’s misdeeds, a feature in this week’s issue of New York by Boris Kachka. Kachka talked to me for a long while, and it’s clear that he talked to a lot of other people–journalists and scientists alike. He’s ended up with the best account I’ve read of this sad, strange story.

A lot of the other stories and commentaries have been twisted to showcase people’s assorted bugaboos. I’ve lost count of how many times people fussed over Lehrer’s fancy jackets and haircut, as if they were tied up in his moral standing. If Lehrer had a mullet instead, it would not diminish his misdeeds. There was a fierce passion driving people to draw lessons from Lehrer’s story–lessons, I suspect, that they had already drawn and for which they were now just looking for evidence to confirm. In a rare misstep, for example, Reuters blogger Felix Salmon declared Lehrer the exemplar of all that is wrong with TED talks: “TED is a hugely successful franchise; its stars, like Jonah Lehrer, are going to continue to percolate into the world of journalism.” In fact, Lehrer has never given a TED talk. When you’re condemning a culture that promotes the distortion of facts to fit an easy story, it’s best not to distort the facts for an easy story.

In his densely reported piece, Kachka rightly sees two major aspects to this story: Lehrer’s own misdeeds and the culture that fostered and rewarded them.

I was willing to cut Lehrer some slack at first, but as the additional evidence came in, I wondered if I was making excuses for him. The breaking point came when I read about how he had warped a story about a memory prodigy, claiming that he had memorized all of Dante’s Inferno instead of just the first few lines. When someone noted the error, Lehrer blamed it on his editor, but kept on using the enhanced version of the story in his own blog and on Radiolab (which later had to correct their podcast). It’s easy to slip up with facts, but we have an obligation to admit when we’re wrong and not make the same mistake again. It would have been bad enough that Lehrer distorted the facts and continued to do so after having the facts pointed out to him. But he was also willing to damage other people’s reputations along the way. That’s when I signed off.

As for the other side of the story–the culture that fostered Lehrer–I appreciate that Kachka avoided silly sweeping generalizations–that all popular writing about neuroscience has become the worst form of self-help, that speaking about science in public is the intellectual equivalent of pole-dancing. Kachka instead reflects on the trouble that arises when a science writer reduces complex science to a glib lesson. He’s right to zero in on Lehrer’s 2010 New Yorker article “The Decline Effect and the Scientific Method” as an example of this error. For years, a lot of scientists and science writers alike have grown concerned that flashy studies often turn out to be wrong. But Lehrer leaped to a flashy conclusion that science itself is hopelessly flawed.

That makes for great copy (29,000 people liked the story on Facebook), for which I’m sure his editors were grateful. But Lehrer himself didn’t believe what he was writing. If scientific studies were fundamentally unreliable, then why did he continue to publish articles and a book full of emphatic claims about how the brain works–all based on those same supposedly unreliable studies?

The reality is more complicated. After Lehrer’s piece came out, the Columbia statistician Andrew Gelman was asked what he thought of it. “My answer is Yes, there is something wrong with the scientific method,” he wrote–adding (and this is crucial)–“if this method is defined as running experiments and doing data analysis in a patternless way and then reporting, as true, results that pass a statistical significance threshold.”

In other words, this is not a matter about which we should simply issue Milan-Kundera-like utterances, as Lehrer does in his article: “Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.” In fact, this is a matter of statistical power, experimental design, posterior Bayesian distributions, and other decidedly unsexy issues (Gelman explains the gory details in this American Scientist article [pdf]).
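To make Gelman’s point concrete, here is a minimal sketch of the phenomenon he is describing, using made-up numbers rather than any of the studies discussed here: when many small studies chase a weak true effect and only the statistically significant ones get reported, the reported effects are systematically exaggerated.

```python
import random
import statistics

random.seed(42)

def mean_significant_effect(true_effect=0.2, n=20, sims=5000):
    """Simulate many two-arm studies of a weak true effect and return
    the average estimated effect among those reaching p < 0.05
    (|z| > 1.96, with the unit variance in each arm treated as known)."""
    significant = []
    for _ in range(sims):
        treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (2.0 / n) ** 0.5  # standard error of the difference in means
        if abs(diff / se) > 1.96:
            significant.append(diff)
    return statistics.mean(significant)

# With only 20 subjects per arm, the studies that clear the significance
# bar report an average effect several times the true value of 0.2;
# with 200 per arm, the exaggeration largely disappears.
print(mean_significant_effect(n=20))
print(mean_significant_effect(n=200))
```

Run this and the small-sample estimate comes out several times larger than the true effect. Nothing is wrong with the scientific method here; the “decline effect” falls straight out of underpowered designs filtered through a significance threshold, which is exactly the unsexy explanation Gelman’s article spells out.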

Kachka understands there’s no easy way out of this dilemma, quoting Daniel Kahneman, the Nobel-prize-winning, best-selling Princeton behavioral economist: “There’s no way to write a science book well. If you write it for a general audience and you are successful, your academic colleagues will hate you, and if you write it for academics, nobody would want to read it.”

I put it to Kachka in a similar way, referring to writers like Lehrer: “They find some research that seems to tell a compelling story and want to make that the lesson. But the fact is that science is usually a big old mess.”

And the very way we choose to read about science makes it hard to convey that messiness. I will use my own work as an example of that failure.

In the current issue of Discover, I examine electroconvulsive therapy. I had about 1500 words to write about it, and so I only focused on a single study recently published in the Proceedings of the National Academy of Sciences. I think it’s an important piece of research, because it uses fMRI for the first time to look at what happens to the brain when ECT pulls people out of major depression.

But it’s also true that the study was necessarily small, that the particular method of fMRI they used is very new, that for now the study remains unreplicated, and that there’s a lot of debate in scientific circles (not to mention beyond) about some of the impacts of the treatment.

In the end, I probably oversimplified, leaving people with too much of a feeling that ECT is a perfect cure (it’s not) and an impression that we know exactly how it works (we don’t). But, to paraphrase Kahneman, there’s no way to write a science article well.

Still, the article I wrote was, I believe, the best of my options for discussing the subject. I didn’t have ten thousand words to use to explore its full complexity. I certainly wasn’t going to get many readers if I wrote a scientific journal paper. And waiting for fifty years to see if this research holds up seems like a worse option as well. So I had to fall short. Again. And I will take the criticism that my article triggers and try to do a better job the next time around.

I don’t mean to sound hopelessly fatalistic. Writers can either tackle this dilemma with eyes wide open, or they can look for a way to cut corners and pretend that the dilemma doesn’t exist. And readers can improve things too. When you find yourself captivated by someone talking to you about science in a way that makes you feel like everything’s wonderfully clear and simple (and conforms to your own way of looking at the world), turn away and go look for the big old mess.

It seems the thoughtful, serious writers of Zimmer’s stature keep plugging away, doing their good work while, sometimes, the more self-promoting fame-seekers get ahead. But I’ll take good old Zimmer any day!
I do hate it that the guys at RadioLab were sullied by Lehrer-gate. They are the Sex Pistols of science journalism (Jad being Johnny Rotten, Robt Sid Vicious).
Btw, Zimmer is the Radiohead of science journalism.

Part of me feels that you do your audience a disservice to say we blindly accept (and I’m generalizing here too – we can’t escape it!) science-related stories as absolutely, unequivocally true. I know I don’t. That’s not to say I never look at an article relating to some new study and go, “Pssh, yeah right.” But usually I say, “Wow, interesting! What comes next?”

But I think I’m one of the few. Which brings me to the question: why is questioning stories, or asking for more information before declaring “this is fact (or at least a widely accepted theory),” the minority approach? Isn’t that the scientific method? Scientists (whether they are physicists or sociologists) don’t conduct one study, see the results, and think that one study’s results are the end-all-be-all of any issue. They know it is just the beginning of another series of questions.

So, why don’t people in general think about things in the same way? Why not think critically and analytically?

Kachka argues that we’ve developed a culture that fosters essentially lazy (or simplified) science journalism (really it’s not just science, it’s a lot of things). But can’t we go farther back to the cause of that? Why do we have to simplify things? Sure, we can’t have everything be academic-level writing, but writing for the masses has to have a middle ground somewhere.

I think an issue that needs to be addressed (and is being addressed, or at least talked about) is how we, as a society, aren’t teaching the scientific method as a means of thought – a way to look at the world and to process the information we receive.

Strong post, Carl. But I have one quibble — or perhaps a little more than that.

I didn’t like the Kahneman quote in the original piece, and I don’t agree with your gloss on it in the post, Carl — this bit: “But, to paraphrase Kahneman, there’s no way to write a science article well.”

Not to accuse you of false modesty (and damn, I always forget the name for that rhetorical trope), but (a) consider the writer here, and more seriously, (b) the true statement you are trying to make is that there is no easy way to write a science article/book well.

That’s what’s gotten my goat throughout the Lehrer back and forth. A bunch of critics (not you) have held up Lehrer as the person who writes beautifully and thus can’t possibly get the science down properly. Even if someone acknowledges that Lehrer (or any science writer) could actually grasp the science involved, the claim remains that the demands of story and of style are at odds with those of representing science accurately and fully.

On the other hand, as Kahneman’s quote argues, communicating science “correctly” gains no readers.

To which I say, “bullshit.”

To go to your specific example, the article on a paper looking at ECT through fMRI measurements. You (Carl) place a higher demand on your article than research publications ask of the individual communications from researchers. You ask how to communicate the context of a research program enmeshed in some controversy about the basic tools with which it makes its claims while delivering news about one small incident within that program. The paper itself just has to report its own findings, with some apparatus of reference to surrounding work.

I’m thinking now about varieties of tricks science writers can use to deal with this problem — but the point for this mini-rant is one you’ve argued before when I’ve been in the audience: the purpose of public science communication is to convey ideas to broad audiences, often though not always using very specific and inevitably partial reports from within the field being covered. That there are costs in doing so is obvious — but that it is possible to do it well, and to do it in ways that the costs are not those that mislead audiences, but rather require them to read more if they want a finer-grained appreciation of where a given piece of knowledge comes from.

Boy, was that last an ugly sentence.

Ah well, to paraphrase my man Mark Twain (among others), I didn’t have time to write a gorgeous one.

The biggest problem with science journalists is that they usually don’t actually understand what’s interesting in science. They get seduced by the flashy shit that is meaningless. This usually tends to be the worst kind of science. If science writers understood science better, this would happen less. So don’t blame science; blame science writers. That’s why I’m not fatalistic about this but angry. I could sit with you in my office for 30 minutes and tell you half a dozen incredibly interesting things that happened in science over the last 3 months that no one will ever, ever write New Yorker articles about.

Carl, I think the problem affects *any* field of expertise. Popular history is just as much an effort to simplify a great big mess into a coherent story. Sports reporting suffers the same problem when the writer tries to distil a compelling tale out of the actions of two dozen players, thousands of little contests, the effect of the crowd, the decisions of the umpires, and the broader context of the game.

Also, I don’t think you need to try too hard to cover all the messiness in detail. I think a simple acknowledgement of the state of the research is enough. Is this an exciting new study, but unreplicated and controversial? You can say that in five words. It’s enough to alert the reader to the broader setting of the study. If they’re interested, they can read up on it themselves; at least they know there’s more to the story.

Carl – Your post put me in mind of Tim O’Brien’s “The Things They Carried,” the brilliant collection of stories about what it’s like to be a soldier in war. In particular, his story “How to Tell a True War Story” made the point that if a war story has a neat conclusion and a clear lesson or moral, it’s NOT a true war story. The fog of war makes that kind of clarity impossible.

Like war stories, reports from the “scientific front” risk faltering when they pretend that they carry more than just a small piece of a much larger, imperfectly understood reality (aka the big old mess). “Swift, sure, and wrong” is the axiom that can be applied to too many in the popular media who discard nuance and uncertainty in their effort to make science simple and sexy.

But it can be done. I agree with Tom Levenson that Kahneman’s assertion that there’s no way to write a science article well is nonsense. Go ask the fans of Carl Sagan or Stephen Jay Gould or Brian Greene. Ask the readers of Carl Zimmer. There’s just no easy way to do it. Not everyone can do it well. It’s hard work.

But it’s a critical task. As Carl Sagan noted in his last book, as the tools of technology become increasingly powerful in their ability to transform our world, it is more important than ever that the general population, and the representatives they send to Congress, understand what science is and what it can and cannot do.

I like your approach of linking individuals to population-level effects. It superficially appears that this debacle is analogous to your reporting on the misconduct between scientists and population-level pressures of scientific success.

Part of me thinks that what Lehrer did is no big deal, as I do not think it is fair to place blame on individuals irrespective of their environments. That, and even the mistruths appear trivial, in my opinion. Did Shereshevsky, for example, in fact have a bad memory, or was it exceptional? Does changing Lehrer’s words to the truth change his message? Similarly, earlier this year the monologuist Mike Daisey was, I think, unnecessarily reprimanded on This American Life for exaggerating inconsequential facts of the message he presented. Neither Mr. Daisey nor Mr. Lehrer is the primary investigator of his medium–Daisey is not an investigative journalist and Lehrer is not a scientist. The role of these liaisons, in my opinion, is that of any artist (performance-based or literary, respectively): to communicate complex ideas through a medium in a way that maximizes inclusiveness over complexity, with–like all models–error. To say that everything we have written or said was 100% truthful would be untruthful in itself. The question in the case of Mr. Lehrer revolves around the bias of the error (is it conscious, systematic, malevolent bias?), which I do not think was egregious (outlying) given the population. I do not think that I am quick to judge Mr. Lehrer, because I am not one to bias my messages with conscious, malicious intent. (The corollary of which is that I believe the leaders of witch hunts often have some tricks up their sleeves.)

With respect to communicating science, I am of the opinion that taking a more conservative approach would be more beneficial to our culture in general. I do not believe that new science should always be the focus of scientific communication, because if scientists are battling it out, why throw the layman into the fray? How does that affect the public’s opinion of science? Why not tell the story once the dust has settled and we have a more thorough understanding of the phenomenon? A favourite related H.D. Thoreau quote of mine reads: “Every generation laughs at the old fashions, but follows religiously the new.” I know that a lot of scientific communication (including your work) is not always about the newest and shiniest science, but when it is I often wonder if new science is best for the public’s respect and understanding of science and the scientific process. One week wine is good for you, the next week it is bad . . .

Agreed. I work in a technical field and it’s axiomatic that the documentation I use must be tailored to both an audience and a purpose.

It’s easy to get the wrong idea and think that there is AN ANSWER. Nope! If you choose to write the uber article, the one that does it all, covers everything, leaves no stone unturned, you will still get criticism. In this case, it will typically be along the lines of, “I can’t read this. Why didn’t you write something that gets to the point?”

However, I think there is a good tactic for simplified works, ones that gloss over certain issues: a simple footnote with a reference or link to additional information. Add a one-liner to trigger the interest of those who want or need to know more. In online works this can be a hyperlink, and you’ve done your best for your reader.

A very nice piece. I just wanted to comment on the last few paragraphs (Carl’s). I think your self-analysis about simplifying versus the ‘big old mess’ is good – and it’s precisely the same dilemma that we face in teaching at undergraduate and even graduate levels. Time and time again I hear students praise the teacher who ‘makes it all seem so clear’ – for the length of the class and perhaps a few hours afterwards, but then the gloss starts to rub off as the students begin to realize that they are missing details that help them make practical application of ideas. Perhaps this is more prevalent in the physical sciences, after all an equation or relationship really can have the beauty of simplicity or elegance. But it’s an interesting situation because you *want* the students to get those ‘ah-ha!’ moments, as well as to appreciate the grander complexity of things, but it’s awfully hard to do both simultaneously.

The problem (if that’s the right word) with pop-sci product from people like Lehrer or the Gladwells of the world is that it may achieve the sensation of an ‘ah-ha!’ moment, but it obscures the rest by using convenient anecdotes in lieu of deeper and more uncertain material. This is forgivable in short articles with word limits, but much less so in entire books. It is a tremendous pity, because so much of the beauty of science and the universe around us is in the texture of that complex mess itself.

I fully agree that the best one can do is keep on trying to inject that notion where possible. It’s all about including that ‘just one more thing’ moment as much as the ‘ah ha’ moment.

The general audience could definitely grasp the messiness of science if it were presented that way in an article. Those conflicts are what make the writing interesting, but those anecdotes about Inferno-memorizing geniuses make the stories merely fantastical.

So I happened to read your ECT article yesterday – and while I appreciated it, I remember thinking at the time, “This seems particularly unguarded for what I expect from Zimmer” . . . a little bit more like “mainstream” popsci, if you will. That you were word-constrained explains a lot.

In any case, it was very enlightening to hear your admirably honest comments about it shortly after reading the article and having reservations. I’m not sure what my takeaway is here, but I appreciate it. Thank you for your honesty and vulnerability.

I like your last statement Carl, “When you find yourself captivated by someone talking to you about science in a way that makes you feel like everything’s wonderfully clear and simple (and conforms to your own way of looking at the world), turn away and go look for the big old mess.”

Well stated! Maybe this is why 40 percent of Americans are highly suspicious of Darwinism and Dawkins, who repeatedly refers to creationists as “history deniers”. This is precisely what evolutionary theory amounts to: A subjective “interpretation” of historical data relating to unobserved distant past events. With all interpretations based on “inferences” and “assumptions” as to what the historical data SUPPOSEDLY represents, and what SUPPOSEDLY happened in the distant past. With no possible way of ever empirically verifying that evolution happened one way, and not another way, or even whether the evolutionary continuum happened at all. Meaning, despite the claims of Darwin’s disciples, evolutionary theory is neither wonderfully clear, nor simple.

There’s a concept called the “preponderance of evidence”. When lots of studies from different fields all point to a similar conclusion, that conclusion becomes more reliable. When predictions made based on that conclusion are later independently verified, again by many types of specialists in many fields, then the conclusion becomes more reliable still. When this well-supported and usefully predictive conclusion also jibes with realities well-known to and recorded by humans throughout history (animal husbandry, crop breeding), it is logical to treat the conclusion as a working model of reality until or unless new data appears which casts its reliability in doubt. So far, everything we’ve learned points to evolution, and virtually nothing we’ve learned points to something else.