What Jonah Lehrer reveals about popular science writing

Jonah Lehrer is one of the hottest science writers around. But this week, in a dramatic fall from grace, he resigned from his staff position at the New Yorker, and his publisher has removed his latest book, Imagine, from sale. The catalyst was the revelation, uncovered by the online Tablet magazine, that he had fabricated quotes from Bob Dylan.

I had a few small interactions with Jonah Lehrer in late 2009, and looking back, they perfectly reflected both the reasons for his fame and his impending troubles. At the time he was in charge of the Scientific American Mind Matters blog, and I was writing a piece for it. In a field where some editors are rather brusque, he in contrast was extremely friendly, complimentary, charming, helpful and supportive. It was the easiest thing in the world to like him, and I dearly hoped to have more dealings with him in the future.

At the same time, though, he wrote a News Feature article for Nature with a glaring factual error in it, in a field I know intimately (NB Nature have just corrected the error, but if you want to view the original article, with error intact, you can do so here). He was writing about the celebrated mnemonist, Shereshevsky, and stated, “After a single read of Dante’s Divine Comedy, he was able to recite the complete poem by heart.” This poem is 700 pages long, so that would be quite a feat, especially given that Shereshevsky didn’t speak a word of Italian, and the poem was presented to him in its original language. In truth, as Luria writes in his wonderful little book, The Mind of a Mnemonist, only a few stanzas of the poem were presented to Shereshevsky.

This doesn’t detract from Shereshevsky’s exceptional skills, though, since he was tested on this foreign set of lines 15 years after that single reading, and was able to recall perfectly not only every word, but every aspect of stress and pronunciation. Unfortunately, Lehrer didn’t recount any of these details, which to me are in some ways more staggering.

When I emailed Lehrer to point out this mistake, his reply was that “it was the one fact my editor added in the final draft…”

At the time, I simply assumed this was true. But now I don’t. This morning I contacted his editor at Nature, Brendan Maher, to ask about this, and Maher told me that the mistake was present in the first draft of the article that Lehrer sent to him, so it was most definitely not an inaccurate last-minute addition by the editor. To add insult to injury, after I’d pointed out this mistake to Lehrer, he nevertheless repeated it verbatim 7 months later in this Wired blog article, and then 6 months after that in another Wired blog article.

While I enjoy Bob Dylan songs, and admire the man, to me the furore shouldn’t have erupted only after this fabrication was revealed (though perhaps that was the last straw?), but months before, when some of the early reviews of his latest book, Imagine, came out.

In Lehrer’s previous book, How We Decide (also known as The Decisive Moment in the UK), I experienced yet again the same twin traits of charisma and lack of care over factual accuracy. The writing was utterly engaging, charming, oozing with talent, but at the same time peppered with basic errors. For instance, on page 100 he writes, “This kind of thinking takes place in the prefrontal cortex, the outermost layer of the frontal lobes.” This is anatomical rubbish – the prefrontal cortex, as the name implies, is simply the front-most section of the frontal lobes. Layers have nothing to do with it. I expect such mistakes from less able undergraduate students, who are too lazy to read the first line of the relevant Wikipedia article, but never in a respected science book. Then on pages 112–113, he writes of “the first parts of the brain to evolve – the motor cortex and brain stem.” Where did this come from? The brain stem very probably evolved hundreds of millions of years before the much more recent cortex, of which the motor cortex is obviously a part. So this is completely wrong as well. One last example (of many more), from page 100 again: “Neanderthals were missing one of the most important talents of the human brain: rational thought.” To me, rational thought is what keeps most species of animals alive; at the very least, could Neanderthals have made advanced tools and used fire, as they did, without “rational thought”?

I was rather surprised to find that How We Decide received almost universal critical acclaim, when the science within it, although beautifully and stylishly explained, was error-strewn and somewhat superficial. Most reviewers know little science in detail, I suppose, so don’t notice errors that scream off the page to a jobbing research scientist. But at what point should these errors be caught?

I am a little ashamed to admit that I felt relief and a little pleasure that Lehrer’s latest book, Imagine, received what I would call a more accurate set of reviews. My favourite is in The New York Times, by the highly respected Harvard scientist Christopher Chabris, which in part lists a similar set of simple neuroscientific facts that Lehrer got wrong (and here’s another great critical review, by Tim Requarth and Meehan Crist).

Imagine is sold as a science book, so its explanation of the science doesn’t really suffer if the odd Bob Dylan quote is made up. So although such an act is utterly sloppy, potentially fraudulent and very embarrassing, I don’t feel that this is the aspect of the book on which we should pour the majority of our scorn.

However, the main purpose of Imagine, to impart science, does suffer significantly when many of these elementary yet important scientific facts are just as wrong as the Dylan quote, and perhaps also if the topic is dealt with in too superficial a way. Chabris’ review came out on May 11th, and it should have been at this stage that the publisher stepped in, and pulled all copies of Imagine off the shelves for a few months, until a factually accurate replacement was available (preferably checked by an actual scientist). And the newspapers and magazines that Lehrer contributed to should have paused and thought about fact and source checking at this point, in early May. Instead, Lehrer was hired as a staff writer for the New Yorker a month later.

I want to emphasise that I think Jonah Lehrer is incredibly talented. He has an enviable writing style, and can talk with amazing eloquence. So it infuriates me even more that he currently lacks rigour in his work, and seems habitually to adopt a deceptive schoolboy attitude to mistakes when they are revealed to him, rather than maturely owning up to his errors. And I do really hope he bounces back from this. If he immersed himself more deeply in each topic, stopped cutting corners and fact-checked religiously, he could easily reclaim his position towards the top of scientific journalism.

But to me there are wider issues that Lehrer’s case highlights. I’ve written before about the problem of fact-checking and trust within the neuroscientific community (specifically surrounding problems in neuroimaging reporting). But the issue in scientific journalism and book publishing is so much worse.

Cognitive neuroscience might have its set of problems, but at least when we publish an academic paper, it has undergone peer review, where a few other scientists have carefully read through it, and have had an opportunity to highlight problems. For my current general audience science book, The Ravenous Brain (incidentally due out a month from today), I took it upon myself to ensure that the book went through an informal peer review process, with academic colleagues reading the entire manuscript, to check for errors. I believe for any general audience science book, but especially those written by non-scientists like Jonah Lehrer, the publisher and author should always include academic scientific review as part of the process, to catch the kind of errors that Lehrer repeatedly makes before they turn up in print. Although my main experience is in the book field, the same applies to newspaper and magazine articles about science.

Another issue brought into focus by Jonah Lehrer is that he clearly has a winning formula for writing popular science books, and might well be able to retire on the earnings from his handful of years of communicating science. This is a very rare position in publishing. Science writing for the general audience should definitely be engaging, fascinating, even inspiring, and there’s no doubt that Lehrer has solved this part of the equation. But ideally it should also be substantive, even challenging at times, not hiding the complexities inherent in almost all science, but guiding the reader carefully through them. Can a non-scientist succeed in this second aim? Possibly in rare cases, but undoubtedly it is far easier for a research scientist within the field to capture this aspect of the work. I wish more scientists wrote for a general audience, and definitely wish that more newspapers and magazines engaged scientists on articles of scientific content. In the magazine sphere it isn’t uncommon for scientists to pair with journalists on articles, and I think it would be great if more articles were written with such partnerships. And perhaps this should be a more common model in the science book realm as well.

In this increasingly competitive academic culture, career respect is almost entirely related to academic publication quantity and quality. I would love for at least a gentle cultural shift, where public engagement of science is given more priority for scientists, not just in the odd talk, or afternoon of public experiments once a year, but in actively providing the time and space for scientists to produce general audience science articles or even books.

Finally, I think the bottom line here is trust. Lehrer betrayed the reader’s trust, not just by making up Bob Dylan quotes, but perhaps more importantly by pretending that his scientific descriptions were carefully, rigorously checked and sourced, when they weren’t. And just as vitally, he didn’t think to update his work when mistakes became apparent. But part of the blame also lies with the industry, for failing to create a pressure for accuracy, such as with a pre-publication professional critique. We trust our magazines and publishers to oversee their writers, but that trust isn’t always warranted. With the explosion of blogging, tweeting and so on, I increasingly hope that scientists can keep tabs on such issues, and make them public, as Chabris so ably did with his review of Imagine.

But perhaps you, the reader, also have a role to play, in keeping a hint of doubt always in your mind, perhaps a little more for journalists than for scientists. And with the internet an increasingly interactive place, you often have the power to check facts yourself, badger authors for sources, or query other scientist bloggers with questions and requests for clarification. This way we can all do our bit to raise the quality of scientific writing.

Comments

Great post. I agree that journalist-scientist collaborations are desirable. In addition, I’ve been wondering for a while if popular science writing, like science itself, might not benefit from the involvement of multiple authors. (Even novels could be written collaboratively.) Whether or not a culture of collaboration spreads to these genres, it’s great that the web helps us put our heads together and separate the wheat from the chaff.

From my small experience in popular science writing so far, I think having multiple authors works surprisingly well, particularly with a scientist-journalist pairing. In general, a system where the scientist steers the content and the journalist steers the style and format is particularly productive, I reckon, and I’d love to see it become far more common.

And I totally agree that the informality and interactivity of the web is potentially such a wonderful tool – though it’s also one that’s open to exploitation (for instance, amplifying scientific mistakes).

This is spot on. Sometimes people get carried away with criticising the arbitrariness and inaccuracy of the peer review process, but it does serve this key function of helping trap errors and saving the author later embarrassment. It really could have made a big difference to Lehrer if a neuroscientist had previewed the book.
Pleased to hear you have a book on the way and agree it’s a good idea to get it reviewed by academic colleagues if the publisher isn’t going to do that. On a rather different point, I’d also strongly recommend getting it reviewed by one or two people from the target audience. They aren’t likely to spot errors, but in a book of mine there were numerous places where I thought I was crystal clear and jargon-free but feedback from a couple of students made it clear I was not.

Yes, I definitely solicited critiques from non-neuroscientists too. My brother, wife and mother took the brunt of this work and are probably sick of the sight of the book now, after tirelessly reading through various drafts. But if my mother particularly (with no science background) didn’t understand something, it was rewritten or cut.

David

In his book on Australia, Bill Bryson calls a spider an insect. His book A Short History of Nearly Everything is also littered with inaccuracies and mistakes. Popular writers who are not scientists should possibly check their facts a little more deeply. Of course, science doesn’t really hold with altering details to sell a story, whereas other journalism does. Ergo, whilst the crossover is good for the promotion of science, popular writing of science is never likely to be 100% perfect when its main aim is to sell…

“I want to emphasise that I think Jonah Lehrer is incredibly talented. He has an enviable writing style, and can talk with amazing eloquence…”

He’s a very good public communicator, but if you look at other fields, there are lots of people who are very good public communicators; he only stood out because (brutal honesty time) he was a science writer, where the standards are generally lower. At least historically they have been; they’re getting better today, I think.

Why the standards are lower in science is a whole other question and it’s a crying shame because scientists and science writers generally have more important things to say than, say, film writers or political commentators.

I’m always surprised at the relatively low frequency of science journalists, especially in newspapers, who have a proper science background. Perhaps decades ago this was a workable system, but now the technical requirements to grasp results and then communicate findings to a lay audience must be an increasing challenge.

I think scientific journalism could immediately be improved by assuming that you need at least a PhD for the job, and then having the newspaper/magazine train you to be a journalist, rather than hiring someone for their journalistic background and assuming they can pick up the science as they go along. But as I said, another obvious step would be for journalists to work more with scientists in partnership for features and books.

Thanks for the comments, Ed, Pete and Biochembelle (which I’ll take together as you are making broadly the same point).

First off, many apologies if I caused any offence. If you read the whole paragraph, rather than taking the PhD point alone, you’ll see I also wrote “another obvious step would be for journalists to work more with scientists in partnership for features and books.” Don’t you guys, of whom I’m a great admirer (see my fan-like tweet a while back to David Dobbs, for example), take this route religiously, in an informal way on the whole, talking to scientists extensively to build up your features?

I did want to raise a debate over this, and am happy it’s sparked one, but am also happy to admit I went too far over the PhD point, and am sorry about this. Let me explain twin frustrations which caused me possibly to overstate my position, and then try to expand on it a little, to help clarify it.

First, my comment was mainly aimed (as you’ll see in the text) at some newspaper journalists who write the daily stories, rather than your more elite group, which tends to centre more on features that you hone over weeks. From a scientist’s point of view, it’s very frustrating just how much poor scientific newspaper journalism comes out in the UK (although of course there are great exceptions, particularly in the Guardian), and sometimes (such as with global warming) this is frankly downright dangerous. It’s obvious that much of this is caused by the journalist not understanding the science.

Second, I was also thinking of the comparison between the Jonah Lehrers of this world, who sell hundreds of thousands of book copies, and fellow scientists I personally know, who write general audience science books that carry far more authority and insight, but which hardly anyone knows about. My former 2nd PhD supervisor, John Duncan, is a top-notch scientist who wrote a fascinating book (How Intelligence Happens), which no one knows of, for instance. Another example is Chris Frith’s book, Making up the Mind, which is a wonderful read, but hardly sells compared to Lehrer’s work.

Now onto the clarification. Of course there are some fantastic science writers, like yourself and those you list, who don’t have science backgrounds. But are you the exception rather than the rule? Mo Costandi has lots of research experience, even if he doesn’t quite have a PhD. Michael Brooks has a PhD, as do many people who work for the science magazines. I think, on average, some science background, especially with research experience, is very useful for science journalism and book writing. That way, you just get a feel for how messy experiments can be, for how controls work, even for how the culture works, and for myriad other little details that are difficult to pick up from the outside. This gives you general experience of science, but obviously also greater depth in one or a few fields. To take a technique I use regularly: so much neuroimaging work is reported in newspapers and magazines, yet neuroimaging methods are fiendishly complex, even to those experienced in the technique, and there may be a deep flaw in a prominent paper that requires quite a bit of inside knowledge to notice. Again, I think journalists without a science background can compensate for this through close collaboration with scientists for pieces, but this doesn’t happen as much as it should. I’m not saying you, or any of the group you mention, are in any way lax in collaborating and checking with scientists, but there are some journalists who could improve their reporting through more scientific connections.

In my opinion, having a PhD is absolutely not necessary to be a good science writer, but thinking “like someone with a PhD”, for want of a better phrase, is. And I would say all of the good non-PhD science writers do think like that.

What I mean by this is that most people come out at the end of their PhD with a more realistic appreciation of the strengths and weaknesses of their field than when they started; they have a “nose” for the good and the bad papers, the sensible and the wacky claims.

Doing a PhD doesn’t always give you that but it often does. And there are other ways to get it.

I’ve heard PhD supervisors say that actually the main point of a PhD is to get that sense – the research you do along the way is a bonus, if it works, but if it fails, you’ll at least learn not to make those mistakes again & be able to spot them – which is the “sense” I’m talking about…..

P.S. But the problem with doing a PhD is that it doesn’t teach you how to write; if anything most of them train you to write badly…! Which is why good science writers are so rare. What we need is either more journalists with a sense for science, or more scientists who can write, or both; and actually the two groups would end up resembling each other to a surprising degree 🙂

I agree with Ed, Pete, Neuroskeptic and others that having a PhD isn’t necessary to be a good science journalist – and it’s good to see you clarify your position.

One point you make really well – and which is easily overlooked – is that Jonah Lehrer really shouldn’t have risen to the status he did. There are so many other excellent writers out there who are relatively ignored. Lehrer seems to get a lot of credit for being a superb “communicator”, but frankly, that means precisely nothing to me when the content of his “superb communication” is the product of fraud. We worship communicators at our peril.

Maybe Lehrer rose too far too fast, and I haven’t yet figured out whether this was a cause or consequence of his fraud. I suspect both, as one feeds into the other. But the more I read about his deceptions and lying (and not innocent ones either, often landing other people in shit, like his editor), the less sympathetic I become. Until frankly I’d rather never hear from him again.

As an audience and a critical community I think we really need to beware of creating rock stars, whether they be scientists or science writers.

Chris: he rose because he sold books and magazines and generated pageviews. The question to ask is why didn’t the public care about his inaccuracies, if they were so evident? At the end of the day, journalism is shaped by market forces.

I write about medicine. If you watch the evening news, you’ll see plenty of MDs who over-hype new medical results without warranted skepticism. An advanced degree does not always mean better journalism.

Disagree completely. A lot of scientists, quite frankly, suck at writing. Conversely, there are a lot of people out there who are bloody good science journalists who don’t have a PhD. I’ve got one in psychology; does that qualify me to become a science writer and write stuff about physics or genomics?

I think scientific journalism could immediately be improved by assuming that you need at least a PhD for the job…

Nonsense. If that were the case, an organization would have to hire a new writer for nearly every story it covered. PhD programs do not provide general science training; they provide expert science training – and by expert, I mean a minuscule sliver of a given field. As we move further into PhD training, the focus becomes ever more narrow. We don’t receive a mantle of critical thinking and broad scientific knowledge and comprehension when we are hooded. These elements are dependent on mentoring, which is not exclusive to (and frankly, might not always be found in) PhD training.

Agreed with biochembelle above – the intensely narrow focus of a PhD is probably inappropriate training for a science writer who must sometimes skip from one area to another. I’m always pretty impressed by writers who seem to comment sensibly on, say, genetics studies one day, and fMRI studies the next.

This raises the question though – what (scientific) training would be appropriate for an aspiring science journalist? It would have to be a) fairly general in terms of discipline, but b) include a fair amount of research methods theory, and statistics.

Kat McGowan

Also, having a PhD does not grant immunity from overstepping your knowledge or getting things blindingly wrong. A recent book by two prominent cognitive scientists about the development of the brain turned on an erroneous or misconstrued understanding of the archeological record – if they’d done due diligence, they would’ve figured this out. The key is not a piece of paper granting a degree, but critical thinking, and a mindset which always asks: How do you know that? What is your evidence?

Great post. From a publisher’s perspective, I can see the dilemma. Often, journalists are wonderful storytellers who can express themselves in a much more engaging way than working scientists (although there are some fantastic exceptions, of course). Scientists bring with them an insider’s view – but are often reluctant to express the facts through stories, which are often easier for the general reader to digest. Ironically, I was asked to remove many stories from Rainy Brain Sunny Brain as the editor felt that the bare facts were better from a scientist! (Perhaps she was just being kind and my storytelling just wasn’t very good!) I too am passionate about engaging people with science, and ultimately we need both the “insider’s view” from the laboratory and the storytelling ability of the journalist. I also wish that popular science writing was given more priority for academics – I think a “gentle cultural shift” is indeed underway. Hopefully, this will gain momentum. I think for those of us who are working scientists it is important to hone our storytelling skills as much as possible and make sure we make ourselves available to magazine and newspaper editors when given the opportunity. Well done on a thoughtful piece.

I agree there’s a dilemma, which can be exacerbated by habit. When we write academic papers, these are meant to be very dry and precise, and we are discouraged from too much speculation. This writing habit can make it difficult to transfer to writing for a general audience, where less formality, and more dramatic padding is useful, and perhaps some wider speculation is appropriate. But I think scientists can overcome this with practice, especially with help from editors, journalists and so on. But if they aren’t given the space for that practice as their department only cares for academic papers, then that is an obvious barrier.

Another dilemma that exists, I think, is basically to do with money. Academics, frankly, aren’t particularly interested in money. Most of our writing is analogous to what the publishing industry calls vanity publishing, where the author (or at least his/her department) forks out money to the journal for the academic paper to get published. But money is the main thing that a general audience book publisher cares about, as otherwise it will go bankrupt and cease to exist. Profits come far more easily from the science writers who centre on the stories, and so a publisher can focus on this sellable feature more than on the accuracy of the words. However, I don’t think it’s such a big ask to do a basic fact-checking review for an article or book pre-publication, at least to make sure there aren’t any clangers that end up on the page.

I read large numbers of popular science books for review, and whether they are written by journalists or scientists they almost always contain errors – it’s almost inevitable with something of that length. But the vast majority of the books by journalists are better than those by scientists, which are often awfully written.

Everyone makes errors, I’m sure. But I’d hope and expect scientists to be far less likely to make elementary mistakes of the kind Jonah Lehrer did, and perhaps to make errors about the science less frequently in general.

I agree that some scientists write awful science books. Part of the reason for this, as I mentioned above, is it not being part of their job description, so they have to squeeze writing in during their spare time, and lack the practice.

But would you say that those science books that are classed as classics, and last for many years, tend largely to be written by scientists? (I’m thinking Dawkins, Stephen Hawking, Steven Pinker, and so on).

No, not particularly. Hawking’s book has sold a lot of copies, but isn’t very good. Simon Singh’s books, for example, are much better labelled as classics. I’d say there are five good books by someone like Marcus Chown for every one by Pinker or Dawkins. I’d say it’s more that scientists aren’t professional communicators and many of them never will be.

Hi Brian, thanks for this interesting perspective. I can’t help agreeing with you over labelling Simon Singh’s books as classics – I’m a huge fan of his, and am in awe of his double skills of making science gripping, at the same time as explaining complex ideas with crystal clarity.

Simon Singh has a PhD from Cambridge University, doesn’t he? And Marcus Chown, I think, got partway through a PhD (incidentally, with my uncle, John Beckman). So, as a nod to the debate above, I was wondering if you could think of any science books you’d label as classics which were written by someone who didn’t even have a science degree. Are these far more rare?

Angela Chen

My question is: What concrete advice (other than more partnerships) do you have for aspiring science journalists who don’t have a formal science background? (Ahem, myself.) Would you tell them (me) to simply stay away from the field? This is not a snarky question, but an honest one, because I am so interested in science and science communication, and so afraid of committing errors. It’s not that I think I’m in any danger of becoming lazy and not fact-checking, and I definitely would not do what Lehrer did, but I fear that there’s something fundamentally missing in my view. For example, the Bill Bryson “spider isn’t an insect” example raised above is something that I would have missed. Without a formal scientific background, I simply see things differently and have a different level of precision, and one of the big questions I keep asking myself is whether I can truly, effectively bridge that gap through hard work, or whether I actually do need a PhD to avoid writing reductionist articles filled with lies-of-omission “facts.”

Having challenged the PhD thing, I do want to thank you for pointing out the Brendan Maher incident, which does mark the point in this sorry tale when my attitude turns from “disappointed and sympathetic” to “outright furious”. It’s one thing to deceive people; it’s another to deceive people and to throw your colleagues under the bus for it.

Yep, I’m with you, Ed. I was feeling somewhat sad for Jonah, wondering if the pressure got to be too much, till I read this. Really? That lowest of responses, “My editor did it”? Clearly this was all a lot more deliberate than I’d realized.

One theme in this discussion, not about Lehrer’s clear and serious violations but about science journalism in general, needs a bit more focused attention: just how much specificity is required before things are “accurate”? This is the age-old and unresolvable culture clash between journalism and science regarding accuracy. Daniel, you seem to suggest PhD-level specificity and accuracy. But I would suggest that in writing for the general public, it is not only permissible but necessary to drop some qualifying caveats and small details, and ‘smooth out’ some others, in the name of clarity and narrative flow. And I truly believe this can be done without disservice to overall accuracy. For a few years I wrote a 500-word science column for the Boston Globe, “How and Why”, answering the public’s science questions: “How is plastic made?” “Why do armpits smell different than feet?” It was great fun. I researched each piece and then had scientists, other than those I had interviewed in many cases, review it for accuracy. They corrected glaring mistakes (I made many… glad they were caught) but were fine with the simplifications and omissions the form required.
The first chapter of my book “How Risky Is It, Really?” is about the neuroscience of fear, your field. I think it fails to meet your standard, yet I had it reviewed by a prominent neuroscientist for accuracy and, noting my omissions but understanding the constraints of writing a general audience book, she OK’d the end result.
Egregious errors are unforgivable by any standards, but there needs to be more wiggle room in the definition of accuracy than the high standard you seem to think is the right one.

Thanks for the comment, David. It sounds like you’ve already been doing exactly what I was suggesting in terms of using scientists for fact checking/review.

In terms of how much you drop/smooth out caveats/small details in the name of clarity and narrative flow, I agree that this is a crucial and difficult question. I totally agree that some of this needs to occur in order to communicate the science to a general audience. And level of smoothing over matters more in some fields than others – if you overly smooth a story about how spiders walk up walls, say, there are few consequences. But there are some scientific issues in the newspapers, for instance in the medical sphere, or to do with global warming, where exactly what you report may really matter, and where ignoring certain caveats or details can paint a distorted picture of the science, with serious consequences. Is a newspaper reporter without the relevant scientific background more likely to miss some crucial caveats here, and choose to write an essentially unscientific position, with dangerous consequences? Before anyone jumps on me over this, I’m not claiming this is the case, just genuinely asking!

Yes. But most science articles are written by specialists at this point. Sometimes the specialists mess up details, too. A bigger question: is it more important that they be accurate, or that they don’t take an “unscientific position”? Is it better to convey the gist and mess up the numbers, or to get all the facts right but build them into something that’s not true?

I come from a marketing background and have been interested in the subject of behavioral economics. Jonah Lehrer’s writing, as you rightly mentioned, is engaging. Like many other readers, I feel cheated as well.

My question is more general. Coming from a non-science background, books and articles are the only sources for people like me to gain knowledge about these subjects. So when someone like Jonah Lehrer does what he did, how do we trust anyone else? I guess it has more to do with policy reform when it comes to book publishing: changes need to be made that keep stuff like this in check before the material reaches consumers.

I agree that strategic changes, with more reviews and checks, would help build trust and accuracy. But perhaps it’s somewhat healthy not completely to trust this stuff, and instead to have a chronic sense of doubt and a critical attitude, as scientists themselves foster for their research. And I don’t think a general audience is completely powerless: if a magazine or book fact is unsourced, you could chase the source with the author or beyond. If it is sourced, many academic papers are available, and in many cases a general audience can get the gist or check basic facts. Finally, bug other scientists (so many are now on Twitter or have blogs) with questions, other opinions, or potential corroborations.

Bob R.

Great post, and the Shereshevsky anecdote is quite telling. I’d watched Lehrer’s rise with much skepticism. I’d noticed a lot of oversimplifications and facile thinking in his books, and I kept waiting for some experts to step in, point out the inadequacies and set the public straight. Alas, though there were a handful of really spot-on early reviews, like this one by Daniel Engber (who has a neuroscience degree, I think): http://www.slate.com/articles/health_and_science/science/2007/11/proust_wasnt_a_neuroscientist.html, the vast majority of critics wrote puff pieces. It was clear that when it came to science and anything having to do with the brain, they were uninformed and complete pushovers. I think, as your post suggests, the Lehrer case argues not only for more careful, better-trained science journalists, but also for book reviewers with more subject-matter competence. Had there been more smart and tough-minded critics, this whole thing might have been nipped in the bud sooner.

I’d just like to say, the quality of the comments on this blog is off the charts. Definitely an outlier relative to most of what I’m familiar with on the internet. Props to you Daniel for creating a really good environment.

Peter Prontzos

You write that Lehrer “…seems habitually to adopt a deceptive schoolboy attitude to mistakes when they are revealed to him, rather than maturely owning up to his errors.”
This is a tad ironic, since, in his “How We Decide”, Lehrer makes the case that the most successful people admit their mistakes and learn from them, rather than repeating them over and over again.
I must confess that I was one of those who reviewed his book (favorably) and didn’t catch the errors.

Thanks for the comment, Peter. And thanks for the openness about positively reviewing him.

Yes, his behaviour has been sadly ironic, I agree.

And I also agree that fact-checking/having reviews of your work pre-publication is only part of the equation – admitting your mistakes and correcting them is an important aspect of the job, I’d say, if you are trying to be a beacon of scientific reporting.

Jennifer Kim

I think this was a great article. Thank you for writing it. I enjoyed your portrayal of the facts, as well as your practical suggestions for how Lehrer might rebound in his career and how the field as a whole might improve. Many of the articles I’ve read have had a negative tone about the scandal, without many suggestions for improvement. I enjoy your different take on the situation.

I was actually in Jonah Lehrer’s cognitive psychology class during college and was struck by his dual talent for science and eloquence. It’s a shame that his career went this way, due to his breach of ethics. Hopefully lessons will be learned and the field may be improved with time.

Jonathan Russell

Steven Pinker, to me, is the best example of supremely successful popular science writing. He is genuinely a scientist and believes deeply in hard data and verifiable claims, but he is also able to tell vivid stories that make the science engaging and visceral to the lay reader.

Tomas

Given that you challenged Lehrer’s description of Luria’s account, I hope you don’t mind if I pick up on a point in what you wrote. You write: “This doesn’t detract from Shereshevsky’s exceptional skills, though, since he was tested on this foreign set of lines 15 years after this single reading, and was able perfectly to recall not only every word, but every aspect of stress and pronunciation.”

In Luria’s book, I can find no mention of this being a “single reading”, so he may have had it read to him multiple times (and indeed, given that it’s a published work, S could have studied it for as long as he wanted in his own time). I think you’ve taken this phrase from Lehrer, not Luria!

Thanks for the compliment and the chance for me to clarify this point. I definitely didn’t take the phrase from Lehrer, though, and have quite carefully studied the original source over the years.

See page 45-48 of the book, The Mind of A Mnemonist, for the mention of this section (you might also want to check with a different English translation of the Russian here).

On page 45, it states, “In December 1937, S, who had no knowledge of Italian, was read the first four lines of The Divine Comedy [then the lines are quoted]… As always, S asked that the words in each line be pronounced distinctly, with slight pauses between words – his one requirement for converting meaningless sound combinations into comprehensible images. And, of course, he was able to use his technique and reproduce several stanzas of The Divine Comedy, not only with perfect accuracy of recall, but with the exact stress and pronunciation. Moreover, the test session took place fifteen years after he had memorized these stanzas and, as usual, he was not forewarned about it.”

There follows a couple of pages of description of the strategies Shereshevsky uses in order to memorise the sounds, and then Luria continues, “S could take these lines, which were written in a language he did not understand, and in a matter of minutes compose images that he could ‘read off’.”

In the alternative translation it states “a few minutes”. I think a few minutes of slowly reading a few stanzas is about the right time for a single reading. And the implication in the text, given the context of the wider passage, is overwhelmingly that he just needed a single hearing, rather than the more mundane repeated presentation.

S could have studied it to help him over the years, that’s true. But on page 48, the passage implies that he was both tested at the time and 15 years later. So studying can only explain the later test, which took place 15 years after the fact and which, as you can see from the quote above, he was not forewarned about. So that’s a pretty unlikely explanation, particularly if you take it in the context of the rest of the book.

Alex

I find it a little odd that you post about Lehrer’s transgressions, and then, when someone points out that your own recounting of one of the incidents in question presents as unambiguous fact something that turns out to be supposition, you say he’s off-topic and ask him to stop clogging up your comments section! 🙂

I thought it made sense to try to steer the comments away from the minutiae of Luria’s book, and back to Lehrer and issues with current science writing, to try to keep the discussion constructive (although I can see I’m not succeeding very well at this!).

I really don’t think it’s fair at all to liken Lehrer’s transgressions to what I wrote above about Luria. The Shereshevsky point was not so much that Lehrer made a mistake, but that he lied about the editor making the mistake instead of him, then repeated the mistake in two later blog posts. None of that applies to me!!

But the main Lehrer point of my piece, I guess, was that his books carried a series of elementary scientific mistakes that undergraduates shouldn’t be making, and how we can stop this getting to print in future. Again, that doesn’t apply to what I wrote about Shereshevsky at all. I made no elementary scientific mistakes in talking about Shereshevsky, and if you read Luria’s book and view the entire context you’ll see that it really isn’t that ambiguous, and I gave an accurate portrayal of what Luria wrote. And I’m not quoting someone like Malcolm Gladwell or another non-scientist science book writer, but one of the most eminent psychologists of the 20th century.

The wider issue surrounding this is how much you critique your source, which I definitely think is a point relevant to Lehrer and some of his problems and very much worth discussing. I think there are many factors that modulate how much you critique the sources – for instance, the length of the article (a hundred-word piece leaves little space to be critical, but a whole book gives much more room for this), whether the source is making a side point (as my Shereshevsky description above essentially was!) or is crucial to the main thrust of the piece, the type and trustworthiness of the source, the type of problems with the source (was the flaw trivial or crucial?), the simplicity or complexity of the science or paper, and so on. Getting this balance right, so that you write something both accurate and interesting, is probably one of the key features of a good science writer, I guess.

I’ve been a big fan of your blog, Sweat Science, for quite a while, Alex (and also recently bought your book, which is high on my reading list), and think you are a wonderful example of getting the balance right. You write about a young, complex science that importantly relates to health, so you rightly dwell more on criticisms and caveats than you might have done in some other fields, but I’m sure even you don’t mention every possible caveat or criticism for each blog. Instead, you pick and choose, according to what’s most important and so on.

Tomas

I agree that one possible interpretation of Luria’s account is that S learned the poem after only a single hearing – and indeed, this is the impression given by the presentation and context. However, I don’t think this is explicitly stated in the source text.

I don’t think this is a completely trivial point. Learning something like a poem is hugely more difficult if you have just one hearing compared with, say, two or three. Repetition helps memory.

I would encourage you to be much more skeptical of Luria’s writings. For example, the passage we are discussing seems to imply that S was able to learn a poem and reproduce it in perfect Italian, including “every aspect of stress and pronunciation”, after 15 years, despite not being able to speak the language.

However, a detailed reading of this passage makes this interpretation implausible. The mnemonic descriptions show a basic mistake in pronunciation – he memorised the word “Che” as if it was pronounced with an English “Ch” sound rather than the correct “K” sound (this is pointed out in a translator’s note in the 1968 translation). This is a basic mistake on one of the most common words in the Italian language. It suggests that whoever was reading the poem to S didn’t have even a rudimentary grasp of Italian, and was basically guessing at the pronunciation.

That suggests that rather than memorising perfect Italian, at best S would have been memorising a poem spoken in a halting and incorrect Russian accent. Given that, how could someone 15 years later reliably decide that it was correct in “every aspect of stress and pronunciation?”

Luria is a respected authority, but that respect should only go so far. In particular, Luria appears to demonstrate no understanding of the standard mnemonic techniques that have been used by showmen, magicians and memory performers through the ages. These techniques are described in books available in early 20th Century Russia that S could well have read, especially given that S’s parents owned a bookshop.

Typically, memory performers would use mnemonic techniques, which allow feats such as remembering a poem after a few minutes. They may combine this with a certain amount of showmanship – for example, keeping a diary and revising things they might be asked to remember – and they may give colourful and misleading descriptions of how they do it.

You are absolutely right to be skeptical of popular science writing like Lehrer’s. But this skepticism should also extend to Luria’s work, which is in many respects also the popular science of his day, rather than peer reviewed academic work. It should be treated as such.

I mentioned Shereshevsky primarily to highlight that Lehrer was inaccurately reporting what Luria had written, then Lehrer lied about the mistake, and then repeated it. You commented that in my redescription of what Luria actually said, I had taken my facts from Lehrer. I replied that of course I didn’t do this, and, at length, showed that my description was the most parsimonious interpretation of Luria’s text.

Now you are making the quite different point that we shouldn’t trust what Luria writes. Luria is one of the most respected and insightful psychologists of the 20th century, and this book of his is written about a single person whom he studied for decades. I am cautious about negatively interpreting his work, as some problems may have occurred during translation (hence my linking of an alternative translation above). I also have no problem interpreting the phrase “the exact stress and pronunciation” as meaning perfect in terms of what he actually heard, probably in a thick Russian accent. That doesn’t make Luria’s claims wrong, or Shereshevsky’s memory any less exceptional – it is just a quirk of the test. And I would strongly imagine that the event was recorded on audio tape, so that it could be checked years later if necessary.

It’s also worth noting that Shereshevsky is not an isolated instance of apparently natural exceptional memory (which strategies can enhance further, but which they don’t entirely explain), linked to some abnormal condition, such as synaesthesia or autism. For instance, I have published a descriptive/behavioural paper and an fMRI paper on one modern equivalent, Daniel Tammet, who has synaesthesia and autism, as well as an exceptional memory.

Having said all this, that doesn’t make Luria bulletproof, and of course you’re right that we shouldn’t perfectly believe his work and claims, or those of anyone else, especially in popular science books, and especially when there aren’t proper references to peer-reviewed sources. I made the same point in the last paragraph of this post above, saying readers should be “keeping a hint of doubt always in your mind.”

But, as I said, the main issue isn’t this, but that Lehrer made false claims about what Luria said, which he then didn’t own up to or correct. Therefore, given that the specific case of Luria’s work, written half a century ago, isn’t so pertinent to the issues of science writing today that I wanted to raise via Jonah Lehrer, can I suggest that if you want to continue this discussion, we do it via email?

Tomas

Thanks for clarifying your reasoning behind the “single reading” phrase. In summary, I understand this represents your view of the most parsimonious interpretation of Luria’s writing, according to your reasoning above, and isn’t to be understood as a direct quote from Lehrer or Luria.

And thanks again for being so responsive to questions from the general public in your excellent blog.

Popular science writing is difficult. It is difficult for those with scientific training and for those with journalism training. Only the problems may be different.

I made the conscious decision when I went to college that I would really be a better writer than a scientist. However, I try to maintain close ties with those who chose the “science path” and I took as many basic science courses as I could. I read research papers and then read how writers have interpreted them.

I believe that good science writing lies in the writer’s responsibilities to their reader, to accuracy and to the person(s) whose work or ideas they are representing. In this, Lehrer fell off the bandwagon.

Kenny E. Williams

I couldn’t agree with Kevin more: The insights of those commenting, the style of writing used by Daniel and all those who either agree or disagree with him, and the relevance of this topic in contemporary science articles for laymen make this worthwhile and thoughtful reading. While Jonathan obviously committed a serious breach of journalistic and ethical trust amongst his readership and professional colleagues, my hope is that Jonathan can redeem himself and restore our faith in his work. His talents and insights concerning psychology warrant our forgiveness; my hope is that he does some psychological and relational purging and begins anew. Much can be gained by each of us should he do so…

I am glad that I was able to find your blog and read your thoughts regarding Jonah Lehrer and his most recent book, Imagine.

His ability to weave a story is impeccable. I regularly pick up a book and read to about the 20th page before deciding whether to continue reading. In this instance I quickly finished the book, a testament to his storytelling ability.

I enjoy consuming knowledge in narrative form, as this increases my retention and creates a feedback loop where I am then encouraged to continue to question and search for better narratives to explain my newfound understanding (maybe your new book will give some insight regarding this insatiable desire?). I have read a fair amount of the popular science books surrounding creativity, belief, irrational behavior, neuroplasticity, etc. It was quickly evident to me that he was citing empirical examples which other authors have treated in almost polar opposite ways. It is unfortunate that he may have knowingly committed fraud in order to increase his narrative appeal in the search for greater sales.

The most poignant moment of the facts not matching the relevant scientific theories came with a very casual mention of schizophrenia. Does anyone else remember reading about this? I fail to remember what page it was mentioned on, and whether my memory is correct in my harsh assessment of his wording.

Have you read his book? It would seem that with subjects like mental illness, staying closer to the facts is all the more necessary, given the misunderstandings in the lay press regarding these disorders.