In the UK, there is no more famous scourge of bad science journalism than Ben Goldacre, author of the Guardian’s well-named Bad Science column. In last week’s column, Goldacre published a critique of an inaccuracy-laden piece in the Observer, penned by health correspondent Denis Campbell. This triggered a sequence of ripostes including an opinion piece from the Independent’s health editor Jeremy Laurance criticising Goldacre, a response from Goldacre criticising Laurance, and a defence of Laurance from Fiona Fox of the Science Media Centre.

I have already commented on Laurance’s frankly appalling view of what journalism is, and I will leave that aside for now. Both he and Fox essentially argue that a critical overview of science journalism is necessary but both advocate a softly-softly approach that doesn’t get under anyone’s skin too much.

Laurance said, “While raging rightly at the scientific illiteracy of the media, [Goldacre] might reflect when naming young, eager reporters starting out on their careers that most don’t enjoy, as he does, the luxury of time.” Fox chimed in with “Ben was well within his rights to do his weekly column on the weaknesses in the Observer report on Omega 3 but he would not have prompted this backlash if he had done it in a different style”, and elsewhere, “I think it’s about the tone of Ben’s particular brand of critique.”

I will summarise these arguments: we like watchdogs, but we’d prefer it if they had no bite.

Both mention the difficult, high-pressure environment of the modern newsroom, which Fox refers to as “mitigating circumstances”. I disagree, but there is certainly a grain of truth here about the life of a journalist. I have argued before that critics of journalism would do well to better understand such day-to-day routines, filled as they are with deadlines, editor-wrangling, diktats about what stories to cover, and many people to interview. In Flat Earth News, Nick Davies derides the culture of intense pressure for more stories in less time with less fact-checking, while simultaneously empathising with young journalists who are ground down by it.

You can understand why people who work in that environment might get a little narked with critics, especially when certain subtleties of the profession are commonly missed (hint: the journalist didn’t write the headline). This isn’t helped by the typically ferocious nature of internet criticism. It’s easy to rain vitriol on a name on a webpage over a wrong headline or a dodgy stat, while forgetting that behind the name is a real person with a real livelihood. So I empathise with science journalists who feel that their backs are up against the wall, or who feel that they are sometimes criticised unfairly.

But none of this means that people shouldn’t be criticised if they screw up or that during such criticisms, they should be given an easy ride.

The high-pressure nature of the job merely explains some of the mistakes that are made – it doesn’t excuse them. At the most basic level, as an employee of a workplace, you are contracted to do a job, with all the stress and pressure that entails. If you can’t cope with that and fulfil your obligations, then you’re in the wrong job. This is particularly important in science and health journalism, because the costs of error can be very substantial.

Then there’s the old canard that the critics have it easy. Laurance accuses Goldacre of having “the luxury of time” while Fox contrasts the day of a “jobbing journalist” to the “luxury of a columnist like Ben who gets to lay bare the flaws in those stories once a week”. That’s absolute rubbish. I can’t speak for Ben but it’s worth noting that his column is written on top of his activities as a full-time doctor. I can, however, speak for myself. In the upcoming week, I will be writing six lengthy news pieces for this blog and a 1,500-word feature for the Times, in my spare time outside my day job. In fact, writing this piece is eating into that time. The critic’s schedule is no less hectic and indeed part of the reason that bad science journalism is such an irritation is that correcting it soaks up time!

During my day job, I have to answer enquiries from people who have been misled by an inaccurate headline. I respond to sensationalist coverage and provide a more measured take on things. I also provide some of the quotes that work into those news pieces, often dropping all my other work to meet a reporter’s deadline – furiously reading the relevant paper (if it’s provided; otherwise, hunting it down), second-guessing the angle of the story, drafting a response, and getting it signed off. That high-pressure news environment makes my office a high-pressure one too.

And really, regardless of how intense the schedule of a journalist is, that defence really starts falling apart when you consider that many people cope with it admirably. You’ll note that some reporters hardly ever seem to draw the wrath of critical bloggers. Why? Is it because they’re part of some secret club? Do they know where the off switch is? No – it’s because they’re simply better at what they do. They’re more careful. They do their homework. They check their facts. And most importantly of all (because we’re all human), if they do make mistakes, they take it on the chin and engage with their critics (check out that last link for my own personal fiasco and its swift resolution).

Earlier yesterday, Petra Boynton asked for resources to help journalists avoid making common mistakes and I answered that the only things people really need are humility, a willingness to learn, and time. We’ve talked about time already; the other two are just as important because they ensure that if you make mistakes, you’ll make them only once, and that you maintain accountability and professionalism.

It’s the lack of such accountability that fuels much of the frustration with bad science journalism. In fact, those who repeatedly do the worst job have a habit of not holding themselves to account. Goldacre’s attempts to track the source of the article that started all of this were protracted and difficult. The article in question has since disappeared from the Guardian website with no correction or explanation, even though the Code of Conduct from the National Union of Journalists calls for journalists to do “her/his utmost to correct harmful inaccuracies”. Instead, we get one piece in another national newspaper and one blog post criticising Goldacre for his tone.

This is not the type of reaction that instils confidence in a softly-softly approach.

The bottom line is that if people like Laurance and Fox feel that the “self-appointed critics” of science journalism are being too harsh, there must be some evidence that a more cordial tone would actually yield dividends (after all, scientists like evidence). To my knowledge, those data are sorely lacking.

Of course, most of this piece has been focused on bad science journalism and we must be careful to avoid confirmation bias. As I’ve argued repeatedly, there is plenty of good science journalism out there that often gets lost amid the venom triggered by the worst exemplars. I’m currently judging the ABSW Science Writer Awards and it’s a joy; there is no shortage of truly excellent science journalism of the sort that takes specialist skills (and a lot of time) to go out and find.

The critic who thinks that all journalists are rubbish is a straw man, but we could certainly do more to collectively highlight good science journalism (in this, I actually agree with Fox). It would serve to show the world what the craft actually looks like when done well, and it would hopefully encourage the best practitioners, who might otherwise think that their entire profession is being condemned despite their high-quality efforts. This will contribute towards raising overall standards just as much as debunking the worst articles. Social media is excellent for this and there is clearly a culture developing on Twitter where science journalists who do excellent work get praised for it. That can only be encouraged.

But in the meantime, the watchdogs are still needed. Their bites and barks may be unpleasant, but so are the consequences of the errors that draw their attention. In the end, the best way to avoid such criticisms is to give people as little as possible to criticise.


36 thoughts on “Are science journalists being overly criticised?”

As a journalist myself, I feel I must take issue with the “then you’re in the wrong job” line. Let me first grant that there may indeed be people who are not suited to the stress inherent in writing anything involving a deadline. But I’m afraid that’s not really the issue here. The issue seems to be that fewer and fewer media outlets are willing to invest the time and money that would be strictly necessary to produce quality science journalism—or any journalism, for that matter.

I am not in the wrong job if I recognize that I’m prevented from doing my job properly. What is needed, I think, is much more pressure on the editors to provide journalists with working conditions that actually allow them to do an adequate job. And, of course, many more journalists should protest against more or less arbitrary deadlines and pay cuts. A good place to start, incidentally, would be with the (usually equally arbitrary) rewriting of headlines by copy-editors, who, more often than not, it sometimes seems, manage to completely misrepresent the story. No journalist should be made to accept a rewrite without at least being consulted.

I’m glad these issues are getting aired. Journalists may not like Goldacre, but my God is he needed. Three reasons why normally mild-mannered scientists get hot under the collar about this:
a) Scientists themselves have to be obsessively accurate about what they do; no result, however sensational, is worth having unless it is true
b) Lack of trust in journalists is very corrosive. Most scientists I know are reluctant to talk to journalists about their work, on the grounds that it will be simplified at best and reported erroneously at worst. This has the bad effect that it leaves the field clear for the self-promoters and charlatans to monopolise the media.
c) People believe what they read in newspapers, and act on it, MMR being the most striking recent example. I suspect fish oil sales increased after the Observer piece.
Part of the problem is that editors seem to think that anyone can do science reporting, and one hears tales of people with no science background being drafted in to write science stories. That may seem to make sense, since many of the people they are writing for also have no science background. The problem is that the journalist isn’t just required to speak the same language as the readership: as the conduit from scientist to readership, he/she also needs to speak the same language as the scientist and have the skills to understand and evaluate basic aspects of a piece of scientific work. I’m not talking about doing a high-level critique of the work; just enough scientific savvy to distinguish between fact and opinion, and to pick up on major flaws.
Finally, re Ben Goldacre’s style. He recognises that mockery is far more powerful than attack. If he just got cross every week, or gave a painstaking critique of errors in an article, I doubt he’d be much read. The fact that he is witty means that people read him; however, he never makes fun of someone just for the sake of it – he always explains *why* he has taken the line he has, and anyone who has read Bad Science will have had an excellent education in how to evaluate scientific claims.

Ah, fair point Peter. I definitely recognise that there are systemic problems in journalism that flummox even the best reporters, so that para could probably have ended differently. Thanks for the comment

Look, can’t you science journos do both? Satisfy your charming “professionalism” needs for “accuracy” and “precision” AND produce the stuff we really need: the pieces about cancer that will lead to a high clickthrough rate? For that matter, they don’t even HAVE to be about cancer! I hear there’s a doctor out in California who’s developed a new diet; where’s our piece about that?! Diet… woww… i like sushiiii… yummy

A key point Ed makes is that SOME science journalists avoid this sort of sub-standard stuff, partly because they are more knowledgeable and careful, and partly – it seems to me – because they engage with a wider community that often includes actual experts who can advise them/put them right. Instead, people like Laurance, Campbell and Fox seem to want to sit schtum in their privileged mainstream media silos, talking only to one another, whilst expecting all the rest of us to trust them.

As Ed also alludes, the better science writers front up when they get it wrong, rather than disappearing and/or weaselling and/or blustering. In professions like science and medicine, being able to recognise, acknowledge and learn from your mistakes is important, largely because it helps you not to repeat them.

Great post, Ed. I have always found it ironic that journalists, who demand the right to scrutinise others, are often so allergic to having their own practices examined.

That said, criticism is only really helpful (to the journalist and the industry) if it is constructive. As you point out, more needs to be done to highlight good science journalism. If newspaper money men could see that the market values quality, they may be more inclined to invest in it, and thus slow the sad decline of UK journalism.

I also worry that scientists (who were rarely fans of the media in the first place) are developing an overly-pessimistic view of journalists. According to an international survey published in Science in 2008 (paywall: http://www.sciencemag.org/cgi/content/short/321/5886/204), the majority of scientists surveyed were satisfied with the way the media had portrayed their work. Of course, this is no cause for complacency, but it raises the question of whether the publicity that surrounds bad journalism is overshadowing good writing and eroding scientists’ confidence in all journalists.

Science stands to benefit from the scrutiny and questioning that good science journalism brings, which is why I think scientists need to appreciate and support it (as well as criticising it). But they also need to understand that science PR is not the job of a journalist, and that scientists can’t control what journalists write. As the authors of the Science paper ask in a follow-up analysis (free access: http://scx.sagepub.com/cgi/reprint/30/2/266 ), if all scientists are completely happy with what journalists produce, are journalists doing their jobs properly?

I’m wondering to what extent this is a science-specific problem. Most economists can’t stand economics reporting either (e.g., Brad DeLong’s Washington Post Death Spiral Watch). I know tax attorneys who cringe every time there’s an article about taxes.

Related to the points other commenters have made about editors not providing enough time for background, to what extent is this a problem of lack of expertise by particular reporters (or even high-level dilettantism)? In my anecdotal read of ‘bad’ science reporting, it’s often perpetrated by reporters with no background in the area they’ve been asked to cover.

“if all scientists are completely happy with what journalists produce, are journalists doing their jobs properly?”
—–
Exactly. As a science writer, I don’t expect scientists to always be happy with my work. Nor do I care. Scientists can be at least as closed-minded as any other professional cadre. They don’t “trust” journalists. Neither do doctors, lawyers or politicians. Understandable. Journalists are in a different line of work; one that is commonly misunderstood by narrowly-specialized professionals who don’t deal in mass communication.

I think Mike is right there; there is opportunity for woeful misreporting in all sections of the paper (my big bugbear is human rights court cases in which the wrong end of the stick is firmly grasped, and often waved at immigrants), and in some ways the mainstream media is the reason that the specialist press even exists. It is true that technical issues may not be fully covered by general (to a section of the paper) journalists, and sometimes even they need to talk to an expert.

At the same time, in what way does that excuse anyone? “My job is hard” is a weak sort of defence. I think if you’re going to hold yourself out as an authority on the news then being reasonably accurate is the least of it.

The first three things I was taught in science media studies as an undergrad:

1) The science press pack (including press officers) are all mates and love to congratulate and support each other. It has its advantages, but problems too – so keep an eye out and do some of the badly needed critiquing for them. (Seriously, and years before twitter meant we could see it with our own eyes 🙂 )

2) Leave the emotional hand-wringing and cries over the “death of science journalism” at the door; we need to take an evidence-based rather than normative approach to these issues, and we should keep debate reasonably calm. Science, journalism, science journalism and the broader science communication context are all very important, and therefore need to be treated with respect and thought, not knee-jerk rants.

3) On that evidence base – don’t assume that everyone simply “believes what they read in newspapers”. The evidence base says “ish” to that one (see that Cardiff report I tweeted last week, or just about any piece of empirical media studies of the last 20 years). This is worth thinking about if you feel the need to complain about the impact of bad science reporting on lay audiences – if you are really that worried about science in public, maybe there are other places to focus your attention?

Personally, I don’t think we should expect our science journalists to be gods. That’s not to say we shouldn’t want them to be great, just that everyone slips up every now and again. That doesn’t mean we should write them off. There’s an awful lot of hand-wringing, finger-pointing and personal attacks that get flung around in these debates. I really can’t see how a tone of damnation is productive.

Maybe, rather than biting watchdogs, we need to be “critical friends” to each other, and ourselves – journalists, bloggers, scientists, audiences, academics and the rest. I’d like to keep this debate diverse; I worry a bit about singular gatekeepers or organisations that are supposed to solve all the problems for us. Science reporting will never be perfect; let’s all work in continual supportive critique to make it better.

Claire – great point about the effect on newspaper money men. Regarding that survey in Science, I actually analysed it way back when and while I think the basic conclusion is sound, there are lots of caveats to bear in mind. Meanwhile, I fully agree with you (and to a certain extent with Dirk later) that the job of the science journalist includes a watchdog element that is about scrutinising the claims of scientists rather than simply reporting them. It’s an approach that I tried to bring to my discussion of last week’s acupuncture study, for instance. Of course, as far as I know, most of the critical ‘skeptical’ bloggers aren’t suggesting that journalists need to be friendly with scientists or act as cheerleaders for science – as in the case of the acupuncture study, most were hoping that journalists would be more critical of the scientists behind that particular paper. This is why I was particularly shocked at Laurance’s comments, which are clearly advocating stenography over critical analysis.

Mike – I’m sure it’s not a science-specific problem at all. And I strongly suspect that the practice of drafting in people from other departments to cover science stories is a contributing factor (but far from the only one) towards crap reporting. The story that started off this particular debate comes from Denis Campbell, who joined the Observer’s health beat after a lengthy tenure at its sports desk.

Alice – “Science, journalism, science journalism and the broader science communication context are all very important, and therefore need to be treated with respect and thought, not knee-jerk rants.” Well said. Knee-jerkism doesn’t help anyone and I agree that we need to remember that people are human. But I hark back to my point about taking things on the chin – it would be gloriously unfair to expect people to never make mistakes, but really quite reasonable to expect them to learn from them.

For most of the good journalists I know, making a mistake in the form of a demonstrable, non-trivial error of fact in an article/post is the worst kind of mistake there is–and, from time to time, an inevitable one.

I really, really hate being wrong in print. And yes, it is utterly reasonable to expect a writer to learn something from such episodes.

I made a comment over on the layscience blog to the effect that I think you and other commentators have been quite generous by focussing on “inaccuracies” in science reporting. I think there’s also quite a lot of wilful exaggeration in science coverage… and a lack of balance in terms of which stories are covered and a lack of understanding of the amount of evidence that is required before conclusions can be drawn and policy decisions taken.

So, for instance, irrespective of the malpractice that subsequently came to light, the original Wakefield/MMR paper – a case series on 12 patients (if I remember right) – did NOT warrant the level of coverage or the level of scaremongering. I don’t really accept that this was just about pressure for deadlines OR a role of reporters as messengers. The story, based on the Lancet paper, ran and ran and ran, whereas few outlets covered the much more substantial evidence that contradicted Wakefield’s findings.

We’ve also got a big problem with media outlets covering, almost always uncritically, PR-based “surveys” – every week a new story breaks, and the copy from different outlets is almost identical, suggesting that the story is cut and paste straight from a press release.

Matt – indeed, those problems are very real and big ones. And they’re not about inaccuracy/tight deadlines. But they can be about editor-wrangling as mentioned above. Journalists are sometimes told what to cover and what angle to take, and it takes a lot of time and energy to actually tell their boss that they won’t do it – in this respect, us freelancers/hobbyists have it much easier. We write what we want to write. Of course, the problems can also stem from basic incompetence, but just bear in mind that there’s an extra confounding factor to take into account.

Again, I go back to the main argument of this post. Critics would do well to bear in mind the day-to-day life of the people they criticise, not to excuse bad journalism, but to make their critiques more watertight and carefully directed. Journalists would do well to remember that such factors in no way excuse bad journalism and that they need to hold themselves and their profession to account when things slip.

In a way, this entire post has been an argument against generalisation and confirmation bias – both in criticising journalism and in defending it.

Ed, fair enough and I guess journalists are probably more “visible” for criticism than editors, since their name appears above every article. And I wouldn’t be surprised if it turned out that editors are happy to hang their writers out to dry when things go wrong (e.g., pulling Campbell’s article rather than actually writing a retraction/apology/explanation).

I just rail against the newspaper/network as a whole, though that’s probably unfair too!

The report of the “just funded” study will almost certainly also have started from a press release, either by the funder or the University. It is commonplace for the Univ PR Dept to put them out. The study PI then gets invited on to (e.g.) local radio and is asked “What are you going to find out?”

The problem here is the potential for one of those “systems failures” – the PR Dept have already “potted” the thing into a press release (which usually means talking it up), so the pressured journo already has a ready-made story. All it takes is a slightly unwary / carried away PI saying the odd silly thing and – kaboom – you have a rolling bandwagon. I think this is most dangerous in social science / policy / psychology areas where the stories have a lot of eye-catching potential (like the one Matt linked).

I also agree with Matt about the routine exaggeration, or “making it look like a story”. Of course, this is also dangerous because it is synergistic with both the “talking up” already inherent in press releases, AND the dangers of journalists not trying hard enough (for whatever reason) to get behind the press release.

In the case of the MMR debacle, what the journalists all reported was the Royal Free press conference and the statements Andrew Wakefield made there. They did not dig deeper. An obvious example was that in the VERY SAME ISSUE of the Lancet was an insightful commentary on the Wakefield paper, written by a couple of wary experts. This commentary identified very presciently all the ways (at least, all the ways short of deliberate fakery) in which such small case series were typically likely to be flawed. Some discussion and a link is here. The commentary was not written in super-technical language, and the caveats and warnings were very clear. I wonder if any journo covering the story – apart from Brian Deer much later – ever read it?

I’ve added my own thoughts on my blog (see link on my name). Written before I saw the comments here, to my chagrin…

But one point I made could perhaps be added. (He says tentatively.) Do people here think that MSM articles where the journalists attempt to write “on their own”, rather than seek the advice of experts in the relevant area, are part of the problem? It’s a theme I’ve found myself commenting on a couple of times now. (I realise this runs into some of the time/editing/boss-asks-for-slant issues that Ed raises.)

Very good post Ed — as ever, I think you capture the nuances of this issue rather well. My own thoughts are broadly similar. I can understand why people like Jeremy get frustrated with this sort of criticism, but I think he was shooting at the wrong target in this case, and picked a bad case to defend.

There really isn’t any excuse for shoddy reporting that completely misrepresents research in this way. This article, which I read at the time, seemed to add 2 and 2 to get 5, and deserved to be called out. I think science reporters who take accuracy seriously accept and welcome such scrutiny. It actually strengthens our own hands, by imposing a cost on sensationalist exaggeration.

There’s a wider point, though, about *why* Jeremy and others (Steve Connor had a similar go at Ben G last year) sometimes react this way to criticism. I think it’s born of frustration, which I often share, at the way science reporters are often judged by the worst of our number, without ready acknowledgment that there is in reality a spectrum of quality in science journalism. Those of us who do take accuracy and perspective seriously (and I would certainly include both Jeremy and Steve among their number) do find it annoying to be lumped in with shoddy reporting that we ourselves deplore.

As I often say when I talk about this subject, it’s usually a mistake to talk about “the media” or “the mainstream media” or “science reporting” as if it were some monolithic whole. There is a great diversity of standards — often within the same organisation. Critics of bad reporting do sometimes acknowledge this, but usually in response to challenge — it is rarely volunteered in advance. Ben’s “media MMR hoax”, for example, makes a nice catchphrase that captures a certain truth, but it ignores the role of journalists like Nigel Hawkes, Sarah Boseley and, yes, Jeremy Laurance, who were critical of Wakefield from the outset.

The deadlines and editorial pressures issues are separate ones, and I have some, but only some, sympathy with Jeremy and Fiona’s arguments. As Peter Beattie points out in the first comment, we’re not always the sole arbiters of what we get to write — most journalists have had instances in their careers when they were pressed to file something they’d rather not, or had insufficient time before deadline to check everything they’d like to. It’s easy to say just refuse, but the reality is that that can often be difficult.

Just to be clear, I am not saying this is always a valid excuse, but on occasion it can be. I do see my role as advising my editors on what we should not be doing, as well as what we should be doing. But there are occasions when opinions differ. Few journalists are in the position of bloggers where they can write only what they choose to.

There’s also the issue that with limited time available, the occasional cut corner can free up time to work on genuinely original or fact-checked journalism. If you have a press release from an institution you trust, and a paper to check it against, is it really a great crime to use parcelled quotes that explain the research accurately, if it saves you a couple of hours? Is it very different from using material gathered from a press conference or conference call, to which everyone has access?

It’s often a question of priorities. Those hours spent chasing first-hand interviews that might ultimately add little to a diary story can be spent on original work instead. I’m as critical of churnalism as anyone, if that’s all you do. But good and reliable PR can, on occasion, actually be enabling of good journalism, by freeing up time to make it possible.

I suspect the real lesson of Jeremy’s piece is never to file when you’re angry. I’m not sure he actually believes that journalism should be stenography — his certainly isn’t. Sleep on it first!

Well said, Mark. There are plenty of good nuggets within your reply to Ed’s post (which is itself excellent), but the one which resonated most with me was the overwhelmingly negative light in which science writers/journalists are often cast. Criticizing a bad or credulous story comes easier than complimenting someone who has written something good or done a respectable job at balancing accessibility with accuracy. (Part of the whole “Someone is wrong on the internet” phenomenon, I think – http://xkcd.com/386/) Since I started off as a blogger first, I have to admit that it still feels more natural to say “This report is shit – I’m going to blog this!” than to link over to something good and say “Hey, this is a really awesome story. Check it out.” (Although linking to cool stories/posts has become a large part of my Twitter experience. Weekly linkfests on my blog also provide an opportunity for this.)

The odd part of all this is that – thanks to encouragement and advice from people such as Ed and yourself – I am now publishing stories through newspapers and magazines. Portions of my work have become a part of the nebulous thing often referred to as “The Media” which we all love to complain about. My personal goal is to generate the kind of change I want to see – I would rather push to change things than just whine about how this or that science report sucks – but I have to wonder if I, too, will eventually wind up receiving complaints about how I was not accurate enough, oversimplified, etc. That just comes with the territory, I suppose, but now that I have begun to contribute to the more traditional aspect of science writing I have gained a greater appreciation for what reporters, journalists, etc. go through. This does not excuse bad reporting – we should keep our standards high – but I think we would do well to praise people who are doing a good job and not present science writers as a monolithic group of PR monkeys. I am not optimistic about this comment driving much change, but, for my own part, I want to do a better job of interacting with the disparate aspects of science communication which are trying to carve out new niches during this somewhat chaotic time.

As a journalist, it’s not just professional pride that makes me irritated with broad brush statements about “the media” and its failings. If we want to try and fix these problems, we first need an accurate picture of what is going wrong. Generalisations don’t make for insightful diagnoses. Worse, they could lose us the trust of our readers, the very people we need to value, support, and ultimately pay for good-quality journalism.

re: Dr Aust’s question: Yes, I (and many of my colleagues) did indeed read the Lancet commentary when we were writing stories debunking the “link” between MMR and autism nearly 10 years ago. I haven’t written any since, mainly because there are only so many ways you can say the same thing.

Totally agree with Claire’s point, and Brian is surely right about the “someone is wrong on the internet” phenomenon.

Just to clarify again, I’m not saying that journalists should be so precious that we need to be patted on the head every time we do something acceptable. Or that we should be thin-skinned to criticism — I’ve often found that constructive criticism of my mistakes has made me less likely to make them in the future. But it does no harm (indeed, it may well be much more effective) to contrast the bad with the good, rather than always to seek out the worst in isolation.

Mark – Agreed. I wasn’t suggesting that journalists need to be constantly “patted on the head” for reassurance. Speaking only for myself, I was simply recognizing my habit of jumping on bad reporting (“Someone said ‘missing link’? Who?!”) and the need to balance that out.

A follow up is: why didn’t those stories at the time debunking (or at least heavily caveat-ing) the MMR-autism link “take”? Was it just they got less play than the “Tragic MMR parents: we were betrayed” stuff? Did they only run in the semi-specialist or specialist science press (New Scientist, Nature etc) while the stuff that did get all the play was in the mainstream print media and all penned by non-science writers?

A lot of Ben Goldacre’s thesis re. MMR has always been that the problem was that it got framed early on as “Govt conspires and covers up”. It was then covered mostly by news or by “social issues” writers who found that a plausible way to see it and lacked the science background to go beyond that, or re-think later in the light of more and more counter-evidence. It was certainly my subjective impression (having looked on as a member of the public) that what one got on the broadcast media was endless “We accuse MMR!” narrative-of-tragedy figures like the JABS folk, plus squirming Govt ministers being asked about cover-ups by news presenters.

An obvious scientific analogy is that in academic science, if one ends up writing/talking about something one doesn’t know enough about to feel “secure”, one goes and finds a “man/woman who can”, i.e. a colleague who knows more. I have often wondered if the print people who churned out all the “Brave Maverick Doctor” rubbish about Wakefield ever actually asked their more science-literate specialist journalist colleagues, “Is there anything in this?” Or was it one of those scenarios where the line (tragedy + cover-up) was set from the start by editorial or by the “this way it makes a story” imperative?

As Mark alludes to, I wonder what exactly Science Editors or science specialist writers do/should do when their paper is hanging on a runaway bandwagon like the MMR thing, perhaps pushed along by the News or Families/Features Dept?

DrAust, a quick comment on MMR. As I recall it (and I only took on science in 2000 so missed a lot of it), it actually took a couple of years after the 1998 paper for the scare really to take off. The initial coverage in many papers wasn’t that bad, and Science/Health editors had some success in telling editors that this wasn’t good science and thus didn’t really deserve any more than a little sceptical coverage.

What really changed the game was Blair, and his prolonged vacillating over whether to say Leo had been vaccinated — not least as Cherie is just the sort of person people imagine might be anti-MMR. At the time, I had a lot of sympathy with his non-disclosure stance, because of the precedent it might set. But in retrospect, I think they gave the story legs.

@Dr Aust. Great question. I’m no social scientist, so what follows is mere personal opinion.

One of the issues seems to have been the issue of false “balance” in the MMR reporting (see the link to the Cardiff report in Alice’s first comment). That is, to be “fair” in their reporting, some journalists gave equal space to both sides of the argument. This left readers with the impression that both arguments were equally valid, when they were not.

As to why the debunking didn’t “take”–I wish I knew. It may be because merely informing people of a fact does not mean you will convince them of its truth.

Re: Leo Blair–I think the Blairs were damned if they did and damned if they didn’t. I remember watching John Gummer feeding his little girl a burger at the height of the BSE scare to show that “British beef is safe”. I’m not sure that it worked http://news.bbc.co.uk/1/hi/uk/369625.stm

Folks, to begin with, I am absolutely thrilled at the high quality of the comments in this discussion. It’s really nice to see people going beyond the typical “Media sucks/rocks” angles.

Mark and Claire – I absolutely agree, and have argued before, that critics of science journalism often suffer from major confirmation bias, and sweeping statements only serve to isolate people who are (a) doing a good job and (b) really ought to be allies. It’s the same reaction that some of us get when bloggers are lumped into one homogenous category, and I’m constantly surprised at the ability of people to generalise about others when they would abhor the same treatment of themselves. The point about organisations is well-taken too. It used to irritate me no end to hear people complain about ScienceBlogs as if it were a single entity, but then I would criticise the coverage of an entire paper like the Telegraph. (Well, the Telegraph do sort of deserve it more than most, but I’ll try to refrain out of sympathy for Tom Chivers!)

Brian – you and I are totally on the same page about praising those who do a good job. We’ve both done well on the back of such support and I know we both feel the need to repay it in kind. Your point about the benefits of your own media experience reminds me that most of the people I know who have the most realistic attitudes about science communication/journalism/blogging/whatever are those who have experienced many of these activities and can empathise with a wide range of people. Which goes back to Alice’s excellent point about diversity.

I have to say this (post+comments) is the most interesting debate of these issues I’ve read in a long time!

Picking up on Ed’s mention of Alice’s point on diversity, I’d like to thank Mark Henderson for (in comment #20) saying press releases aren’t all bad. I work as a press officer, and I’d much rather a journalist write an in-depth article on the paper I highlight in a press release than have him/her copy and paste my text. Having said that, I agree with Mark that if a press release is good it can be a useful tool for journalists and even free up time for them to produce other stories. Of course, this is only true if the press release is good – but I’d say that if an institution issues a bad press release about a particular piece of scientific research, both the press officer and the scientist are to blame, as no self-respecting institution I know of puts their scientists’ name on a press release without clearing that release with the scientist first.

Anyway, back to the matter at hand: I’ve only been on the job for under a year, but my experience so far is that convincing scientists to take the time to engage with journalists and even (dare I say it?) bloggers is often complicated, because of a misunderstanding of the real-life workings of a newsroom and of the roles of scientist and journalist (the difficulty in accepting that their status as ‘experts’ doesn’t mean they have the final say on a news story about their work, as Claire pointed out). Unfortunately, this gets exacerbated by bad science reporting – partly, I guess, because (as has been pointed out) bad science stories generate a much bigger fuss, and effectively become *the* examples of science journalism that people remember, and hold up as evidence.

So I have a request/challenge: anyone care to share some examples of *good* science reporting that scientists will recognise/remember, and which I can bring up next time one of ‘my’ scientists starts telling me why it’s not worth bothering?

I think that’s the easiest challenge I’ve seen in a long while. I share examples of good science reporting virtually every day on Twitter. At the end of every week, I collate some of those examples into link posts. Many other people I know do the same. I like to think that I contribute some good science reporting on a weekly basis and many other blogs do excellent journalism – click on Brian’s name in the comments above for an example. Some institutions are havens for good science reporting including the Times, the Guardian, Wired, Nature News etc. I would venture that in any of these sources, you will find a far higher proportion of good science reporting than bad.

Criticism of reporters is easy and fun, and usually deserved, but let’s not let the scientists off the hook. Often they don’t understand the wider implications of their results any better than the reporters do. I’ve spent some time trying to understand why this is. My take is that they are attuned to controversies entirely local to their subfield, and even limited to a competition for grants against their chief rival. While such conflict is supposed to make for a good story, it can look a lot to regular people like the problems of Star-bellied Sneetches.

For example, the recent story about the stork-like lifestyles of azhdarchids offered reporters an opportunity to lead readers to imagine an exotic scene in the late Cretaceous, with long-beaked giraffe-sized creatures stalking the high grass, perhaps exploding aloft after snatching up a baby tyrannosaur. Researchers, though, were concerned with putting to bed long-promoted notions that they were aquatic skim-feeders, or that they were absurdly scrawny, or were obliged to leap off cliffs to get airborne. These are important within the field, but a reporter’s audience benefits more from context in which the new result may be more or less a footnote, providing motivation for the story more than substance for it.

Right, my two cents. Personally, I, like Ed, am much more interested in celebrating the good rather than criticising the shit. But sometimes the latter is necessary. I don’t buy the whole deadline pressure argument: filing stories that are ill-researched and poorly understood is the sign of a bad journalist. Most of Ben G’s columns are more Bad Journalism than Bad Science. And let’s not forget that most of us manage to avoid the rapier by not getting stuff wrong.

So, as I mentioned on Twitter a couple of weeks back, I gave a lecture at UCL recently about science and the media, which ended up focussing on issues of bad science/bad journalism (specifically MMR and the Climategate events). One student made a suggestion which I think is extraordinarily interesting (Marc Warner, full credit, it’s entirely his idea, and we are looking at developing it into an article). Journalists (specifically science journalists in newspapers) could operate under a bond system, where they stake personal wealth on the current veracity of the story they are responsible for. If challenged, the journalist/editor would have to demonstrate that to a reasonable extent the story published represented the known facts at the time as verified independently. Thus, in retrospect, every stated link between MMR and autism is punished financially. Every regurgitated PR from a holiday company would be subject to scrutiny and potential punishment. More trivially, the hack who wrote that life had been discovered on Titan last week would be fined. Every demonstrable error is punished. This way, journalists are incentivised to do their research more thoroughly, and disincentivised from aiming for the biggest impact whilst damning the consequences.

Needless to say, there are a berzillion issues with this as an idea, it being fairly radical (Marc derived it from Warren Buffett’s and others’ similar plans for incentivisation in markets). Alice Bell alerted me to the fact that Simon Singh suggested this last year to City SciComm students, but I am unaware of it being published anywhere. [Happy to be wrong.] The main issue would be “who arbitrates?” I don’t have a good answer to this.

I’m not advocating this proposed system, merely exploring it as an idea. I’d love to hear your thoughts.

Ed, I was by no means implying there are not many examples of good science reporting out there – much the contrary. But what I was looking for was a ‘big’ good story, one that made as much of a splash as the bad ones that everyone seems to remember… a story or topic that was widely covered well in the media and that scientists would recognize without me having to send them a link (which, realistically, they’re unlikely to read, I’m afraid…).

What wasn’t made available at the time of publication was the analysis method used to draw the conclusions, which, to be fair, featured phrases like “huge disparity”. So Ben decided to hurriedly post a link to the data accompanied by a very, very quick linear analysis of the same data from which no such conclusions could be drawn. Others were, quite rightly, encouraged to perform their own, more complicated analyses to see what conclusions they could draw.

What got to me was the way Ben went about it, given an earlier Twitter discussion about public vs. private criticism with regard to science communication shows (and the fact that one of the authors, @GozdeZorlu, is a friend – something Goldacre quickly picked up on, of course!). It was rushed, he hadn’t asked for (or at least waited for) the authors’ response to questions about how the analysis was done, and it was very, very public. After all, Ben Goldacre is a well-respected figure in matters of bad science journalism, and a lot of people will listen to phrases like “I think this is a bit wrong” without necessarily reading the caveats in the rest of the post. Seeing “something wrong on the internet”, I leapt into action and called him on it, declaring that he was “publicly calling bullshit” on the story. He rejected this, a Twitter exchange followed, and I commented on his post:

Which basically went along the lines of, “Given that you didn’t wait for a response, they weren’t fair game and your title is a bit harsh”. To be fair, he has since changed the title of the post to “I think this Guardian story might be a bit wrong” (the URL is still the same though), but this is quite a shift in emphasis. I would have preferred something a little less inflammatory, like “What can _you_ see in the data?” (which is a lot more Feynman-esque), but hey, you can’t have everything. What’s a shame, as @EvidenceMatters has eloquently pointed out, is that debate over the analysis has overshadowed the more important aspect of the story – i.e. the huge difficulties associated with obtaining good data, which we’re all calling for politicians to use when making policy.

I suspect that it’s this sort of thing that gets up people’s noses, makes them wary of criticism, and might put them off making their data available (which would be a shame). Bad Science Journalism is bad, but Bad Science Journalism Journalism is just as bad. Bad.

Adam: interesting idea. Instead of having a single arbitrator, however, (too much risk of vested interests sticking their oars in) I’d suggest letting the market do the rewarding/ punishing. If readers perceive value and quality in a newspaper’s journalism, they pay for it, if they don’t, they take their money elsewhere. To a certain extent, I think this is happening already. The growth of social media and widgets that let readers rank stories could surely help this to develop. Outlets that are consistently highly rated will attract more paying readers. Proprietors start caring more about reputation and accuracy.

As with any democratic model, this does of course have problems. One has to hope that there are enough readers out there who value science/ accuracy over slebrity goss/ pseudoscience/UFO abduction stories.

Perhaps the answer to this lies in the way we educate our kids. We seem to spend a lot of time teaching them how to be good consumers of material goods. If we put more effort into teaching them to be good consumers of information (Carl Sagan’s “Baloney Detection Kit” is a great place to start) we’d have a market force to be reckoned with.

Gary Schwitzer’s HealthNewsReview.org is another way to critique science journalism, at least the parts of it that involve health and medicine. The site uses rigorous, transparent, and validated criteria to rate stories, more than 1,000 to date: http://www.healthnewsreview.org/

There are similar sites in a number of countries, and Gary has published his findings, closing a nice peer-review loop for journalism criticism.

In the interests of full disclosure, Gary is a friend and former colleague on the Association of Health Care Journalists board of directors. The AHCJ, by the way, is another great resource for reporters who want to improve their skills: http://www.healthjournalism.org

Below I’m providing my affiliations for disclosure purposes only. This post reflects my personal opinion, not necessarily that of any of these organizations.

Ivan Oransky, MD
Executive Editor, Reuters Health
Adjunct Assistant Professor, New York University’s Science, Health, and Environmental Reporting Program
Treasurer, Association of Health Care Journalists
Clinical Assistant Professor of Medicine, New York University School of Medicine
Blogger, Embargo Watch http://embargowatch.wordpress.com (a blog independent of Reuters that does not necessarily reflect its views)
http://twitter.com/ivanoransky
