There are a few ways to improve health reporting. One is to do as some science commentators do: lump all journalists together in a totally “unscientific” way, sniping and sneering to their pals on the blogosphere.

Or you can explain how to cover medical research by offering advice on how to approach a story that your editor wants (even if you don’t), that other media outlets have featured, or that generally seems to be generating a bit of a furore.

This is the tack taken by Medicine in the Media, a National Institutes of Health-funded workshop for mainly US journalists held at Dartmouth College this month.

I spoke about the BMJ’s investigations at this Ivy League college in New England that, with its fraternity dorms and white clapboard houses, looks like an American movie set.

Set up by the rather witty Barry Kramer of the National Cancer Institute, the workshop aims to help journalists unpick the constant slew of information released by journals, advertisers, medical societies, and other interest groups.

It’s not always an easy job, and it’s one journalists don’t always get right, as Gary Schwitzer, a former CNN medical news reporter, documents on his website, Health News Review. The site has a predefined set of ten measures by which health stories are judged and rated. The criteria include how benefits and harms are approached; whether the story grasps the quality of the evidence; whether costs are mentioned at all; and whether any conflicts of interest are covered.

All very good in theory, but how do you report on research papers? And how do you interpret confidence intervals, p values, and relative and absolute risks? By giving examples of what’s gone wrong in the past and working through a series of research papers (of varying quality), the husband-and-wife duo Steve Woloshin and Lisa Schwartz, both professors of medicine, described ways of laying out statistics in a comprehensible fashion.
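As a rough illustration of the sort of thing being unpicked, here is a minimal sketch, using entirely invented trial counts and a made-up helper rather than anything from the workshop, of how one result reads as a relative risk, a 95% confidence interval, and a p value (Python is used here simply for the arithmetic):

```python
# A minimal sketch with invented counts (not workshop material): the same
# trial result expressed as a relative risk, a 95% confidence interval,
# and a two-sided p value, using the usual normal approximation on the
# log scale.
import math

def relative_risk_summary(events_a, total_a, events_b, total_b):
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of log(RR) for two independent binomial proportions
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    z = abs(math.log(rr)) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided p value
    return rr, lower, upper, p

# 30 events among 1000 treated people versus 60 among 1000 controls (made up)
rr, lower, upper, p = relative_risk_summary(30, 1000, 60, 1000)
print(f"Relative risk {rr:.2f} (95% CI {lower:.2f} to {upper:.2f}), p = {p:.4f}")
# The confidence interval shows the range of effects the data are compatible
# with; the p value on its own says nothing about how big the effect is.
```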

From past examples of news articles and broadcasts, it’s clear that academics, doctors, and medical organisations are culpable of promoting poor science. Overblown quotes from scientists pepper articles and broadcasts. Their exaggeration of research or an intervention might be because they have a particular intellectual or financial conflict of interest (or because they simply like the sound of their own voice). So rather than constantly lambasting journalists, science commentators might like to turn their attention to their own. How about a “hyperbolic scientist watch”? Or a “misleading quote monitor”? And unless these quotes are science fiction, the scientific community should take some responsibility for the messages it broadcasts.

It’s also evident that journalists over-rely on the main medical journals for their stories; they’re trusted resources. But journals too get it wrong by publishing some highly dubious papers, and journalists were taken through several of these step by step, with pitfalls to look out for and notes of caution. Journalists are urged to use absolute rather than relative risks, and reporters are castigated for highlighting the relative risk to make a greater impact. But it seems that it might only be partially their fault. Woloshin and Schwartz pointed out that the absolute risk is often absent from the initial paper, which may also fail to put the research into context or detail any associated harms. They urged journalists to go back to the researchers and ask for further information, particularly when the absolute risks are missing.
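To make the relative versus absolute distinction concrete, here is a small sketch with invented numbers, not taken from any of the papers discussed:

```python
# Invented numbers purely for illustration: the same "50% risk reduction"
# expressed in relative and in absolute terms.
baseline_risk = 2 / 1000   # assumed risk of the event without the intervention
treated_risk = 1 / 1000    # assumed risk with the intervention

relative_risk_reduction = (baseline_risk - treated_risk) / baseline_risk
absolute_risk_reduction = baseline_risk - treated_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")  # 0.10%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # 1000
# "Halves the risk" sounds dramatic; "one fewer event per 1000 people treated"
# describes the same result and is what readers need to weigh against harms.
```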

Some journalists at the workshop complained that if journal editors, with their armies of peer reviewers, statisticians, and experts on hand, can’t spot dodgy science, how are reporters expected to do so? A good point: simply castigating journalists for getting it wrong misses a bigger issue. Attention needs to be turned to those who really do hold the cards, and maybe reporters should hold journals to account.

The same applies to guidelines. Who should journalists, or indeed doctors, trust? Examples were given of contradictory advice offered up by different organisations on vascular screening, and journalists were told what to look out for: multidisciplinary committees, transparency about members (including conflicts of interest), and selection of evidence (systematic or not).

But the issue that really made heads spin was screening. When one life might be saved by screening, how can you not implement the intervention? And how do you convince editors that it’s not irresponsible to cover screening tests with a critical eye? Journalists were introduced to concepts such as 5 year survival rates versus mortality rates, the population versus the individual, and benefits versus harms. They were told to consider who is advocating a screening test and what their conflicts of interest might be: are they a private company, a hospital cashing in, or a government organisation (which may well have its own political agenda)?
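One way to see why a rising 5 year survival rate need not mean fewer deaths is a toy lead-time example; the ages and the little helper below are invented purely for illustration:

```python
# Toy illustration (invented ages) of lead-time bias: screening moves the
# date of diagnosis earlier, so 5-year survival looks better even if the
# person still dies at exactly the same age and mortality is unchanged.
AGE_AT_DEATH = 70

def five_year_survival(age_at_diagnosis):
    """1.0 if still alive five years after diagnosis, otherwise 0.0."""
    return 1.0 if AGE_AT_DEATH - age_at_diagnosis >= 5 else 0.0

without_screening = five_year_survival(age_at_diagnosis=67)  # found from symptoms
with_screening = five_year_survival(age_at_diagnosis=63)     # found earlier by a test

print(f"5-year survival without screening: {without_screening:.0%}")  # 0%
print(f"5-year survival with screening:    {with_screening:.0%}")     # 100%
# Death still occurs at 70 either way: mortality has not changed, which is
# why mortality rates, not survival rates, show whether screening saves lives.
```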

Deborah Cohen, investigations editor, BMJ. I received NIH funding to speak at the event.

Nice piece, Deb. You're right that misleading information can come from scientists, papers, and so on. It often doesn't start with the journalist, something that tends to get overlooked in these discussions. But it should end with the journalist, right? There shouldn't need to be a specific “hyperbolic scientist watch” or “misleading quote monitor”. That's just a long way of saying “journalism”. All that stuff should be in the regular paper-of-the-day stories.

To me, one of the main barriers to better health reporting is the fact that many people feel their job is simply to report and write down what other people are saying, rather than to analyse it for accuracy, context and so on. It's the flawed idea that journalists are meant to be passive and impartial observers, rather than, say, the final filter that stops misinformation from reaching the public.

Also, while I love that conferences like these happen, I always think that the people responsible for the worst health reporting are probably not the ones attending the meeting. Good to boost the skills of those who are eager to learn, of course, but an important question when it comes to improving health reporting is how to deal with the bottom quartile.

Anna Sayburn

Some good points here, Deb. We try to steer clear of the pitfalls with our Best Health news feed, http://besthealth.bmj.com/x/se…. I would add one plea to journal and academic press officers. Please send out the original research paper with the press release! We do actually read the research before we write the story, and often waste time chasing that up with press officers.

Deb Cohen

Ed, I totally agree with you. Journalists should be able to separate the facts from the hyperbole, and that is the essence of journalism.

But I worry that sometimes we go for the journalists and let the medical societies, academics, scientists, journals and doctors off the hook. I feel sometimes they occupy the moral high ground and would never admit they’re speaking in the role of a PR.

They are generally trusted to speak the truth – and in many cases rightly so. However, with trust comes responsibility. I think there’s a need to be critical of the information churned out by such people, and science bloggers are in the perfect position to do this. They’re comparative experts who can unpick research or statements. But sometimes they are all too happy to criticise the journalist rather than go after the original source of the misinformation. There’s also a tendency to go for pseudoscientists, when some of the really problematic science is coming from the establishment itself.

Clearly, journalists need to be able to figure out whose opinion they give particular weight to when selecting quotes. In particularly fraught areas of health this can be tricky. Anecdotes are always going to be part of news reporting, particularly on television. My view is nothing beats a straight-talking epidemiologist or statistician on speed dial.

Given your former life as a press officer, I’d be interested to hear what questions you wished journalists asked on a routine basis. And, a slightly trickier question, whether you felt you could point out any weaknesses in research produced by your organisation (if indeed there were any) when asked?

michael power

I agree with the previous commentators: you have written a nice piece.