When Good Science Turns Into Bad Reporting

Sometimes you’ll see something that’s easily identified as straight-up bullshit, like celebrities talking about which herbs to take to protect against solar radiation when flying, or just about anything whatsoever from the naturalnews.com Facebook page. (No, we are not linking to that. Do not go to there.) But other times you’ll see articles that are citing actual studies, by actual scientists, and yet if you take the article at face value you’ll come away with completely the wrong impression.

1. Never trust a headline. (Even when it’s not clickbait. Hopefully you already knew not to trust clickbait.)

Not only are article headlines not written by the scientists, most aren't even written by the reporter who wrote the story. At the bare minimum, no matter how provocative a headline is, click through and read the article before you get excited.

2. Beware of spurious correlations.

If there is one thing everyone should learn at some point — in science class, in math class, somewhere — it’s that correlation does not necessarily equal causation. I mean, sometimes it does: cigarette smoking had a strong correlation with lung cancer, and lo and behold, despite decades of denials from the tobacco industry, it turned out to totally cause lung cancer (and heart disease and emphysema and practically every other disease out there other than possibly herpes). However, there are several ways in which this can break down:

You might assume that one thing causes another thing, but actually the causation goes the other way.

Two things might be correlated, but actually there is a third factor that causes both.

Sometimes, things create a vicious circle and cause each other.

There are things that correlate entirely coincidentally. There’s a whole website graphing spurious correlations, like per capita cheese consumption versus the number of people who died by becoming entangled in their bedsheets.
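The hidden-third-factor case is easy to demonstrate with simulated data. Here's a minimal sketch in Python; the variable names and numbers are invented for illustration, with a made-up confounder driving two behaviors that have no direct effect on each other:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hidden third factor: say, "overall health consciousness"
z = rng.standard_normal(n)

# Two behaviors, each partly driven by z, but not by each other
coffee = z + rng.standard_normal(n)
exercise = z + rng.standard_normal(n)

# The raw correlation looks impressive...
r = np.corrcoef(coffee, exercise)[0, 1]
print(f"raw correlation: {r:.2f}")  # roughly 0.5

# ...but once we control for z (here we can subtract its
# contribution exactly, because we built the data ourselves),
# the correlation vanishes
coffee_resid = coffee - z
exercise_resid = exercise - z
r_partial = np.corrcoef(coffee_resid, exercise_resid)[0, 1]
print(f"after controlling for z: {r_partial:.2f}")  # near 0
```

Real studies can't subtract the confounder exactly like this; they have to measure it and regress it out, which is why "what did they control for?" is such an important question.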

If you look up the actual study (which you should, if you’re intrigued by an article about new research), you will often find that the researchers will not only acknowledge the possibility of one or more of these, they’ll suggest ways in which this could have happened so that follow-up studies can try to control for it. This usually gets glossed right over in articles written for laypeople. And to be fair, sometimes this is totally the fault of the scientists, who release an excited press release or give an excited interview that glosses over the obvious alternative explanation.

3. We totally cured this thing! (In mice.)

There are so many things we have cured in rodents, guys. So many. We cured mice of Type 1 diabetes in the 1970s. We’ve cured Alzheimer’s in mice. If you’re a rat, we can vaccinate you against tooth decay.
When it’s a seriously new process, like the stem cell treatment for diabetes that was tested in mice and announced last October, that’s exciting news, but it doesn’t mean “hurray, this disease is about to be cast into the historical dustbin with smallpox and bubonic plague!”

4. Science doesn’t happen overnight.

If something’s in “Phase 1 clinical trials,” that at least means it’s made it out of animal research and into human subjects. But Phase 1 isn’t even to test whether it works on people; it’s just to see if it’s safe and to establish dosing. (If you swallow 100 mg of this drug, how much of it actually winds up in your bloodstream?) Sometimes you’ll run another Phase 1 trial to see, okay, what if we have everyone fast for an hour before they take the drug, then how much goes into the bloodstream? This can go on for years. And we haven’t even gotten to the part where we see if it does anything useful.

In Phase 2, they test it on a smallish group (typically 100 to 300 patients) to see if it’s at all effective. This means they have to recruit patients, give them the treatment, and see if it works. With some conditions, you can at least tell within a few days if it did the thing you’re hoping it does, but with others, it can take weeks. Or months. Or years.

Phase 3 is like Phase 2, but with more people. Once you’ve finished Phase 3, you can submit your new treatment to the FDA for approval (and start “scaling up,” i.e., figuring out how to produce large batches of whatever it is you’re making, also not an overnight process). Even when you are truly motivated and well funded, this is a slow process.

A new treatment for a serious illness being in any sort of clinical trials is news. But not “totally going to be cured tomorrow, hurray!” news.

5. Let’s take this scientific discovery to its logical conclusion even if it’s a ten-mile hike over rough territory.

The more marginal the news source, the more likely you are to see this. Maybe someone discovers that alcoholics who are heavy coffee drinkers have a lower risk of developing cirrhosis of the liver. If you see this reported as, “attention binge drinkers: better mix your whiskey with coffee!” then you are seeing this at work. Or, “doctor’s prescription for a healthy liver: java!” Pretty much any article that’s about Foods You Should Always/Never Eat (that was not spun out of the purest unadulterated woo) is going to be an example of “dragged to someone’s logical conclusion no matter how long it took.” Moderate alcohol consumption raises the immune response of monkeys to vaccines? CLEARLY IT CURES COLDS.

6. Failure to define terms.

One of my favorite things to see in any news article about a medical study is where they’ll tell you that “moderate” consumption of (coffee, alcohol, stevia-sweetened lemon curd on rye toast, whatever) will do something (beneficial/harmful) but never tell you anywhere in the article what “moderate” consumption is. (Or “heavy” consumption, and ditto.)
As a general rule, most people have a very convenient definition for “moderate” consumption of something they consume: “the amount that I consume.” If you go to the study, you’ll find the terms defined, so check. Never make assumptions about this stuff. For some people, you’re a moderate coffee drinker if you consume only one 12-cup pot of the stuff per day.

7. Researchers with an agenda.

Not all research funded by industry groups is going to be bad research. But you should certainly treat it with skepticism and care. More importantly, you should be aware that some research was probably funded by an industry group even if the article doesn’t say so.

8. If it sounds too good to be true…

Science always makes a better story if it’s something you totally want to believe. And actually, it can go either way — the story could run with the conventional wisdom, or dramatically against it. The problem is that legit but very minor findings can get massively overblown because they make for a good story.

To defend yourself against being taken for a ride by bad science reporting, take a look at the actual study if you can, or the abstract if you can’t, and see if it actually says what the article claims it says. (Google Scholar often has PDFs of research papers.) Look to see how study participants were recruited and selected, what the researchers controlled for, whether they considered reverse causation, and who funded the study. Check the credentials of the people quoted. Every now and then you’ll discover something truly startling, like that their PhD is in French Literature rather than Biochemistry, or that they don’t actually exist. The whole “not existing” part is usually a tipoff that bad science reporting shenanigans are afoot.