Dear Newspapers: Individual Studies Do Not Exist In A Vacuum

All too frequently, newspapers portray individual studies as the definitive answer on a given topic. This is a problem, because most studies are not the definitive answer on anything. That is why researchers are constantly trying to replicate each other's work.

Just because one study finds a relationship between A and B does not mean that other studies will be able to replicate that finding, or that it will extend to other situations. On the face of it, this seems like an incredibly obvious statement. And yet it’s something that newspapers often forget, and which I think could have some very negative consequences.

As an example, I’d like to draw your attention to the media coverage surrounding a recent article on sedentary behaviour.

The story

Earlier this year, my friend and colleague Valerie Carson published an interesting paper examining the health impact of various types of sedentary behaviour in a sample of 2,500 children and adolescents. The researchers created a clustered risk score (CRS) which took into account a child’s waist circumference, blood pressure, cholesterol, and inflammation, and then examined whether it was associated with three different measures of sedentary behaviour: accelerometry (an objective measure of movement), self-reported TV watching, and self-reported computer use.
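The paper’s exact scoring method isn’t reproduced in this post, but a common way to build a clustered risk score (an assumption on my part, not necessarily the approach Carson’s team used) is to standardize each risk factor into a z-score and sum them, so a child who is high on several factors ends up with a high overall score. A rough sketch:

```python
# Rough sketch of a clustered risk score as a sum of standardized risk factors.
# The variable names and toy numbers below are purely illustrative assumptions,
# not data or methods from the Carson et al. paper.

from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of measurements to mean 0, SD 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def clustered_risk_scores(waist, blood_pressure, cholesterol, inflammation):
    """Sum each child's z-scores across the four risk factors."""
    factors = [z_scores(f) for f in (waist, blood_pressure, cholesterol, inflammation)]
    return [sum(child) for child in zip(*factors)]

# Toy measurements for four hypothetical children
waist = [60.0, 75.0, 68.0, 90.0]   # cm
bp    = [100.0, 115.0, 108.0, 125.0]  # systolic, mmHg
chol  = [3.8, 4.5, 4.1, 5.2]       # mmol/L
crp   = [0.5, 1.2, 0.8, 2.0]       # inflammation marker

crs = clustered_risk_scores(waist, bp, chol, crp)
# The child who is highest on every factor gets the highest score.
print(max(range(len(crs)), key=lambda i: crs[i]))  # index 3
```

A “high CRS” group could then be defined relative to the distribution of these scores, e.g. the top quartile.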

Here is what they found (emphasis mine):

Volume and patterns of sedentary behavior were not predictors of high CRS after adjusting for MVPA and other confounders (P > 0.1). For types of sedentary behavior, high TV use, but not high computer use, was a predictor of high CRS after adjustment for MVPA and other confounders. Children and adolescents who watched ≥4 hours per day of TV were 2.53 (95% confidence interval: 1.45-4.42) times more likely to have high CRS than those who watched <1 hour per day.

The study is an interesting one and has some important implications. Not surprisingly, it has received some well-deserved media attention in the past few weeks. Here is what the Daily Mail had to say:

Watching television is the most damaging activity an inactive child can indulge in, a study has warned.

Exploring the health impact of different types of sedentary behaviour, scientists discovered that high levels of TV viewing were associated with an increased risk of heart disease, compared with other pursuits such as computer use.

It is now hoped that the findings will encourage parents to be more aware of the damaging effects certain activities can have.

Reading those articles and headlines, a person could quite reasonably conclude that TV watching is worse for a kid’s health than using a computer. Case closed, right? Not exactly.

The above study looked at one specific group of children, and a specific group of health outcomes. Last month, our group in Ottawa published another paper (led by Dr Gary Goldfield) looking at different types of sedentary behaviour and heart disease risk factors in a cohort of overweight and obese teens (in contrast, the earlier study was on a nationally representative sample of youth). Interestingly, we found that neither TV time nor computer time was associated with increased risk in this group; in our dataset, it was video games that were by far the most important sedentary behaviour.

Given this new information, if we only care about the results of individual studies, the Daily Mail may want to re-write their headline to read:

~~Watching TV~~ Playing video games most damaging pastime for inactive children, increasing risk of heart disease

But of course that would be ridiculous. We expect studies to disagree; that’s the way science works. You look at slightly different populations, different measures, etc., and suddenly things change. Everyone knows that, and yet it’s not the way that science is typically portrayed by newspapers and other news agencies.

Why is this a problem?

Put yourself in the shoes of someone who just read the Daily Mail article, and who now believes that TV viewing is the single most damaging sedentary behaviour for kids to engage in. What reaction are you going to have when you read a similar article about our new study, suggesting that TV viewing and computer use aren’t important at all, but that video games are actually “the most damaging activity an inactive child can indulge in”?

You would probably be confused – if television was so important last week, how is it so completely unimportant this week?!? You might begin to question why these researchers can’t get their act together and figure out what’s actually going on, rather than making one claim and then following it up with a contradictory one. And then you may tune out from any articles on the topic in the future, since you can’t really trust those researchers to stick with a finding for more than a few weeks anyway. In more controversial areas of research (e.g. cell phones and cancer), this approach of sensationalizing every new study can have a very negative impact on the public discourse.

What is the solution?

The way to solve this problem is not to write a bunch of articles saying that TV viewing is less important than computer use. To be honest, all journalists really need to do is dial back the enthusiasm a bit, rather than painting every study as a GROUNDBREAKING NEW FINDING.

Journalists may also want to shift away from writing about individual studies, and look instead to systematic reviews. This is what researchers and policy-makers are doing already. We know that many published findings turn out to be false (some have argued that most findings are false) and so when we want to know the definitive answer to a question, we look at systematic reviews rather than individual studies.

Trying to understand the health impact of any given behaviour (e.g. sedentary behaviour, physical activity, smoking, etc.) is a bit like trying to make a map of a city by taking thousands of independent pictures using different angles, distances, and resolutions, without knowing how all the pictures link together. Any one picture (or study) tells you relatively little about the city, and some pictures may seem to contradict each other (e.g. one picture may suggest the city is grassland, while another may suggest it is incredibly urban). But if you take enough pictures from enough angles, you start to get a pretty good sense of what the city looks like.

Systematic reviews are an attempt to bring order to that chaos by organizing the pictures, grouping similar pictures together, and placing more weight on the high-quality pictures, while reducing the emphasis on low-quality pictures, or simply throwing them out entirely.
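To make the weighting idea concrete: meta-analyses (the quantitative arm of many systematic reviews) often combine study results using inverse-variance weights, so that large, precise studies pull the pooled estimate much harder than small, noisy ones. The numbers below are made up purely for illustration, not taken from either paper:

```python
# Sketch of a fixed-effect meta-analysis with inverse-variance weights.
# Effect sizes and standard errors are invented for illustration only.

def pooled_estimate(effects, std_errors):
    """Inverse-variance weighted average: precise studies get more weight."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies: one large and precise, two small and noisy.
effects = [0.30, 0.90, -0.20]   # e.g. log odds ratios
ses     = [0.10, 0.40, 0.50]    # standard errors

est, se = pooled_estimate(effects, ses)
print(round(est, 3))  # pulled strongly toward the precise first study
```

Even though the three hypothetical studies “disagree”, the pooled answer is dominated by the most precise one, which is exactly the kind of synthesis a single headline about a single study can’t provide.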

If journalists focused more on systematic reviews rather than individual studies (and there are plenty of systematic reviews coming out these days), they’d be less likely to steer people in the wrong direction, and more likely to be spreading a message that will hold up over the long term.

On the bright side…

Did I mention that my friend Val and I have new papers published? They are both open access so you can read and share them for free (Val’s paper here, mine here). Despite the differing findings, they are both good papers that will hopefully stand the test of time.

And if you have any other suggestions for ways to present data from individual studies to the general public, I’d love to hear them!

About Travis Saunders, PhD, MSc, CEP

Travis Saunders has a PhD in Human Kinetics, and is currently an Assistant Professor in Applied Human Science. His research focuses on the relationship between sedentary time (e.g. sitting) and chronic disease risk in both children and adults. He is also a Certified Exercise Physiologist and (former) competitive distance runner. You can connect with him on Twitter @TravisSaunders.


A note to readers…

The PLOS BLOGS Network is made up of two types of blogs, the six staff-written blogs from PLOS journal editors or departmental teams, at the top of the next column, and PLOS BLOGS Network-hosted independent blogs, listed below them. Independent blogs are not pre-screened or edited by PLOS; as such any views presented are solely those of their authors, and do not necessarily represent views of PLOS. Unless otherwise noted, all posts on active PLOS BLOGS are published under a Creative Commons CC BY 4.0 license, making them available for reuse by anyone, for any purpose, with appropriate attribution. For questions or comments please contact blogs@plos.org