Publication Bias: Enemy of the e-Patient

As a student of Public Health, I’ve had the opportunity to examine our research system critically, along with the way our nation’s academics pursue knowledge and discovery. Through my years as an undergraduate student and researcher, and now as a graduate student and researcher, I’ve seen examples of great research papers and examples of papers that quickly find their way to the fireplace. It’s probably fair to say that every paper published in a journal makes some sort of contribution, yet many of them contribute the wrong kind of knowledge, frequently misleading the industry and readers alike. Remember that paper you read yesterday about coffee and cancer? Read another one today that contradicts it.

All of these papers have one thing in common: the presence of bias. To make my Epidemiology professor proud, I give you the definition of bias:

Bias refers to any systematic error in the design, conduct, or analysis of a study that results in a mistaken estimate of an exposure’s effect on the risk of disease.

To put it plainly, bias distorts what we believe is true. Hypothetically, in a research study analyzing the effect of chewing gum on oral health, any introduction of bias has the potential to produce results that aren’t really there, leading us to draw inaccurate conclusions. There are many different types of bias (so many that I probably couldn’t adequately enumerate them all), yet from an e-Patient’s perspective, the worst of them might just be publication bias. This is the tendency of journals to preferentially publish articles that report positive, statistically significant effects of some factor on an outcome, often misleading readers about the true state of the evidence.

A quick example of publication bias comes from this article by researchers at the University of East Anglia. In it, they present results from a meta-analysis (a study that combines and contrasts results from multiple studies), in this case covering 40 studies in total. The findings: across the 40 studies, publications were 2.78 times more likely (pooled odds ratio; 95% CI: 2.10 to 3.69) to report positive findings than negative findings. Among studies submitted to regulatory authorities, published studies were 5 times more likely (95% CI: 2.01 to 12.45) to have reported positive findings. This means that I, as a reader, am more likely to pick up an article reporting that some factor significantly affects some outcome. This doesn’t just affect me as a student and patient; it affects my doctors, my professors, and government and industry regulators.
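To make those numbers a little more concrete, here is a minimal sketch of how an odds ratio and its 95% confidence interval are computed from a 2×2 table of counts. The counts below are hypothetical, not the meta-analysis data, and a real pooled odds ratio would add a weighting step across studies (e.g. Mantel–Haenszel), which is omitted here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         a = positive-finding studies, published
         b = positive-finding studies, unpublished
         c = negative-finding studies, published
         d = negative-finding studies, unpublished
    """
    odds_ratio = (a * d) / (b * c)
    # Standard error of the log odds ratio
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(odds_ratio) - z * se_log)
    hi = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(80, 20, 40, 30)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f} to {hi:.2f})")
# → OR = 3.00 (95% CI: 1.52 to 5.93)
```

A confidence interval that excludes 1 (as both intervals in the meta-analysis do) is what makes the finding “statistically significant.”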

The funny thing is that even the article cited above contains some bias of its own. Notice that it reports a positive, significant finding of publication bias, potentially contributing to more publication bias…HA! Even the post I’m writing has the potential to introduce “publication bias” (although this is not a publication), since I’m searching for examples that prove my point. Here’s an example that disproves my assumption.

There are, however, lessons to be learned from all of this! As we read different studies in different journals, it’s important to understand that publication bias, along with other relevant biases, will always be present, and we must take any publication with a grain of salt. At the same time, we shouldn’t simply dismiss findings out of fear that the authors may have had a different agenda.

One extremely important number to remember is a journal’s impact factor, generally used to describe the relative importance of a journal to society and the research community. Roughly speaking, it measures how often the journal’s recent articles are cited: the higher the impact factor, the more often the journal’s articles are cited on average. We must, however, realize that the impact factor describes the journal as a whole, not any single publication in it. Here’s a good resource for finding the impact factor of whatever journals you are currently reading.
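The standard calculation behind that number is simple: a journal’s impact factor for year Y is the number of citations received in Y to articles it published in the two preceding years, divided by the number of citable items it published in those two years. A tiny sketch, with made-up figures:

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Impact factor for year Y: citations received in Y to articles
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2024 to its 2022-2023 articles,
# of which there were 400 citable items
print(impact_factor(1200, 400))  # → 3.0
```

Note that because it is an average over hundreds of articles, a handful of highly cited papers can inflate it; it tells you nothing about the quality of the specific article in front of you.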

So the next time you read a journal article, remember to think critically and draw your own conclusions! Just because a researcher draws one conclusion doesn’t mean that they are 100% correct! Have examples of publication bias or other biases in journal articles? Share them in the comments!