It is easy to forget that science works through a constant process in which researchers replicate and revisit older studies. Assumptions and conclusions are debated, experiments are replicated, sometimes in a different population, sometimes with a slightly different dosage or methodology, and our knowledge grows step by step. This may seem cumbersome, but it is necessary (the inevitable mistakes get corrected over time), and it is actually one of the greatest strengths of the scientific process.

This process, however, can also produce studies whose results and conclusions contradict each other. In the omega-3 world, examples can be found in the ongoing research on related questions such as:

Do EPA and DHA reduce the risk of chronic diseases and health events, like cardiovascular disease and strokes?

How big is this protective effect, and who benefits more and who less from it?

What is the best way to achieve this protection?

These are important questions whose answers have big implications for public health policy and our personal health choices. They have been researched for decades, and considerable positive evidence has accumulated. But it is hardly surprising that some study results appear to contradict this otherwise positive body of knowledge. Every new study and article needs to be evaluated carefully, over time, and interpreted in the context of all prior evidence.

Unfortunately, we live in a world in which we have grown accustomed to a 24-hour news cycle, and in which we expect to receive information in fewer than 140 characters. Science does not lend itself easily to that style of communication.

In the last few years we have seen several neutral articles on omega-3s receive an inordinate amount of press coverage, touted as “the final word” on whether EPA and DHA protect us against disease. This is not how science works! A new study does not simply replace everything that came before. Science journalists need to find ways to communicate the results of ongoing research to a wider audience, and this often requires a great deal of simplification. However, reporting a scientific issue as settled on the basis of a single cherry-picked article is inadequate and, in the words of the quote that starts this entry, simpler than possible.

Enter the Grey and Bolland Article

In most cases, this kind of press treatment is short-lived, and after an initial flurry of attention, the scientific article in question is more or less forgotten by the media. One notable (and to me, extremely perplexing) exception is the research letter published by Drs. Andrew Grey and Mark Bolland in the March 2014 issue of JAMA Internal Medicine.

Their purpose was to try to attract attention to the fact that reports in the popular press do not always accurately reflect existing scientific knowledge. To illustrate this, they compared the prevailing sentiment of press articles about fish oil studies with the conclusions of randomized controlled trials and meta-analyses published between 2005 and 2012 in seven well-respected scientific journals. Their conclusion was that while the majority of articles published in that small sample of journals failed to find a protective effect, the sentiment of press articles was overwhelmingly positive.

This approach is flawed, and their conclusions are at odds with the existing body of scientific research. Unsurprisingly, their article was largely ignored by the scientific community. But this is where things become really strange. The authors concluded that fish oil supplementation is ineffective for any condition, and the popular press took this statement at face value. Some newspapers, including (disappointingly) the New York Times, cited this article as the final word on the question of fish oil supplementation. Television documentaries, including (again, disappointingly) Frontline, interviewed one of the authors and lionized them, describing them as the top experts in the field, and used this article to conclude that fish oil supplementation is ineffective. This is in spite of the fact that it is a rather poor article whose conclusions are based on a shallow and biased review of a slice of the existing scientific literature.

There are a couple of reasons why I believe the article and its conclusions are flawed, and therefore the attention is undeserved:

1. The overwhelming majority of articles published about interventional trials on the protective effects of EPA and DHA are positive.

2. Basing one’s conclusions on the evidence presented in only seven journals is a bad idea, independently of the reputation of those journals.

1. The overwhelming majority of clinical trials are positive.

It is important to observe that, to date, almost 31,000 scientific articles on EPA and DHA have been published, and the number of articles published per year has been increasing rapidly (Figure 1). This puts EPA and DHA among the most studied compounds in the scientific literature, and is not consistent with the idea that they have no health effects.

Figure 1: Number of articles on EPA and DHA published by year

Let’s take this a step further and restrict the discussion to articles that report only on interventional trials involving EPA and DHA. This excludes the rich existing literature on epidemiological studies, the extensive research using animal models, and all we know about the biochemical role of these compounds. But reviewing interventional trials is the closest approximation to what Grey and Bolland attempted.

GOED is currently working on identifying all articles in Pubmed about trials conducted in humans and involving an intervention that includes EPA or DHA. This is an ongoing project, but so far we have identified all such articles published from 2006 to 2015.

In that period, there were a total of 1863 articles on interventional trials, of which 1516 (81.4%) found a positive result in their primary study outcomes, based on a review of their abstracts. The percentage of positive results is very stable, and does not change significantly from year to year (Figure 2). As seen in Figure 1, the number of such articles has increased steadily from year to year. This is not the pattern to be expected from compounds without a health effect (as claimed in Grey and Bolland’s research letter).
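As a quick sanity check on the arithmetic, the positive rate and a simple confidence interval can be computed from the counts reported above (the normal-approximation 95% interval is my addition, for illustration only):

```python
import math

# Counts from the text: 1863 interventional-trial articles (2006-2015),
# of which 1516 reported a positive result in their primary outcomes.
positive, total = 1516, 1863

rate = positive / total                    # observed positive rate
se = math.sqrt(rate * (1 - rate) / total)  # standard error of a proportion
ci = (rate - 1.96 * se, rate + 1.96 * se)  # normal-approximation 95% CI

print(f"positive rate: {rate:.1%}")        # 81.4%
print(f"95% CI: {ci[0]:.1%} to {ci[1]:.1%}")
```

With a sample this large, the interval is narrow (roughly 80% to 83%), which is consistent with the observation that the yearly percentages barely move.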

Grey and Bolland evaluated the editorial disposition of articles about fish oils published in the popular press, using a scale from 1 (clearly unfavorable) to 5 (clearly favorable), and found a median score of 4. On a 1-to-5 scale, a score of 4 sits three-quarters of the way up ((4 − 1)/(5 − 1) = 75%), which is remarkably close to the percentage of positive scientific articles. While it is hard to compare those two numbers directly, it is certainly clear that Grey and Bolland’s conclusion that the editorial disposition of the popular press is too favorable relative to the existing scientific evidence is incorrect. And so, certainly, is the assertion that fish oils have no protective health effects.

2. Basing one’s conclusions on the evidence presented in only seven journals is a bad idea

Going through years of Pubmed entries, identifying all interventional trials and determining whether their conclusions are positive is a lot of work. Grey and Bolland instead restricted their analysis of the literature to only seven journals. Their argument is that because these journals have the highest impact factors in the field of internal medicine, the results published in their pages must be representative of the entire body of research. This is an ingenious idea, and a very efficient way to evaluate the volume and sentiment of the press response that scientific articles receive. After all, these are precisely the journals that are most likely to get media attention. But as a method to review the effectiveness of any intervention, this approach is flawed, for three simple reasons:

1. It provides a very partial view of all interventional studies in the existing literature.

2. Relying on the impact factor of a journal can be misleading.

3. The journals selected are not representative of all journals that have published this research.

As mentioned above, researchers have published a total of 1863 articles on interventional trials in the last 10 years, 1516 of them positive. These articles appeared in 740 different journals. Grey and Bolland review 24 articles, which include 18 interventional studies (about 1% of all interventional trials), published in seven journals (less than 1% of the journals).

Relying on the impact factor (IF) of a journal as the ultimate measure of the credibility and completeness of the articles it publishes can be misleading. Of the seven journals reviewed in the research letter, the one with the highest IF is JAMA, whose 2014 IF was 35.289. This is a measure of how frequently JAMA articles are cited: on average, each article JAMA published in the preceding two years was cited about 35 times in 2014. The impact factor of JAMA Internal Medicine, the journal that published Grey and Bolland’s letter, is around 15.

The problem is that this is an average for the entire journal. Not every article is cited an equal number of times, so the fact that an article appeared in a high-impact journal doesn’t mean it will attract a lot of attention: many articles are cited only a handful of times, or never at all.
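This point about averages can be made concrete with a small sketch. The citation counts below are hypothetical, chosen only to show how a couple of heavily cited papers can inflate a journal-level average far above what a typical article receives:

```python
import statistics

# Hypothetical citation counts for 10 articles in one journal (not real data):
# two highly cited papers pull the average up for everyone else.
citations = [0, 1, 1, 2, 2, 3, 4, 5, 60, 120]

mean = statistics.mean(citations)      # journal-level average, like an IF
median = statistics.median(citations)  # what a "typical" article actually gets

print(mean, median)  # 19.8 vs 2.5
```

The journal-level mean here is nearly eight times the median, so the average tells us very little about how much attention any individual article attracted.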

I can’t easily examine the number of times an article has been cited. Counting citations requires a costly subscription to a database that keeps track of this. But I can look at the number of times an article has been cited within Pubmed Central, which should at least allow us some informative comparison. Pubmed Central is a repository of about 3.8 million freely accessible biomedical articles. This is a sizeable portion of the total number of articles tracked by Pubmed (about 25 million). More importantly for our discussion, Pubmed keeps track of the number of times each paper is cited by articles in Pubmed Central. We can then use a little data mining to compare the scientific attention received by different articles.
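This kind of data mining can be done with NCBI’s E-utilities. A minimal sketch follows: the `elink` endpoint and its `pubmed_pmc_refs` link name are real, but the sample XML response below is abbreviated and its IDs are placeholders, not actual citation data:

```python
import xml.etree.ElementTree as ET

# NCBI E-utilities "elink" endpoint; linkname=pubmed_pmc_refs returns the
# Pubmed Central articles that cite a given PubMed record.
ELINK = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"
         "?dbfrom=pubmed&linkname=pubmed_pmc_refs&id={pmid}")

def count_pmc_citations(elink_xml: str) -> int:
    """Count citing-article <Id> entries in an elink XML response."""
    root = ET.fromstring(elink_xml)
    ids = root.findall(".//LinkSetDb[LinkName='pubmed_pmc_refs']/Link/Id")
    return len(ids)

# Abbreviated sample response; the IDs here are placeholders, not real data.
sample = """<eLinkResult><LinkSet>
  <LinkSetDb><DbTo>pmc</DbTo><LinkName>pubmed_pmc_refs</LinkName>
    <Link><Id>1000001</Id></Link>
    <Link><Id>1000002</Id></Link>
    <Link><Id>1000003</Id></Link>
  </LinkSetDb>
</LinkSet></eLinkResult>"""

print(count_pmc_citations(sample))  # 3
```

In practice one would fetch the real response for each article’s PMID from the `ELINK` URL and apply the same count.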

Let’s look at JAMA. In 2014, the year Grey and Bolland’s letter was published, JAMA published 1286 articles, and these articles were cited in Pubmed Central an average of 5.4 times. The two most cited articles (and the only two cited over 200 times) were cited 275 and 272 times respectively, and neither of them is about omega-3s.

By contrast, to date there have been over 650 articles on randomized controlled trials with cardiovascular outcomes involving EPA and DHA. The three most cited (each of them cited over 200 times by Pubmed Central articles) were all published in Lancet, and they present the conclusions of the JELIS, DART, and GISSI-Prevenzione trials. All three of these trials found positive prevention effects of interventions involving EPA/DHA, but only the first one was included in Grey and Bolland’s analysis. Their choice to restrict their analysis to a small window of years meant they missed some important trials, which biased their conclusions.

It is worth noting that Grey and Bolland’s article has been cited by a single article in Pubmed Central and by four articles tracked by Web of Science. It is fair to conclude that their article has received very little attention (positive or negative) from the rest of the scientific community.

Based on our findings that 81.4% of all scientific articles on interventional trials involving EPA/DHA were positive in the last 10 years, it is possible to use a simple statistical test (an exact binomial test, for example) to identify the few journals that deviate too much from this proportion. Figure 3 shows every journal that has published more than five articles on interventional trials involving EPA/DHA, displaying the number of articles published and the percentage of positive ones. The plot highlights the three journals whose number of published articles and rate of positive results are not consistent with the expected rate.

Figure 3: Number and Percentage of positive papers on interventional trials, by journal
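An outlier test of this kind can be sketched as an exact binomial tail probability; the choice of test is mine, as an illustration. The counts used below (14 JAMA trial articles, only 2 positive) are the figures discussed later in this entry:

```python
from math import comb

def binom_tail_le(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): exact lower-tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Baseline positive rate across all trial articles, per the text.
baseline = 0.814

# JAMA's counts from the text: 14 trial articles, only 2 with positive results.
p_value = binom_tail_le(2, 14, baseline)
print(f"P(<= 2 positives out of 14 at 81.4%): {p_value:.2e}")
```

The tail probability comes out on the order of 10⁻⁷, far below any conventional significance threshold, which is why JAMA stands out as an outlier in Figure 3.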

I am not sure why the journal Lipids in Health and Disease has such a surprisingly high rate of positive results. I am not surprised to see the Journal of Renal Nutrition on the list: it specializes in kidney conditions, and most of the interventional studies it publishes involve patients on dialysis, whose metabolism differs from that of the general population. It is to be expected that the effect of EPA/DHA might be different in this population.

The most extreme outlier is JAMA, with 14 articles, 12 of which reported neutral results. This is a positive rate of 14.3%, which is well below the average of 81.4%. There are several possible reasons for this difference:

1. An editorial bias against dietary supplements, which could put a high barrier to publication for positive EPA/DHA articles.

2. Higher standards for the strength of protective effects measured by the trials. This is a high-impact journal, so it may choose to publish only high-impact trials. But then one would expect other high-impact publications to show similarly low positive rates, and they do not.

3. A higher threshold for how much attention a trial may get. It is possible that JAMA considers new positive EPA/DHA trials not likely to get much attention, because there are already so many of them, while a neutral trial would generate discussion, controversy, and more research.

It is impossible to determine the reason for the low rate of positive trials published, but it is clear that JAMA is not representative of the research in this field, and including it in a review that covers only a handful of scientific publications is very likely to bias the results.

Conclusions

The research letter published by Drs. Andrew Grey and Mark Bolland in the March 2014 issue of JAMA Internal Medicine is, at its core, an opinion piece whose goal was to attract attention to the fact that the overall sentiment of articles in the popular press does not always match the consensus of scientific opinion. If this were the entire content of the article, I believe it would have been an extremely interesting one. However, because they chose to illustrate their point by reviewing only 24 articles, published in just seven journals, both their letter and its conclusions are flawed, for the following reasons:

1. The review of the evidence is shallow and incomplete, and covers only a small fraction of the existing body of scientific knowledge.

2. The published body of research overwhelmingly supports the conclusion that EPA and DHA play an important role in the prevention and treatment of diseases. A large majority of publications on interventional trials involving EPA/DHA find a positive effect.

3. By restricting themselves to such a small number of non-representative journals, the authors ended up reviewing a small and biased slice of the existing evidence.

Because the article is based on a non-systematic review of a small and biased part of what is currently known, and because it introduces no new evidence or analysis to the question of the role of fish oil supplementation, it has been largely ignored by the scientific community.

Yet in spite of the article’s flaws, both the authors and their research letter have received an inordinate amount of press coverage. Altmetric, which tracks the press and social media coverage received by scientific articles, rates it among the top 5% of all research outputs they have ever scored. If the authors want to illustrate that the attention of the popular press and social media does not always track the existing scientific knowledge, and that some articles and conclusions get an undeserved amount of attention, they need look no further than their own research letter for a compelling example.