McKnight: Blinding us with science journals

On Feb. 6, 2010, the prestigious medical journal the Lancet published one of the most anticipated papers in its 187-year history. Yet the paper was not a medical study, and provided no new medical information.

Rather, the paper retracted a previously published paper — specifically, the now infamous 1998 study in which former British surgeon Andrew Wakefield proposed, using falsified data, the existence of a link between the measles, mumps and rubella (MMR) vaccine and autism.

Amazingly, the Lancet took 12 years to publish the retraction, even though it became known, within the first few years after publication, that other researchers could not reproduce Wakefield’s results.

Indeed, the retraction was only published after the British General Medical Council found Wakefield guilty of three dozen charges, including dishonesty and abusing developmentally disabled children for research purposes, and revoked his licence to practice medicine. And the GMC’s hearing only occurred after Sunday Times journalist Brian Deer completed an investigation into Wakefield’s fraudulent activities.

Hence, it took an investigative reporter to bring to light one of the biggest scientific scandals in recent memory, a scandal that placed disabled children in jeopardy and that fuelled — and continues to fuel — the anti-science, anti-vaccination movement. This suggests that something is seriously wrong, not just with those who oppose science, but with science itself.

Glenn Begley appears to agree with this sentiment, even though he doesn’t discuss the Wakefield debacle. Rather, Begley, the former vice-president of biopharmaceutical company Amgen, outlined, in a recent Nature article, serious problems with the scientific enterprise.

Begley noted that over the last decade, scientists at Amgen identified 53 “landmark” cancer studies and, before engaging in their own research, tried to reproduce the landmark studies’ results. Yet in only six cases (11 per cent) were Amgen scientists able to confirm the studies’ findings.

And Amgen’s results are not unique. In 2011, for example, researchers at Bayer Healthcare in Germany conducted a similar review and found that they could validate only 25 per cent of studies they considered.

This is not to suggest that fraud is rampant, but it does reveal, as was evidently the case in the early years after publication of Wakefield’s study, that reproducibility is a problem. And that means that there is something wrong with a large proportion of studies, whether it’s the result of fraud or innocent error.

In fact, the New York Times recently reported that the number of retracted studies has increased tenfold in the past decade, while the number of papers published has increased only 44 per cent. And one study in the Journal of Medical Ethics found about three quarters of retractions were due to error and one quarter due to fraud. The question, then, is why this is happening and, even more importantly, what can be done to stop it?

One easy answer is that it’s not the number of mistaken or fraudulent studies that has increased, but rather the number of such studies that have been discovered. While there is probably some truth to this given that online access has made verification much easier, the sheer number of erroneous studies is suggestive of a serious problem, even if the extent of that problem is only now being discovered.

And that problem, it seems, extends far beyond individual researchers or individual laboratories to the academic environment itself. Indeed, Begley charges that the “academic system and peer-review process tolerates and perhaps inadvertently encourages ... the publication of erroneous, selective or irreproducible data.”

That’s a serious charge, but there is substantial evidence to support it. Begley notes, for example, that for a young researcher to obtain tenure in a university, or for a more seasoned researcher to obtain a promotion or grant, a strong publication record, typically with publications in the most prestigious journals, is required.

This places tremendous pressure to publish on researchers, particularly given the increasingly limited number of tenured positions available, and the fact that researchers are now often required to pay part of their salaries through grants.

In a separate paper, Ferric Fang, editor in chief of the journal Infection and Immunity, and Arturo Casadevall, editor in chief of the journal mBio, echo these concerns, noting that intense pressure for positions and grants has produced a hyper-competitive environment where fraud and error, while never justifiable, become more likely.

Fang and Casadevall argue that while competitiveness can motivate creativity and innovation, the current hyper-competitive environment actually distorts the scientific enterprise. For example, while virtually all scientific discoveries are the result of the work of many people over a long period of time, a culture that values high-profile publications rewards only those who announce a discovery first.

This not only presents a distorted portrait of the nature of science, but serves to further distort that picture as it discourages communication among scientists for fear that their results may aid the work of others. And this desire of researchers to keep their work close to their chests is harmful or fatal to science, since science works, and works best, through collaboration.

Given these problems, Begley, Casadevall and Fang all agree that the scientific enterprise needs to engage in a fundamental reordering of its values. Specifically, while competition itself cannot and should not be eliminated, collaboration and teamwork, and teaching and mentoring, should play a much greater role in the awarding of tenure, promotion, grants and scientific prizes.

Furthermore, Begley argues that journals must similarly reconsider what they value when making publication decisions. Specifically, he notes that journals prefer to publish papers in which the hypothesis being tested is confirmed.

Yet hypotheses are often not confirmed, and the failure to confirm tells us that the hypothesis might well be false. Hence, failure to publish such information presents a distorted picture, not of science, but of reality.

Indeed, as epidemiologist John Ioannidis has been arguing for the better part of the last decade, the practice of ignoring "negative" data — data that fail to confirm a hypothesis — has resulted in the publication of many false findings.

In addition, Begley argues that it virtually guarantees that publication-hungry researchers will “submit selected data sets for publication, or even massage ... data to fit the underlying hypothesis.”

Consequently, he argues that journals must give researchers more opportunities to present such “negative” data.

As Begley, Fang and Casadevall all admit, it won’t be easy to make such fundamental changes to the scientific enterprise and to scientific culture. But then science has never been about what’s easy.

Rather, science has always been about what’s true, and about what works. And as long as things remain the way they are, it isn’t true, and it’s not working.
