Reproducibility: An important altmetric

Abstract

Reproducibility, the ability to replicate the findings of a research publication, is one of the major tenets of the scientific method. However, in academic preclinical research, reproducibility (or, more accurately, the lack thereof) has become a significant problem. An increasing number of reports have found discrepancies in published preclinical studies across scientific disciplines. For instance:

Amgen found that 47 of 53 “landmark” oncology publications could not be reproduced (1).

Bayer found that 43 of 67 oncology and cardiovascular projects were based on contradictory results from academic publications (2).

Dr. John Ioannidis and his colleagues found that of 432 publications reporting sex differences in hypertension, multiple sclerosis, or lung cancer, only one data set was reproducible (3).

These studies, and the many others that report similar results, highlight a significant problem in the development of new therapies to treat disease. Because the identification of potential drug candidates typically happens in academic research labs, pharmaceutical companies build their drug development efforts on academic findings. These companies are now being forced to re-evaluate their reliance on academic research and are looking for new ways of identifying the most robust findings (2).

One reason so many preclinical publications contain research that cannot be reproduced may be the lack of academic incentives for producing high-quality, robust research. The current system rewards researchers who quickly accumulate publications and grants, which puts those who devote time to replicating results at a disadvantage. Further, the system promotes competition over publications rather than collaboration with experts to produce robust data. The findings above indicate a need for an initiative that incentivizes reproducibility and robustness in academic research.

We propose a potential solution that takes advantage of the Science Exchange network of experts who operate outside the current academic incentive structure: core facilities and commercial scientific service providers. These experts conduct experiments on a fee-for-service basis, so their only incentive is to produce high-quality data. We propose using this network of providers to independently validate results from academic labs that identify potential novel drug targets, thereby improving the discovery of translatable drug targets. Replications of high-impact published work can be published in PLoS ONE, giving the original publication a reproducibility stamp.

To facilitate this process, we plan to launch a reproducibility initiative in July 2012. This $1M pilot fund will allow researchers to apply for independent replication of their data, performed at core facilities via Science Exchange. The initial pilot will conduct 20–25 replications. We hope to demonstrate proof of principle for how the NIH and other funders can address the reproducibility issue by funding independent replications in addition to novel research.

To encourage researchers to take part, several inducements are proposed:

No cost: replications are funded by the reproducibility initiative.

Original results are fast-tracked for publication in a high-impact journal and “stamped” as independently replicated.

Independently replicated results are published in PLoS ONE (“two publications for the price of one”).

As awareness of irreproducibility grows, this initiative will provide a way for top-quality researchers to distinguish themselves, and the reproducibility stamp will serve as the incentive that promotes robust, high-quality work.

In the longer term, it is hoped that the reproducibility initiative will promote a cultural change toward replicating and validating research. Large funding agencies and government bodies should fund independent replications as well as novel studies, and should incentivize collaboration between researchers. In addition, journals should reward independent validation of results. Eventually, this will facilitate a culture shift in which reproducibility, rather than novelty, becomes a key ‘altmetric’.
