This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. While data files may be securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and it therefore provides an excellent case study for testing reproducibility. Of the 100 papers we initially surveyed, 14 were excluded because they did not present the common types of quantitative results from their DFA, used complex and unique data transformations, or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 16 for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons included incomprehensible or absent variable labels, the DFA being performed on an unspecified subset of the data, and incomplete datasets. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned, and the largest discriminant function coefficient. The reproducibility of the first two was high (20 of 25 and 43 of 59 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 36 datasets). Considering all three summary statistics together, we were able to completely reproduce 46 (66%) of 70 datasets.
While our results are encouraging, they highlight the fact that science still has some way to go before we have the carefully curated and reproducible research that the public expects.
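The three DFA summary statistics described in the abstract can be sketched in code. The reanalyses themselves were performed in R (see the supplemental R code); the minimal Python sketch below only illustrates how each statistic is defined, using invented input values that are not drawn from any surveyed paper.

```python
# Hypothetical illustration of the three DFA summary statistics targeted for
# reproduction. All input values below are invented for demonstration.

def percent_variance_explained(eigenvalues):
    """Percent of among-group variance captured by each discriminant function,
    given one eigenvalue per function."""
    total = sum(eigenvalues)
    return [100.0 * ev / total for ev in eigenvalues]

def percent_correctly_assigned(confusion_matrix):
    """Share of observations whose assigned group matches their true group.
    confusion_matrix[i][j] counts observations of true group i assigned to group j."""
    correct = sum(confusion_matrix[i][i] for i in range(len(confusion_matrix)))
    total = sum(sum(row) for row in confusion_matrix)
    return 100.0 * correct / total

def largest_coefficient(coefficients):
    """Standardized discriminant function coefficient with the largest magnitude."""
    return max(coefficients, key=abs)

# Invented example values (two groups, three predictor variables)
eigenvalues = [2.4, 0.6]          # one eigenvalue per discriminant function
confusion = [[18, 2], [3, 17]]    # rows: true group; columns: assigned group
coefs = [0.41, -0.93, 0.27]       # standardized coefficients for DF1

print(percent_variance_explained(eigenvalues))  # → [80.0, 20.0]
print(percent_correctly_assigned(confusion))    # → 87.5
print(largest_coefficient(coefs))               # → -0.93
```

In practice these quantities come from the fitted DFA (e.g., `lda()` in the R package MASS reports the proportion of trace, and a confusion matrix can be built from its predictions), which is why mismatched labels or an unspecified data subset can make them impossible to reproduce.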

R code used for reanalyzing the discriminant functions

Additional Information

Competing Interests

Author Contributions

Rose L Andrew conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.

Arianne YK Albert conceived and designed the experiments, analyzed the data, reviewed drafts of the paper.

Sebastien Renaut conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, reviewed drafts of the paper.

Diana J Rennison conceived and designed the experiments, performed the experiments, prepared figures and/or tables, reviewed drafts of the paper.

Dan G Bock conceived and designed the experiments, performed the experiments, prepared figures and/or tables, reviewed drafts of the paper.

Tim Vines conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper.

Funding

Diana J Rennison was supported by an NSERC postgraduate studentship. Dan G Bock was supported by an NSERC Vanier CGS and a Killam doctoral scholarship. Sebastien Renaut was supported by an NSERC postdoctoral fellowship. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
