Dom Cram's research page


I am a behavioural ecologist with a broad interest in reproduction, sociality, and health. I am particularly interested in links between reproductive decisions and physiological ageing, and I study these links in a range of wild social species.

Picture the scene: you battle to get funding for your research, you trudge through data collection and analysis, write and rewrite drafts of your paper, navigate the treacherous peer-review process and - FINALLY - get your paper published! HURRAH!

Next, you decide to build on your work, repeating the same experiments with additional factors or treatments. This time, to your dismay, the results don't match your original findings. What do you do?

Too often, one of two things happens:

You abandon this line of research. Clearly something weird is going on, and further work might just muddy the waters. You can't afford confusion: you need high-impact papers, and you need them yesterday. But as researchers, it is our duty to share our findings, and sweeping inconvenient results under the rug must stop.

You power through and write up your new results. Journal after journal rejects your manuscript, claiming it is 'not novel' and that they 'do not publish studies which repeat work published elsewhere.' But journals have a duty to publish sound research, regardless of how impactful or 'sexy' they think it is (what do they know anyway?).

For too long, these two pressures have meant that studies failing to replicate published findings never see the light of day. This is not just a problem for the unlucky researcher who can't get their paper published - it affects the entire field. Think of the countless hours and funding other labs may be wasting in vain attempts to replicate findings that don't hold up.

Bucking this trend, Brown-Schmidt and Horton recently published a paper detailing a total failure to replicate their own published results from several years earlier. They should be congratulated for their honesty and integrity (indeed, their paper has received a huge amount of kudos on Twitter).

Bravo too, to PLOS, for publishing this study. Too many journals would reject such a paper based on the title alone. Yet shouldn't these results be seen by the research community, and by the public who funded the work? PLOS ONE plays a valuable role in ensuring that any properly conducted research is freely accessible to one and all. For these reasons and many more, I've decided to send my next paper to PLOS ONE. As a study dominated by extra-large p-values, it is at risk of becoming a publication-bias statistic! Let's see what the folks at PLOS ONE think of it...