Solid science: How graduate students foster research transparency

Reproducibility is seen as the gold standard for solid science. However, there are few incentives to work transparently, and even fewer to conduct
replication studies. To change this, more and more teachers are assigning
replication studies to graduate students as class assignments. Will this turn early-career researchers into witch hunters?

Many scientists still remember the largest academic scandal of 2013: the article “Growth in a Time of Debt” by Carmen Reinhart and Kenneth Rogoff reported results that could not be replicated. One reason no researcher could reproduce the results was a simple coding error the authors had made in their Excel spreadsheet. For many years, politicians had relied heavily on this paper to justify austerity measures. How was the error uncovered? A graduate student conducted a
replication study and asked to see the data set, which had not been openly available before.

This example shows that research transparency – or the lack thereof – can have a direct impact on society. To establish such transparency, more and more instructors assign
replication studies to students. This way, students learn statistical methods from real-life data, add important knowledge to their
field, and can get published early in their careers. Most importantly, however, students develop a reproducibility routine in their own work. When struggling to access data, or when unable to follow the exact analysis of published work, students learn how to improve their own transparency. So far, this trend of assigning
replication studies has been welcomed in the academic world.

However, recently there seems to be a
replication backlash. Is it really a good idea to let students ‘take on’ established researchers and ‘correct’ their results? A Cambridge scholar in the
field of social psychology, whose paper failed to replicate, stressed that
replication attempts by other authors had led to the “defamation” of her work. In her blog post, Simone Schnall wrote: “careers and funding decisions are based on reputations. The implicit accusations that currently come with failure to replicate an existing finding can do tremendous damage to somebody’s reputation, especially if accompanied by mocking and bullying on
social media. So the burden of proof needs to be high before claims about
replication evidence can be made.” Similarly, biologist Mina Bissell wrote that a failed
replication could “jeopardize the original scientists’ chances of obtaining funding”.

The potential reputational damage when published articles are not reproducible should not be ignored. It cannot be in the interest of students – or anyone – to ‘bring down’ big names in the
field and start an academic career based on slashing someone else’s work.

Therefore, it is all the more important that replicators work in a professional way. Students need to learn how to draft their
replication papers with care, and to call a
replication ‘failed’ in public only after they have conducted extensive analysis. Assigning
replication in class is a perfect way to introduce students to
replication – because the instructor can help them do it properly and professionally.

For example, students in my own Replication Workshop at Cambridge are asked to follow specific guidelines: The students first try to duplicate the existing results, and then conduct robustness checks. Before posting the results in our dataverse, the students are asked to notify the original author by email that the
replication will be shared, and to invite comments. Students are also asked to use professional, collaborative language in their write-up, and to provide their data, analysis and software code. A recent article in the
field of social psychology proposes a
replication recipe with very similar ingredients.

If students work on their
replication projects carefully and professionally,
replication studies will enhance knowledge. Most
replication projects are not intended to ‘bring down’ established researchers anyway. In my own classes, students do not hunt for errors. In fact, I have seen that students felt ‘successful’ and motivated when they could replicate tables and figures, and very frustrated when they could not. When students could not replicate a result, they spent several weeks checking their own analysis because they assumed that they (not the original author) had made mistakes.

The painful and time-intensive process of re-analysing data and adding to an existing body of work helps students understand that science is about reproducibility, not about error hunting. Learning first-hand what it means to work transparently is the best socialisation into science that graduate students can experience. Only if universities invest in a reproducibility and
replication culture among graduate students can we ensure that the gold standard of reliable and solid science is upheld.