The “deluge” of new scientific papers that academics have to read makes it all but impossible to know what research to trust, according to a new paper based on candid interviews with biomedical researchers.

This “overflow” of new information, which the current system of peer review does not have the capacity to filter and check, risks good research being ignored and experiments duplicated because scientists are unaware of all the work being done in their area.

In recent years prominent scandals over research fraud and irreproducible results have shaken trust in science. To explore these concerns, scholars interviewed 20 experienced biomedical principal investigators in the US.

Actual scientific fraud was not seen as a major issue, because although very serious, it was also extremely rare, according to “Overflow in science and its implications for trust”, published in the journal eLife.

Instead the authors raised concerns about “the rapid proliferation of journals, and the immense number of papers being published in relatively new mega-journals”.

“There’s a flood, there’s a deluge of published and unpublished papers,” said co-author Sabina Siebert, a senior lecturer in management at the University of Glasgow. Scholars who previously did not publish were now under pressure to do so, while emerging economies were also contributing, she added.

One researcher, interviewed anonymously, pointed out that papers themselves had also ballooned in size. “I tell people in my lab, go and…pick up a copy of Cell from the 1970s or Journal of Cell Biology…papers that were published then, would be Figure 1A of a Cell paper today,” they said.

Too much information – known as “overflow” in social science – forces scientists to rely on colleagues’ reputations or journal prestige to judge a paper’s importance or trustworthiness, the paper argues. “If I don’t know the authors then I will have to look more carefully at the data,” said one interviewee.

This can lead to an “unjustified lack of trust” in relatively unknown authors, the paper argues. Dr Siebert said there was “absolutely” a possibility that this overflow could lead to research being duplicated, or heavily overlapping, because scientists are unaware that similar work has already been done.

Yet at the same time, the mass of new research has led to intense competition to be published in high-profile journals, she added, leading to the risk of corners being cut, thereby further reducing trust among scientists.

One interviewee said: “For me the bigger concern is…trying to make it a hot, sexy story to try to get into a high tier journal. For me that’s a bigger problem because it creates pressure on junior people…to feel that they will only get a job if they publish in Nature or Science or Cell, and if they don’t publish there they’re washed-up, useless, a failure.”

Dr Siebert said that while the current system of peer review “broadly” works, “it has only so much capacity”. Instead, peer review could be supplemented by an initial stage of checking experimental design, statistics, analysis and images before a paper is sent to an academic reviewer, the study suggests.