A number of studies have spotted a worrisome trend: although the number of scientific journals and articles published is increasing each year, the rate at which papers are retracted as invalid is increasing even faster. Some of these are being retracted due to obvious ethical lapses—fraudulent data or plagiarism—but past studies have suggested that honest errors and technical problems accounted for the majority of retractions.

A new analysis, published in PNAS, shows this rosy picture probably isn't true. Researchers like to portray their retractions as being the result of errors, but many of these same papers turn out to be fraudulent when fully investigated. If there's any good news here, it's that a limited number of labs (38, to be exact) are responsible for a third of the fraudulent papers that end up being retracted.

The new paper focuses on the fact that problems with a paper may crop up after—sometimes years after—it's first published. If these problems are minor, authors can issue clarifications or corrections (typically called errata or corrigenda). But if they raise questions about the validity of the work in general, the journal will sometimes retract the paper entirely. This is the equivalent of saying the paper never existed, and shouldn't be considered part of the peer-reviewed literature. To notify the scientific community of this action, the journals will typically mark the online versions of the original publication, and separately publish a short retraction notice.

The authors of the new paper took advantage of this by searching for every retraction notification that has appeared in the NIH's PubMed database. They found just over 2,000 of them as of May of this year, then categorized them according to the reason for the retraction, based on the contents of the announcement: plagiarism, duplicated work, errors, and fraud.
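The paper itself doesn't include code, but as a rough illustration of the kind of query involved, here's a minimal sketch using Biopython's Entrez wrapper. The search term is PubMed's publication-type filter for retraction notices; the email address and retmax value are placeholders rather than anything from the study.

```python
# Sketch: pull PubMed IDs for retraction notices via NCBI's E-utilities.
# Not the authors' actual code; requires Biopython (pip install biopython).
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address; placeholder

# "Retraction of Publication"[Publication Type] matches the retraction
# notices themselves, as opposed to the papers that were retracted.
handle = Entrez.esearch(
    db="pubmed",
    term='"Retraction of Publication"[Publication Type]',
    retmax=5000,  # arbitrary cap for this sketch
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} notices found; first few IDs: {record['IdList'][:5]}")
```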

A bumper crop of fraud

Honest errors occur in research all the time, and I've seen papers retracted because of software errors or after a commercial product didn't live up to its promised capabilities. And past studies (such as this one) have suggested these errors account for the majority of retractions. But the authors of the new study find that this isn't the case. Many retractions (over 15 percent by one measure) claim error as the cause, but ultimately turn out to involve fraud. The authors discovered this by checking the author lists against reports prepared by the Office of Research Integrity, which polices research fraud.

This may be because the retractions come before investigations are complete, or because researchers get to write their own retraction notices. But whatever the cause, the gap between retractions and reality can be hilariously large. One paper ostensibly retracted because of "flaws in methodological execution and data analysis" turned out to have "many instances of data fabrication and falsification." Another that was pulled because one image "was not correct" was found, after investigation, to be the result of an author knowingly selecting an experiment that gave the answer he wanted.

With these retractions reevaluated, the authors find that cases of fraud account for over 43 percent of all retractions. Duplicate publications and plagiarism account for another 24 percent (the Retraction Watch blog found an awkward example of the former just today). Those numbers drop honest errors down to the point where they account for just over 20 percent of total retractions. Fraud is a bigger problem than we'd thought.

And it's getting bigger. The authors find that, since 1975, the rate of retracted articles as a percent of total publications has increased nearly tenfold. Duplicate publications and plagiarism, which didn't use to be a significant problem, have boomed since 2005. And while retractions due to errors have increased, those due to fraud have increased much faster.

Patterns of deceit

When it comes to fraud, the traditional research powers are leading the way. The US has the largest number of cases, followed by Germany and Japan. But things like plagiarism and duplicating publications are quite different, with China being a major player, and India having a large presence. These sorts of copying problems are rare in high-profile journals like Nature and Science. Instead, there was a strong correlation between the incidence of fraud and the prominence of the journal, as measured by its impact factor.

The authors suggest that the increasing levels of fraud may come from "the incentive system of science, which is based on a winner-takes-all economics that confers disproportionate rewards to winners in the form of grants, jobs, and prizes at a time of research funding scarcity." That could certainly explain its prevalence in the US, where competition for grant money has become increasingly fierce in a way that roughly parallels the rising rates of fraud.

If there's good news in the data, it's that fraud may be increasing, but it might not be as widespread as the numbers would appear to suggest. A total of 38 labs, each of which had at least five retractions, accounted for a hefty 34 percent of the total frauds evaluated by the authors.

One interesting result the study turns up is that retractions often do their job: other papers stop citing the original work shortly after the retraction notice is printed. But this doesn't always happen. In the case of one 2005 article, the HTML and PDF versions available online are clearly marked with retraction notices, yet the authors found it continues to get cited. Even though the paper contained what appears to be inaccurate data about a molecule's function, it remains the first description of a molecule that turned out to be important.

Promoted Comments

Academia is incredibly focused on publishing as the standard for one's professional worth. The pressure to publish something, anything, is so immense that it's gone from being a product of research to being a significant reason to do research in the first place.

Makes sense to me. We've cut funding and made competition for research positions very tight. I have two friends with hard-science PhDs (mathematics and biochem) working at universities on projects that are grant funded - with one-year terms. They feel that in the next year they have to publish, and preferably publish prominently. If they don't, odds are they won't have grants next year, and they'll have to start looking for work in the private sector - the mathematician is looking at actuarial work, and the biochemist is considering working for a food conglomerate.

Both of them would prefer to remain in research, but there's just not that much funding to go around. Having a major study published would go a long way toward getting them on the tenure track they both want. Even if it was later retracted. I believe my friends to be honest, but I could certainly see how some people would be willing to fudge things in order to get published.

Quote: "The authors of the new paper took advantage of this by searching for every retraction notification that has appeared in the NIH's PubMed database…"

Or, at least, they claim to have done this search and analysis. This field of research is just begging for a serious hoax.

[Moderated: trolling] There are entire venues encouraging fraudulent research. For example, the journal Nature Climate Change makes certain assertions obvious from its title. It is hard to see how anything compelling can be published there. Witness the shrinking fish article trumpeted all over the news today. I think it was obvious junk science even to scientifically illiterate journalists, who thoroughly enjoyed giving it its 5 minutes of fame.

Quote: "Academia is incredibly focused on publishing as the standard for one's professional worth…"

A friend of mine who was doing their PhD had their funding pulled and was forced to start over and do something slightly different, because their results contradicted those of other students who had previously worked on similar problems for the same institute.

Getting things wrong in publications is so utterly damaging to a person's career, and to the reputation of the institute that helped produce the work, that institutes would rather cover up possible issues than confront them.

Quote: "For example, the journal Nature Climate Change makes certain assertions obvious from its title. It is hard to see how anything compelling can be published there."

Not to mention Science, whose mere title clearly suggests that the so-called scientific method is valid, and that the giant man in the sky isn't just capriciously making things up as He goes along. How can we really know? Do you think Science is going to publish the divine revelations that God wrote on my pancakes this morning? I don't think so. The bias is blatant.

Quote: "Academia is incredibly focused on publishing as the standard for one's professional worth…"

+1. I think this hits the nail on the head.

Another +1 from an academic. The same idea, put differently: a research professor's job nowadays is not research---it's publishing papers. And I assure you, they're not the same.

I'll give my +1 as well. My wife is a scientist, and I see the pressure she gets to publish results (her lab just got an article accepted for Nature's next issue), because not having enough publications can harm a researcher's future chances of getting grants.

Quote: "For example, the journal Nature Climate Change makes certain assertions obvious from its title. It is hard to see how anything compelling can be published there."

Quote: "Not to mention Science, whose mere title clearly suggests that the so-called scientific method is valid…"

Nature Climate Change and Science are by no means adjacent on the graph of scientific publications. There is at least one entry between them: Nature, and one can easily find articles of dubious value in the latter. Therefore, I find it plausible that:

1. All Science articles are good.
2. Most Nature articles are good, but some are goofy (e.g. Mercer's West Antarctic melt prediction, or more recently Stein's, where with poor data but a statistical trick he "demonstrated" nonexistent warming).
3. All Nature Climate Change content is junk.

I want to be a researcher for the RIAA because that would be the easiest job ever.

Step 1 - Make up figures so ridiculous it's impossible to tell if you're being sarcastic or just plain stupid.
Step 2 - Put new figures into a hat.
Step 3 - Pull a figure from the hat and present it to the public as 100% fact.
Step 4 - Profit.

Quote: "Academia is incredibly focused on publishing as the standard for one's professional worth…"

+1 also.

And I think that it is just another example of the short-term metric-ization of practically everything these days. The actual value of a thing is hard to measure; the number of things is not.

Measuring value in academia is not easy, but we need some measure. I'm reasonably optimistic about the h-index: number of citations ought to be a far better guide to the value an academic has created than the number of publications. What is a reasonable number of citations is, of course, field specific. Moreover, book sales should somehow factor in as well...
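For anyone unfamiliar with the metric mentioned above: a researcher's h-index is the largest h such that they have h papers with at least h citations each. A minimal sketch, with made-up citation counts:

```python
# Compute an h-index: the largest h such that h papers have >= h citations.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers with >= 3 citations)
```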

But I think an even bigger issue than outright dishonesty is the failure to report the details of an experiment accurately. How many researchers omit treatments that failed to work? Models they ran that did not produce significant results? Questions they asked but that did not go into the (significant) analysis? If you run 20 tests at the usual p < 0.05 threshold, you're going to get, on average, one significant result even when nothing is there. That's a fishing expedition, not great insight.
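The arithmetic behind that claim: under the null hypothesis, each test has an alpha = 0.05 chance of a false positive, so 20 independent tests yield 20 × 0.05 = 1 expected "hit". A quick simulation, with illustrative numbers only:

```python
# Simulate running 20 independent null tests at alpha = 0.05, many times over.
import random

random.seed(1)
ALPHA, N_TESTS, N_TRIALS = 0.05, 20, 10_000

hits_per_trial = [
    sum(random.random() < ALPHA for _ in range(N_TESTS))  # false positives
    for _ in range(N_TRIALS)
]
print(sum(hits_per_trial) / N_TRIALS)                  # ~1.0 on average
print(sum(h >= 1 for h in hits_per_trial) / N_TRIALS)  # ~0.64: odds of at least one
```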

Here's some data on that from the social sciences: John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524 –532. doi:10.1177/0956797611430953

Quote:

In a paper, failing to report all of a study's dependent measures: 66.5%
Deciding whether to collect more data after looking to see whether the results were significant: 58%
In a paper, failing to report all of a study's conditions: 27.4%
Stopping collecting data earlier than planned because one found the result that one had been looking for: 22.5%
In a paper, "rounding off" a p value (e.g. reporting that a p value of 0.054 is less than 0.05): 22.3%
In a paper, selectively reporting studies that "worked": 50%
Deciding whether to exclude data after looking at the impact of doing so on the results: 43.4%
In a paper, reporting an unexpected finding as having been predicted from the start: 35%
In a paper, claiming that results are unaffected by demographic variables (e.g. gender) when one is actually unsure (or knows that they are): 4.5%
Falsifying data: 1.7%

And those are self-admission rates. They also don't include honest mistakes, e.g. people not knowing how to run a difference-in-differences analysis.
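For what it's worth, the difference-in-differences idea itself is simple; the honest mistakes tend to be in its assumptions (like parallel trends), not the arithmetic. A toy sketch with made-up group means:

```python
# Toy difference-in-differences: the treatment effect is the treated group's
# before/after change minus the control group's change (the shared trend).
treat_pre, treat_post = 10.0, 14.0  # treated group outcome means (made up)
ctrl_pre, ctrl_post = 9.0, 11.0     # control group outcome means (made up)

did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(did)  # 2.0: change attributable to treatment, net of the common trend
```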

i found academia disconcertingly similar to a priesthood at my place 'o' higher education. i broke it down into star professors, stale professors, and new professors.

"star" professors acquire doting acolytes due to their esteemed reputation and perceived brilliance. some really reserve it; some of them are just charlatans who manage to stay relevant via a slight sociopathic leaning.

"stale" professors are tenured and dull, and exist to check the boxes of grad students who can't squeeze themselves into a star's office hours.

"new" professors actually taught the classes, under the guidance of either star or stale professors.

"TAs" are just helpers for new professors.

most people don't give a whit either way, and just want a name on their resume in order to walk into a better job on graduation (in theory). predictably, it turns into a bit of a snake pit to get a star professor. i found all of this irrelevant to studying computer science, but i could see how it might drive you to cheat if you were there for your career instead of your education.

edit: i guess my point is, if you're there to ladder climb, the same backstabbing opportunities exist as in any other hierarchical system.

"This practice," the authors conclude, "suggests that under certain circumstances, scientists continue to find utility in retracted articles."I think that the problem here is actually some laziness from the scientists. Sometimes we copy some references from an older work to a newer one, because we remember what was explained there, and think this is again a useful reference. So, the reference is copied without actually checking again from the journal webpage. Not to mention if the paper was published only in paper.... (Damnit! Now that I know, I have to check all of my references again!)

Then, I also support the complaints about the fact that researchers are forced to publish as much as possible. I've got many colleagues who, when asked about their current work, reply with "Oh! Nothing, just a refrito" (a rehash). Why? To accumulate publications for their next career advance. Sometimes this is like an old-time race against the clock.

Finally, the confirmation bias is actually astonishing to me. During my early PhD days, I was sometimes advised not to publish "negative results." That is, sometimes I'd done an experiment that I thought nobody else had done before. The results turned out to be completely negative, and I intended to publish the experience to tell about a path that leads nowhere (or maybe someone could get a different perspective and go further with it). Everybody told me that I would be unable to publish this, mainly because people could think: "Yes, this is negative; that's the reason nobody tried to publish it before. Everyone knew this." Well, I was in my early PhD years, so this might very well be the case, but still, I've actually seen very few negative papers.

Quote: "I want to be a researcher for the RIAA because that would be the easiest job ever…"

Funny how this post was not moderated, since it is less on subject than those that have been.

This is why I take new studies with a grain of salt until results have been replicated a few times. I wish that media outlets would do the same, but it's doubtful that would ever happen.

Bondi Surfer wrote:

PNAS. Titter, titter

I truly can't believe a group chose that acronym/name. I guess they decided they needed something that would tell people how deep they'll go to find the right answers. Sometimes that road is long and hard. I wonder how big their staff is?

Quote: "Academia is incredibly focused on publishing as the standard for one's professional worth…"

How many times have I heard from my boss, "this year we need to publish at least N papers." Not "we need to investigate this and that, and maybe it will yield some papers." A number of papers is demanded, and then everyone scrambles to find a suitable topic to write about - RIDICULOUS.

Quote:

Medical Hypotheses, which was established more than 30 years ago, is the only Elsevier journal that does not currently subject its submissions to peer review.

Instead, its editor Bruce Charlton, professor of theoretical medicine at the University of Buckingham, decides what to publish on the basis of whether the submissions are radical, interesting and well argued.

So if anything, the person you cite is more likely than not guilty of aiding and abetting the problem being discussed today.

Thanks again for an interesting topic. I am a scientist too, and I'd guesstimate that fraud is even more widespread than reported here. The thing is, most fraudulent papers are in journals that nobody reads, and therefore nobody tries to replicate them.

I think there are several reasons for fraud. First, you cannot imagine the pressure. It is brutal. You must have a paper in the top 3 journals, plus be from a lab with a pedigree of papers in top 3 journals, to get invited to interviews for the best jobs. Postdocs, who are not tenured and for whom failure to publish often means the crushing of a lifelong dream, feel the heat the most. This pressure sometimes plays tricks on you. You can actually start to lean toward interpretations that favor your hypothesis. It can happen to anybody. Most of us are professional enough to recognize this and keep rigorous criteria for interpreting results. But I see candidates for going bonkers every day. Red eyes, pumped up on caffeine and Prozac, with a psychopathic PI on their backs... You think I am kidding?

Another thing is the total disconnect between some PIs and their postdocs/students. This leads to sloppy science, and I think that is a much bigger problem than intentional fraud. There is absolutely no mentoring in some labs. All the real science is done by poorly trained postdocs/students, while the PI just looks at the final figure. If you ask me, that's asking for trouble. I simply don't believe papers from people I don't know or who don't have a track record of papers that turned out to be true. I just automatically assume they are sloppy. End of rant.

Quote: "I suspect that human capability reached its peak or plateau around 1965-75 – at the time of the Apollo moon landings – and has been declining ever since."

That failing is not scientific but entirely political, and it wholly ignores the rise of the semiconductor industry and every innovation that has followed in the last 35-45 years.

when i was in 3rd grade, i wrote a story about a race of ants. the race of ants gained more and more knowledge with each generation, but the amount of time they had to learn the knowledge of their race remained fixed (ants don't live very long!). so, they developed technology to upload everything into their brains. as i wrote this many years before "the matrix" was made, i can't help but suspect the wachowski brothers watch 3rd graders for killer ideas.

Thanks for this interesting article; like many posters before me, I am not surprised by these results which directly stem from the fact that the number of published articles has become the prime metric for researcher evaluation.

This very fact is what prompted me to quit the academic world after completing my PhD. As a young researcher, you can't ignore this issue; you have to get publications into good journals and conferences, and this shapes the way your research is done. There is an art to making a paper attractive to reviewers that unfortunately has little to do with the actual quality of the research performed. It rather involves something akin to marketing, with a strong social-network component added in.

Now add in the fact that most people reviewing your work won't be domain specialists, and that most people evaluating you won't ever read one of your papers but will just review a list of them, and the incentive for fraud becomes quite apparent in an ever more competitive environment.

@Soriak: I kind of disagree with you. While I agree an evaluation metric is necessary, I don't believe the h-index to be a much better one. For starters, any metric based on a single piece of data will be easily skewed (i.e., cliques of scientists cross-referencing each other…). Moreover, I don't think the popularity of a given body of research is a good indication of that research's quality or value.

Quote: "A friend of mine who was doing their PhD had their funding pulled and was forced to start over… institutes would rather cover up possible issues than confront them."

Yes, my other half and I encountered academic corruption quite a bit at our local university. My former supervisor was obsessed with producing papers at any cost. The content, while technically a genuine contribution, really wasn't a real advance for the field (basically just repeating the same experiments with a slightly different approach and publishing that), so as far as fraud goes he was an angel, relatively speaking. But on the other hand, to get the funding, they step over the bodies they have stabbed in the back.

We just totally avoid association with those types now. It's like there is another community of people out there with genuine research interest, morals and values. Just stick to them. Good to know there are watchdog sites out there too.

Quote: "Academia is incredibly focused on publishing as the standard for one's professional worth…"

Yep, sad but true. It doesn't help that journals and reviewers are always looking for clean narratives in research. Publishing negative or conflicting results is near impossible; that's an incentive for omission, if not out-and-out fraud.