From time to time, we find online college syllabi among those sites referring us traffic, and some professors have told us that they use Retraction Watch in their classes. We’re pleased and humbled by that.

In a new paper published in the Journal of College Science Teaching, three professors at Clayton State University in Morrow, Georgia, discuss why retractions are good case studies for teaching ethics and examining the scientific process in class. Stephen Burnett, Richard H. Singiser, and Caroline Clower write:

In this article, we discuss our experience using articles that have been “abandoned” (where their results are no longer accepted due to new evidence) and/or retracted as methods for teaching students about scientific literature in general and specifically about scientific ethics. Being presented with a more accurate picture of primary literature can help students develop an improved understanding of how science is actually practiced and how scientific ideas change over time. By examining retracted articles in which ethical lapses have been uncovered, students are able to develop a more clear understanding of the types of ethical problems that can occur and improve their ability to recognize them.

Their conclusion:

From retracted articles it is easy to stimulate discussion on a variety of issues, even if the specific methods might be too advanced. We have found that the best way to help define ethics is to show examples of ethical failures. Students can clearly see what is wrong in most of the retracted papers, and when they cannot see why a paper has been retracted, it becomes an important teaching tool to help them understand ethical conduct.

Right on! As a science undergraduate, part of my education covered research and publication ethics. The specific cases of lapses were real eye-openers for an aspiring scientist. It raises at least the question of how much of the body of evidence is wrong. It also points to a system that is reluctant to discipline fraudsters in order to keep the record looking clean and pure. Retraction Watch does indeed provide valuable case studies and should be a part of every science education. One caveat, though: it’s only those that get caught, and then only those that are acted on. The tip of the iceberg!

I talk about retractions (and Retraction Watch) all the time in my Biological Writing class here at UTPA. While I think the goal of the class was originally just to give students practice at technical writing, I spend a lot of time telling them about the process of publication, which I think makes them better equipped to evaluate claims they see. (What does it mean if a news article is about a conference paper versus a journal article? Was the article peer-reviewed? How can you tell? If it’s old, have there been follow-ups?)

Peer review is generally conducted by written correspondence with the lead author only, and it lacks the real-time, face-to-face questioning that can better flush out issues of credibility. It gives the fraudster time to parse, obfuscate, and hide behind the written words of the paper. Yes, there are those who can lie with a straight face, but anyone can hide behind words on paper. In the end, peer review can give a false sense of security if the author is a fraudster. Case in point: JAMA issued its first-ever “Expression of Concern” because the lead author lied to peer reviewers about a central issue in a flawed study design, but JAMA paradoxically would not retract the study. This would be an excellent case study for your students. You can find more by searching for “JAMA expression of concern” and “OHRP Hebrew rehab”.

Good to hear this. I have been running a session on “Fraud in Science” in my teaching, at both UG and PG level, for around 12 years now. The initial trigger was the Jan Hendrik Schon case, and I still use it in class, along with a plethora of new cases each year. Since the start of Retraction Watch, I have included this site on the list of additional material for the students, as well as mentioning and linking to it in my lectures. My rationale is that fraud tells us a lot about how science works, both as science and as a human activity, as well as how science works as a career, since that clearly affects the motivation of some.

I have worked at the same university in a staff capacity for 13 years, and at other universities before that. How would you suggest getting a class like this added to the curriculum? Among my other duties is editing manuscripts prior to submission. I cringe at the plagiarism (self and non-self) and the lack of citations. Entire methods sections are copied from paper to paper, along with pieces of other sections. Sometimes the lack of citations is intentional, sometimes not, and sometimes it is caused by the journal limiting references. A recent review submission was allowed only a limited number of references.

We have a course on “current literature” for our students, with one paper selected for study each week in different areas. I pretty much always try to slip in a ringer: a paper with clear misconduct (splicing, panel duplication, etc.). The best papers are those that are not yet retracted, just in case the odd student actually does due diligence! The old Science Fraud blog was a goldmine for source material, and its passing is a big blow. Fortunately, the eagle-eyed hunters on this blog who look at other papers by an author whose work has been questioned serve to fill the gap. Well done to those people; your efforts are greatly appreciated.

Clear evidence of manipulation in a paper that the students get to see (and almost always miss on their reading!) allows the discussion to go in diverse directions: how to REALLY look at a paper (and not blindly take what the authors say), what else counts as misconduct (which leads into a discussion of FFP), other research misconduct, social responsibility, and so on. These lectures are some of my most enjoyable teaching moments; you can see the students’ horizons being broadened as you teach.

Just to play devil’s advocate: aren’t we also training the next generation of scientists to be better cheaters? Stapel pushed his luck to ridiculous extremes; that’s why he was caught. The lesson for many young scientists could simply be that if you cheat, do so in moderation, never do it casually, and always be strategic about when and how you do it. Competition for scientific jobs is fiercer than ever, so there is no reason to believe that cheating rates will go down: a smart and determined person who perceives success in the field as a life-or-death issue will cheat at some point, if that is what it takes. In other words, we may just encourage people to cheat in a less easily detectable way.

I believe this fear is for the most part unfounded. Retraction Watch has probably already raised the bar for every would-be cheater. If any scientist is now tempted to commit misconduct, he (or she) may be appalled by the prospect of showing up as a guest star on this site and getting his 15 minutes of fame. And no matter how painstakingly the tracks are covered, there will be “detectives” going after them. I hope the main effect will be that the challenges for would-be cheaters become so high that it is more attractive to do real science. Of course, if Retraction Watch became too effective, it would lose its “raison d’être”. But that won’t happen; there will always be cheaters.

This may vary from discipline to discipline, but I am confident that in large parts of biomedicine it is relatively simple to commit fraud completely undetectably. Basically, once you have mastered your individual method, you have a positive control to show your technique works and a negative control to show you have specificity; for each of your treatments you simply vary the levels of your negative and positive controls. That leaves aside basic techniques such as using completely different antibodies from what you document, using a completely different cell line, or seeding different cell numbers as your starting conditions, all of which is entirely based on trust. In other words, if you want to cheat, there is no system in the world that can stop you.

Of course, some techniques, like an NMR structure or a phylogenetic tree, are either unsafe or pointless to fake. But generally these are also techniques where achieving results is relatively straightforward, or, in the case of structural biology, where you can keep enough balls in the air that some of your projects ought to work.

My guidelines for faking it would be:

1. Keep it modest; don’t fake anything paradigm-changing. Make sure your faked results agree with the broad outlines of the general consensus of recent publications. And if you are unable to replicate someone else’s work, it is good manners to fake data that agree with the published results; this is good advice generally. As Oscar Wilde said: “My idea of an agreeable person is a person who agrees with me.” In other words, be careful not to put anyone’s nose out of joint.

2. Switch focus. If you have to fake something to publish in order to justify your existing grant and your future grants, don’t keep running down the same track; try a new tack that might have a chance of success. This might seem like common sense, but I was in a lab where a couple of groups across the globe just bounced unreproducible nonsense off each other for years, each carefully citing the others’ papers. It only stopped when the PI got a job at a different institution, doubtless with some relief.

3. Don’t use Photoshop! It is kind of weird that you would even have to mention this, but it is just amateur faking.

4. Go for low- and middle-ranking journals. Remember, you are just trying to maintain a foothold using faked results; don’t become dependent on faking. If you keep altering your hypotheses, some of them ought to genuinely work, and you can submit those to higher-ranking journals.

5. Don’t get too tricky! I know of a lab where a PhD student had faked results by altering the levels of the transfection control plasmid in order to obtain a response curve; he was so frightened of being caught that he left a series of freezedowns contaminated with the transfection control plasmid, so that if anyone tried to repeat his results the outcome would be exactly the same. Of course, no one was ever going to try to replicate his results, but someone did use his stocks for further cloning and ended up scratching their head as to why every one of his freezedowns contained a mixture of plasmids, especially as the illicit contaminating plasmid had a nasty habit of replicating at a higher rate and soon dominated.

Remember, hardly anyone is ever going to try to repeat your results, and if they do, it is impossible to prove anything against you provided your lab book is in order.

This is frightening. Are you implying that the scientists exposed here for their fraud are only the amateurs, too dumb to follow some elementary rules of caution? The tip of the iceberg? I find that tip already too big. In my field of interest, Diederik Stapel is the king of cheats, and I do not like the thought of some Super-Stapel who pulls it all off. Anyway, since the advent of Retraction Watch, things have become harder for would-be cheaters. In earlier years, retractions and the detection of misconduct took place largely behind the back of the (scientific) public. Now these things are in the spotlight.

“Things have become harder for would-be cheaters.” In what sense? In the sense that making up data completely, as Stapel did, may no longer be viable? There are probably a bunch of Stapels right under our noses. It does not take long to get suspicious: paper after paper after paper, results always coming out nicely, control experiments always resolving any ambiguities, interactions and triple interactions always coming out significant, and just as predicted. The problem is that identification is usually only easy after the fact!

“Things have become harder for would-be cheaters.” I did not mean that the fabrication itself has become harder. What I mean is that the consequences for your reputation, if the fraud is detected, have become more serious. In a way, RW is something like a pillory. It has already produced a row of unwilling “stars”, some of them now cursing this site or doing weird things. In earlier years, even if such things were detected, they did not make the headlines. As far as I remember, even the concept of a “retraction” was not of great interest previously, at least in science writing.

I’ve seen some iffy things with authorship and interdisciplinary power dynamics, but it is a little harder to flat-out fake data in chemistry. (Which makes certain recent events involving NMR spectra all the more interesting.)

I’ve done a goodly amount of biochemistry and cloning myself, though; LGR is quite correct about how easy it is to cook up fake results. (PS: all of the experiments in my PhD have been independently replicated!) Not because life science is “easy” exactly; more because we don’t (yet) have the analytical tools to make it more quantitative. (I’m working on that.)

I can tell you that the chemistry department at SBU had a whole line of measures in place to keep the hordes of premeds from cheating on their organic chem exams. [I rather liked TAing that class, ochem for premeds, though I didn’t mind answering the same Qs 40 different times, 3 times a week :)]

I hope soon to reveal one such evolving group of scientists, with a set of about 5 or 6 papers containing clearly flawed statistics in their earlier publications (late 2012 to mid-2013), all in pretty good journals because the results are new, original, and interesting. I believe the novelty factor may, amazingly, have blinded the editors of multiple journals to the stats flaws, so plain in sight, yet so unbelievably incompatible with the originality of the papers. In one case, an erratum was issued because the authors forgot to add statistical analyses to one column of data, which is what originally drew my attention to the paper and that group, along with a spam-like request from one of the authors for a post-doc in my lab entitled “Dear Professor”. In the very latest paper(s) by the group, however, the stats errors, which had formed a pattern in the previous half-dozen papers, suddenly disappear. This means, as Boris correctly points out, that those committing fraud are evolving to stay alive. It is almost as if the authors picked up on their own flaw, saw the risk, and manipulated the data to improve their chances of publishing success. Of course, it will be difficult to prove intent, but when I release this case I will have to contact the editors-in-chief of all the related journals so that they can come together and collectively analyze the proof. Darwin would be horrified to see this. Soon, I believe, precisely because the noose is tightening on those committing misconduct, we might not only be preventing future frauds through more reflective analysis before research and publication, as Rolf suggests below, but also, as I believe Boris correctly assesses, witnessing a chameleon-like evolution by those committing misconduct, changing colors to suit the landscape. It will become increasingly difficult to detect fraud or misconduct, which will breed a new culture of fraud detection.
One of these days, it will be even more difficult to tell an honest scientist from a fraudulent one.

I have a hypothesis, and I hope I am wrong. I believe that science is slowly but surely becoming militarized: more checks and controls; more centralized databases, spying, and cross-linking; more empowerment by agglomeration of powers. Think about it. Thomson Reuters and the global impact factor. COPE and the global “ethics”. ORCID and the global registration. CrossCheck and the global referencing system, empowered by the global DOI. A global numbering system, the ISSN and the ISBN. Globalized manuscript submission systems like Manuscript Central, with centralized databases. Globalized (or trying to get there) commercialization of ethics control, iThenticate. Globalized empowerment is dangerous. The establishment is dangerous in the long run. I think, if retractions keep evolving the way I have seen them evolve on RW in the past 12 months or so, there will no longer be scientists who want to stay in science out of pleasure. We are starting to breed a class of irritated, bitter, suspicious, radically dishonest and radically honest scientists. This may very well be the next scientific Renaissance, but not because of the scientific content. We may be starting to breed a robotic generation of scientists who no longer think innovatively for pleasure, or dare to take risks and make errors, which are natural parts of scientific research, but who must be extremely calculating, rule out error, eliminate fraud, and make science a cold, calculated “thing” that can be predicted, planned, and manipulated by those in power with an eye to long-term profitability. As a scientist, I have never seen science and science publishing so horribly manipulated by the powers that be, which is possibly why the open access and anti-copyright movements have taken off so well, with their own pools of fraud.
I predict, finally, that this touch-screen and lens-reading technology will all be integrated into the military-industrial complex, which is becoming the basal standard for global society. One day we will probably have to touch the screen with a fingerprint to log in and submit a manuscript, or have our eyes read by laser, with a lie-detection test about issues related to data, authorship, originality, etc. I have been trying to understand retractions in the wider picture, and I see them as an evolving force, something like global warming, a volcanic eruption, or a tsunami, which will irreparably remold the landscape.