Can you commit research misconduct if you fail to detect false data from another scientist?

The answer is yes, and here’s how it can happen.

You work in a well-regarded laboratory that receives government funding. You are frequently a principal investigator (PI) and a lead author. The lab suffered from some disorganization so when you took over, you demanded quality work and hired a new lab administrator.

Things are generally good but life in the laboratory is demanding. The size of the lab makes it impossible for you to validate every piece of data. So, you often have to trust that a colleague’s work is reliable and truthful, including from collaborators at other facilities. Funding, as always, is a problem, which means you can’t buy enough equipment and data security software; tracking who did what is difficult. Some lab employees (inherited from your predecessor) have professional or ‘personnel’ issues and you suspect some will leave the laboratory. And of course, there is growing pressure to publish, attend conferences, make new findings, and to keep the funding stream going. There is never enough time.

All of that probably sounds familiar, but here’s where our story takes a turn for the worse.

One day, you are summoned to a meeting with the institution’s research integrity officer (RIO) and told that some data in one of your papers have been falsified and that a misconduct investigation has begun. You are shocked. You know the data but didn’t validate them personally before publication, and you don’t know precisely who did the work.

One always worries about errors, but deliberate falsification and misconduct? You are angry but soon become uneasy as you wonder why you weren’t the first to hear about the problems with the data. Your shock and unease turn to dread when the RIO looks you square in the eye, hands you a sheaf of documents, and says you are being charged with misconduct, being placed on leave, and must immediately turn over all of your files, data, and laptops.

As the weeks go by, you learn you are the only person in the lab under investigation. You maintain your innocence and no evidence ever emerges that you falsified the images or knew they were false. You have your suspicions as to who falsified the data, but the dean doesn’t seem interested. No one else is charged. It is starting to feel like you’re being railroaded.

After hiring a lawyer, you protest this “selective prosecution.” You also alert the administration to the waste of research funds and financial mismanagement. Your institution finds you – and only you – committed misconduct. You are terminated.

Years pass, lawsuits and hearings pile up. You win a victory when your employer is found to have retaliated against you — but the misconduct finding stands. The U.S. Office of Research Integrity (ORI), adopting the institution’s report, brings formal misconduct charges and bars you from receiving Federal funding for 10 years. How is this possible if you had no knowledge the data were false? You believe it is wrong and appeal, hoping a neutral judge will rectify this injustice.

The administrative law judge (ALJ) agrees to hold a hearing, the first such ALJ hearing in over a dozen years, and after months of waiting and more briefs and legal fees, he issues a 126-page decision.

Your debarment is upheld, although the judge reduces it to five years.

Not just a hypothetical

If you’ve been a regular reader of Retraction Watch, you will recognize this as the case of Christian Kreipke, a researcher formerly at Wayne State University in Detroit. Although the long-running Kreipke case has been discussed previously, the ALJ’s May 2018 decision has something important for everyone. I suspect that – for many years to come – it will be referred to by ALJs, by ORI, by university administrators, by RIOs, by investigation panels, by science journalists, and by lawyers.

However, there is something especially important in the decision for senior scientists, especially PIs and lead authors.

As a lawyer who represents scientists in misconduct cases, I find that one of the most difficult issues is the responsibility of lead authors and PIs for work done under their supervision. Although the Kreipke decision is important for many reasons, this part of the decision could affect lab practices everywhere.

Perhaps the most significant fact in the case is that Kreipke did not falsify or fabricate any data. Nor did he know any data were false. The first time he learned of false or fabricated data was when he was told by the RIO after the commencement of the investigation. Kreipke’s problem was that he didn’t detect the falsification done by another scientist.

Nevertheless, the ALJ concluded, without much difficulty, that Kreipke committed misconduct. Nor did it matter to the ALJ that Kreipke was the only one charged with misconduct.

How is this possible?

What it means to be “reckless”

Followers of research misconduct cases will know that a scientist commits misconduct merely if he or she “recklessly” allowed the inclusion of false or fabricated data. But what does it mean to be “reckless?” The Kreipke decision is important because it puts some “meat on the bones” of this legal term. The ALJ said that including false or fabricated data without validating its accuracy is reckless if one “used materials without exercising proper care or caution and disregarded or was indifferent to the risk that the material were false, fabricated, or plagiarized.” It is not a defense if you assumed others performed research reliably and truthfully reported those results to you. As far as the ALJ was concerned, Dr. Kreipke didn’t do enough to validate the data.

After Kreipke, can one rely on the work of others? One of the witnesses said “[We’ve] known each other for 25 or 30 years. And because of that association, I would have accepted what came out of that laboratory without question unless I saw something.” Even though this kind of trust was a common practice, the ALJ decided it was not a defense because it may result in false or fabricated data. A PI or corresponding author cannot just accept “on faith” that the data were correctly labeled and were accurate representations, even if they came from a longstanding collaborator or a trusted scientist in one’s own lab.

The ALJ said that an author, editor, or other contributor is not presumptively liable for false material just because their name is on a grant application or article. However, if one is a PI or first author, that person is presumably responsible for the content of the work.

Higher validation standards?

What does it mean? More importantly, what do PIs and lead authors now have to do to protect themselves from allegations of misconduct? Can any data be trusted or does the PI or lead author have to verify everything personally?

Some would say the Kreipke decision sets an unreasonably high standard for validating research on a collaborative project and upsets long established norms. Others might say the ALJ got it right and Kreipke, as a lead author and PI, had a personal responsibility to police the work of others and to ensure it was accurate and verifiable.

What is now clear is that senior researchers, lab supervisors, PIs, and lead authors can no longer accept the work of collaborators at other labs or, for that matter, the work of scientists in their own lab, without insisting on (or conducting) some level of validation. More importantly, even if there has not been a hint anyone’s work is suspect, the senior scientist, PI, or lead author must demand the work be validated. In a misconduct case, the lab’s procedures for verifying data will now come under close scrutiny. The Kreipke case establishes that a PI, lead author, or lab head who fails to validate data or employ adequate validation procedures may be personally liable for misconduct, even if the scientist had no knowledge of falsification or fabrication.

Accuracy in reported research is a core value. Scientists entrusted with government funds should take all necessary steps to ensure research issued under their name is correct. The possibility of a research misconduct finding creates a powerful incentive for scientists in a supervisory capacity to demand valid and accurate data. The Kreipke case shows that a scientist is at risk for a research misconduct allegation if he or she does not validate data, even if it comes from a long-time collaborator or a trusted colleague.

26 thoughts on “Are you liable for misconduct by scientific collaborators? What a recent court decision could mean for scientists”

I’m not keen on Goldstein’s narrative at the start. Specifically this bit…

You work in a well-regarded laboratory that receives government funding. You are frequently a principal investigator (PI) and a lead author. The lab suffered from some disorganization so when you took over…

Either you’re the PI of the lab, or you’re not. You can’t “frequently be a PI”, there’s no such thing in academia! There’s senior authorship, if that’s what Goldstein is getting at, but that’s not the same as being a PI (the PI).

Is Kreipke claiming that he inherited all the bad stuff from his predecessor when he “took over” the lab? Again, this is not really how things work in academia – labs don’t just get given to someone with all the (salaried) people included when a senior scientist leaves. If Kreipke was really put in charge, then surely he would have had the ability to make hiring/firing decisions, and keep only those people he trusted? Instead, he chose to keep them and (it seems) put his own name as PI and senior author on their data when he published it.

Then there’s the glaring issue of authorships – many of the papers retracted by Kreipke are with him as first author, not senior (last) author, and not as PI. Generally in academia, the first author is the person who generates the data. This does not fit with the notion of coming into a lab and taking over as PI.

Missing from the narrative is how the old PI came to leave, the succession of power from the old PI to Kreipke, and who the “real bad actors” were. The main PI associated with Kreipke appears to be Jose Rafols, but he’s disappeared from Wayne State’s website.

Perhaps someone from there can chime in on how Rafols came to leave? It seems from Goldstein’s narrative that Kreipke is trying to blame his predecessor for all this, but maybe he can’t say it directly because that might open up defamation liability.

Sometimes that is how things work. It’s certainly not the most common scenario, but I’ve seen it happen multiple times. And unfortunately, hiring/firing decisions aren’t always that straightforward.

What you think is “generally” true in academia isn’t entirely accurate. Consensus about authorship order and responsibility varies widely by discipline (and sometimes by location), and there is limited formal guidance on the topic generally. ICMJE to my knowledge gives no specific guidance on author order beyond defining the responsibilities of corresponding author, for example.

It’s so easy and tempting to apply our own experiences to situations to justify our own proclamations of what “should” be the ideal case, but we should avoid doing so. Even your PI comment isn’t accurate — yes, in many cases the lab’s PI is the PI for any given project being done in that lab, but again, this is not always the case. I can pull any number of IACUC or IRB filings at my institution where the named PI is not the same name on the lab door.

I will also point out that one cannot simultaneously advocate for severe, career-ending consequences for misconduct and bemoan the increased role of attorneys in these processes. The latter is to a significant degree a consequence of the former.

No, Boboramus is not. The principal investigator is head of a grant (the “study-by-study basis”). The PI may or may not be the lead author on any papers produced through the research funded by the grant. They may or may not be the senior author on papers produced through research funded by the grant. They may or may not have their own lab – there are a number of post-doc PIs out there, because to be a PI you again just have to be the head person in charge of a grant.

Mr. Brookes: The source of my knowledge of the case comes from the ALJ’s 126-page decision, which I had to summarize given space limitations. I urge people to read it but, unfortunately, despite its length, there are some ‘facts’ it doesn’t discuss in much detail, such as the state of the lab when Dr. Kreipke took over. My posting also had to gloss over the fact that there was a lot of conflict between lab members, including that one scientist got a ‘protective order.’ In any case, I am not defending or exonerating anyone; my objective was to point out that ‘senior scientists’ (be they PIs, lead authors, or whatever) are at risk if they don’t validate the work of others. Judging from comments, that is a sentiment many seem to share.

Implement practices from industry:
1. Institute and document training programs for any protocols that are standard throughout your lab.
2. Institute a calibration and preventive maintenance program for all equipment from the basics (e.g. pipettes, scales) to the complex (e.g. dynamic material analyzers, flow cytometers).
3. Review and sign lab notebooks.
4. Look at raw data with your trainees and ensure the appropriate statistical analysis was done.
5. Set foot in the lab on more than one day per year.
6. Consider setting up an audit system. Industry labs and manufacturing facilities are audited on a regular basis – at least annually if not more. No reason we can’t audit research labs other than no one wanting to foot the bill.

The number of times that I witnessed poor lab practices in grad school at a research university left me rather jaded. It’s difficult for me to trust findings from a good number of academic labs because of this. Grad students training other grad students is like the blind leading the blind. Unfortunately, due to the nature of scientific academia, this can mean that the errors of grad students are passed down and even PIs don’t know how to properly conduct their research.

‘What is now clear is that senior researchers, lab supervisors, PIs, and lead authors can no longer accept the work of collaborators at other labs or, for that matter, the work of scientists in their own lab, without insisting on (or conducting) some level of validation.’

Excuse me, isn’t this what everybody does when coauthoring a paper? Or are we talking about ‘Do this and put my name on the paper. Add X and Y because they need papers and Z because I owe one’?

“After hiring a lawyer, you protest this “selective prosecution.” You also alert the administration to the waste of research funds and financial mismanagement. ”

I take that to mean that Kreipke alerted the administration to the waste of research funds and financial mismanagement after he hired a lawyer and after the misconduct investigation started.

Could attorney Richard Goldstein clarify that point?
If Kreipke alerted the administration to waste and mismanagement after the misconduct investigation started it sounds like clumsy virtue signalling and distraction, verging on retaliation.

Mr. Pessoa: The facts of the Kreipke case are detailed and had to be condensed for the posting. If you read prior posts on the case, you will see he had a dual appointment (at the VA and at WSU) and the retaliation occurred at the VA, not at WSU, which is where he was found to have committed misconduct. The selective prosecution issue is different from retaliation and, as the ALJ noted, troubling. Ultimately, neither concerns about selective prosecution nor proof of retaliation prevented the ALJ from affirming the ORI debarment. I’d be happy to discuss this in greater detail if you wish.

Disgruntled is absolutely right. In industry, auditing is done both by internal auditing groups and by regulatory agencies as well. And in addition to the auditing, lab notebooks & other data sources have 2nd person sign-off. There is a very high level of accountability.

When outside labs are used (i.e., outside the company), it’s common practice to audit them as well.

Good Laboratory Practices (GLP) are common & often required in industry. This includes validating that all instruments are functioning as required. While fraud can still happen in industry, with all of these checks in place, it’s very hard to do so (particularly at large companies that have these many controls in-place).

Our lab in grad school had nothing like this. (virtually no checks & balances).

It’s ironic that the public trusts academic labs more so than industry labs.

All of the traditional approaches that depend on assumptions about the trustworthiness of people haven’t worked, so why not take a chance on trying something new? Moral hazard puts everybody at risk of dishonesty. It is part and parcel of the human condition. Rationality argues for a change in behavior. To resist is insane by one definition of “insanity”: “Repeatedly doing the same thing expecting different results.”

I have worked in industry and in collaboration with academia, and I’m afraid all the checks and balances you discuss are thrown out of the window when data “analysis” occurs. This includes clinical trial data, whereby professors/medical PIs, grant PIs, and an “external” auditor check everything, even including the final report sent to the MHRA. As a 30+ years researcher I don’t believe 99% of published research (this is the medical field, UK), preclinical and clinical, and I have done both.

I think maybe the dude could pull off “I dunno where this data came from I just published it” for *one* paper. I might have some sympathy for that; miscommunication is a possibility.

This is like… 6 papers? 7? If your entire career is built on data you can’t even source and try to disavow when it turns out to be fraudulent… come on. This article doesn’t even pass the most basic sniff tests for plausibility. I think that if every paper I’d published turned out to be full of data I couldn’t explain, people might rightly conclude that I’d committed career-long fraud.

“The ALJ said that an author, editor, or other contributor is not presumptively liable for false material just because their name is on a grant application or article. However, if one is a PI or first author, that person is presumably responsible for the content of the work.”

I’d go farther.

Call me old-fashioned, but I still believe that if you put your name on a paper as an author, you are responsible for the entire paper, even if you are buried in the middle.

Yes, the first authors, senior authors and – I would argue seasoned authors – have prime responsibility for inspecting the data and assuring that it is valid.

That’s why arguments such as – those weren’t “my” papers (even though I was an author) that were retracted – did not ring true to me.

If you are an author, you are responsible. Especially if data in the paper is questioned (through legitimate channels), it’s your responsibility to ensure that the data is correct or that it is corrected. If the data was falsified or fabricated or seriously in error, it’s your responsibility to see that the scientific literature is corrected. Doesn’t matter where you are on the author list.

Come on, people! Much of this discussion is entirely irrelevant. Misconduct is unrelated to position–PI, first author, last author, corresponding author, etc. In research misconduct, as with other misdeeds, the person(s) who did it, did it. There may be other people who were unwitting participants at one level or another and these or others may have some accountability. That doesn’t mean, however, they did the deed.

If a bank teller embezzles funds from the bank, did the branch manager do it? No, the teller did. Maybe the manager didn’t have effective oversight or compliance checks in place, but the TELLER did the crime, not the manager. And maybe the manager will face some sanctions for lax oversight. But that does not mean the manager did the embezzling.

It appears crystal clear that the issue in this case/decision is the fact that ORI’s “scientist-investigator” was not qualified (in the legal sense, not the common usage) by the attorney. Kreipke uses this in his statement indicating that the judge said he was unqualified. Again, that’s a legal determination; in common terms, I’d be willing to bet that the “scientist-investigator” is very formidable and unexcelled in his qualifications!

The outcome of this trial is exactly and entirely because the judge excluded ORI’s investigative results. As stated previously, it’s difficult to imagine the leap of faith required to discount essentially everything a person says, but then accept that their misuse of data is accidental.

It seems to me that this case has the potential to rewrite the whole story of research misconduct, moving it into the realm of quantum-like phenomena: no more local reality.

Alex Runko was one of the good ones. The failure to adequately establish his qualifications and credentials for the purposes of this case falls wholly on ORI. He no longer works there, but he was excellent at his job and was very highly regarded by the RIO community.

What does this finding do to cross-discipline collaborations? The whole point of undertaking such a collaboration is that I don’t have the skill or capacity to test this, but I know someone who does. As I am not an expert, I can’t fully verify the data. Maybe I can notice a badly Photoshopped figure, but beyond that I have to trust that my fellow researcher is properly representing the data.

Excellent observation. It has been my experience that a scientist on a project is often required to rely on the expertise of another. In my practice, many cases involve a ‘false’ image. Once the problem is identified, the other scientists on the project are often in the position of trying to explain why they didn’t catch it. As a lawyer, I can only say that there is no one rule that will apply in all cases; each situation is different. However, the Kreipke case makes it clear that a scientist must exercise ‘due care’ for the accuracy of work contributed by others. In practical terms, this means that, as a general rule, one must implement procedures and safeguards designed to guard against errors and fraud. What procedures and safeguards are adequate will differ in each situation.