Hauser’s work aimed at discerning the roots of human behavior, cognition, morality, and communication in monkeys. He was widely known for this work and wrote several popular and technical books. We chose him to be one of the three plenary speakers at the University of Chicago’s Darwin Day conference in 2009.

Since Hauser’s fabricated and/or flawed research was done using federal grant money, there was more to come, for government agencies are required by law to investigate. As boston.com reports, the Office of Research Integrity (ORI) of the U.S. Public Health Service (which runs the National Institutes of Health) has issued a final report on the accusations that Hauser engaged in research misconduct.

You can download the ORI’s report here; it’s only eight pages long, but it shows that the investigators went through his research records thoroughly, looking at every graph and data point and comparing them with the original data from tapes of monkey behavior and their transcriptions. Four grants were involved in the dubious research.

Among the findings were that Hauser:

Fabricated data and also misrepresented data in graphs

Falsified the coding of monkey behavior observed in trials

Misrepresented how data were coded in a paper

Misrepresented (apparently through fabrication) sample sizes of some behavioral responses in two papers

Gave false statements about the number of monkeys identifiable by their markings

Produced statistically significant results (when they actually weren’t) by fabricating new coding for data previously coded by a research assistant

This resulted in one paper being retracted and two corrected. Errors were also found in work that Hauser’s lab hadn’t published. Hauser will not admit deliberate misconduct, but made this statement:

“Although I have fundamental differences with some of the findings,” Hauser wrote, “I acknowledge that I made mistakes. … I let important details get away from my control, and as head of the lab, I take responsibility for all errors made within the lab, whether or not I was directly involved.”

To give you an idea of the depth of the investigation, here’s a brief extract from the report:

Respondent published fabricated data in Figure 2 of the paper Hauser, M.D., Weiss, D., & Marcus, G. “Rule learning by cotton-top tamarins.” Cognition 86:B15-B22, 2002, which reported data on experiments designed to determine whether tamarin monkeys habituated to a sound pattern consisting of three sequential syllables (for example AAB) would then distinguish a different sound pattern (i.e., ABB). Figure 2 is a bar graph showing results obtained with 14 monkeys exposed either to the same or different sound patterns than they were habituated to. Because the tamarins were never exposed to the same sound pattern after habituation, half of the data in the graph was fabricated. Figure 2 is also false because the actual height of the bars for the monkeys purportedly receiving the same test pattern that they had been habituated to totaled 16 animals (7.14 subjects as responding and 8.87 subjects as non-responding).

What amazes me is the leniency of the “punishment.” Making data up is the primary sin that a scientist can commit. I expected that, at the least, Hauser would be banned from ever receiving federal grant money. He also ran the chance of going to jail. But what did the ORI do? Virtually nothing: a slap on the wrist. Hauser can still apply for federal grant money, but he must do so by submitting ancillary statements from himself and the university that his research will be supervised for accuracy and, when completed, be given an imprimatur of validity by his university. He also won’t be allowed to serve as a consultant or member of federal grant panels, which is not really a punishment at all:

Respondent neither admits nor denies committing research misconduct but accepts ORI has found evidence of research misconduct as set forth above and has entered into a Voluntary Settlement Agreement to resolve this matter. The settlement is not an admission of liability on the part of the Respondent. Dr. Hauser has voluntarily agreed for a period of three years, beginning on August 9, 2012:

(1) to have any U.S. Public Health Service (PHS)-supported research supervised; Respondent agreed that prior to the submission of an application for PHS support for a research project on which the Respondent’s participation is proposed and prior to Respondent’s participation in any capacity on PHS-supported research, Respondent shall ensure that a plan for supervision of Respondent’s duties is submitted to ORI for approval; the supervision plan must be designed to ensure the scientific integrity of Respondent’s research contribution; Respondent agreed that he shall not participate in any PHS-supported research until such a supervision plan is submitted to and approved by ORI; Respondent agreed to maintain responsibility for compliance with the agreed upon supervision plan;

(2) that any institution employing him shall submit, in conjunction with each application for PHS funds, or report, manuscript, or abstract involving PHS-supported research in which Respondent is involved, a certification to ORI that the data provided by Respondent are based on actual experiments or are otherwise legitimately derived, that the data, procedures, and methodology are accurately reported in the application, report, manuscript, or abstract, and that the text in such submissions is his own or properly cites the source of copied language and ideas; and

(3) to exclude himself voluntarily from serving in any advisory capacity to PHS including, but not limited to, service on any PHS advisory committee, board, and/or peer review committee, or as a consultant.

Frankly, I am both puzzled and appalled that such a light punishment was levied for such severe misconduct. This won’t serve as much of a deterrent to research fraud. However, Hauser did lose an academic plum in the process: his job at Harvard. And his research—if he continues it—will be forever under a pall of doubt.

I agree that the “punishment” was unusually light, but at least the investigation was robust and the standards expected were rigorous. Oh, that such considerations and procedures could be applied to “sophisticated theology” and Intelligent Design.

I’m not making excuses for the man, but if you read his papers, you have to agree that his experimental designs were sheer genius.

My guess is that under pressure to push out ever more papers, he just started cutting corners and it got out of hand.

“well, this trial is similar to one we ran last year, so I’ll just assume that data would be the same. I’ll just… make it so.”

regardless of motivations, or what you think the punishments should or should not be, the core of the man’s work is still a valuable contribution to the study of animal behavior and cognition, and should not be disregarded because of a later lapse of judgment.

his work was examined with a fine-tooth comb, and the ONLY papers retracted or amended were the 3 specifically mentioned.

how many does that leave that were examined and found to be solid?

gotta be at least 30 or 40?

I only mention this because while the man made some mistakes, his overall body of work is still valuable, and you really should mention that.

there was, after all, a REASON he was invited to be a primary speaker at that conference, and it had little to do with the specific projects involved in this investigation.

his work was examined with a fine tooth comb, and the ONLY paper retracted or amended were the 3 specifically mentioned.
how many does that leave that were examined and found to be solid?

The correct answer is: Zero.
Zero.
Why can you not understand this? I’ve just read the previous threads here at WEIT and you’ve been making stuff up and ignoring the truth from the get-go.

Nobody EVER went through his entire body of work with a fine-tooth comb. OK? Never. It never happened and it never will.

The internal Harvard investigation–still secret, btw, despite your confident predictions–ONLY covered specific allegations (all from students and collaborators, btw, none picked up in peer-review).
And the federal investigation only looked at a subset of those allegations, the ones that had been used in grant proposals to obtain federal funding (that’s why it includes unpublished work).

As far as I know (and, I’ll add, as far as you are able to know) not a single article was examined and found to be ‘solid’. And again, that’s because the only articles investigated were those with known (by students or collaborators) problems.

It doesn’t matter how “ingenious” you find his experimental designs. It’s about data. He has fabricated and falsified data–this is no longer in question–and there is absolutely no way to ever know how much of his published work features such fictional data.

And I guess at this point I can let slip how:
At the time this all went down I was married to a member of his lab.
I knew the students involved and heard their stories first-hand over beers before they even decided to report him.
I knew about the Harvard investigation before Hauser did (the lab was raided while he was on sabbatical in Europe). My (ex-)wife was interviewed several times by Harvard lawyers.
We all had to sit tight and shut up about it for three years until any news broke at all, and even then we weren’t supposed to say anything because of the Fed investigation.

Anyway. When you try to tell me I’m wrong about any of this, I have to laugh. Try to pick your fights better.

He has fabricated and falsified data–this is no longer in question–and there is absolutely no way to ever know how much of his published work features such fictional data.

so of the studies of his that HAVE been successfully replicated by other labs, do you think they made up their data too?

he was investigated for these instances for two reasons:

-on specific projects, people taking the data noticed discrepancies between what they entered and what Hauser wrote in the published work

-there were other labs that were unable to duplicate the results of some of his experiments, and were unable to get Hauser to clarify.

so, while you rant and rail about your inane conclusions, you’ve never read the body of his work, know NOTHING about the field he worked in or what other labs have tried similar things, and are more than happy to conclude without reason his entire body of work is fraud.

The replications by other labs are NOT proof that his original experiments were without fraud. The guy is smart and knows what to expect. He may have fudged the data in the correct direction. You can never know he did not do this. The loss of trust extends to all his work. Replications do not change this; those replications should be evaluated strictly on their own merits, as if Hauser’s papers on the subject did not exist.

dude.
I know precisely whereof I speak.
And I also know that you do not.
You’re wrong about why and how he was investigated but you keep blowing hard in his defense anyway.

And I did not conclude that his entire body of work is a fraud.
I specifically said instead that nobody will ever know how much is or isn’t.

However, nobody can now trust any of the data he has published (except for coauthors who themselves collected and analyzed the data–but that’s not how Hauser normally worked). The data, OK? It’s quite bizarre for you to deny that simple conclusion.

of the studies of his that HAVE been successfully replicated by other labs,

N = 1, as far as I know. Feel free to use your extensive knowledge of his body of work to provide some links for my further edification.

“My guess is that under pressure to push out ever more papers, he just started cutting corners and it got out of hand.”

You are being too kind. Reading the coverage of this case, you’ll see it was his lab workers (grad students or postdocs, I presume) who blew the whistle after witnessing serious misconduct. My understanding is that they tried to convince him not to publish such results, and he reacted badly. He got off lightly if you ask me. Fortunately, he seems radioactive and should not be found in any serious labs anymore. I see he now runs a “science based” gaming site.

I doubt there are any applications based on the alleged research; any attempt to develop applications would surely yield negative results. What is in jeopardy are any publications by honest scientists who cited Hauser’s fraudulent articles – their own experiments and conclusions may be in doubt.

For those unfamiliar with day-to-day scientific research, it might be useful to make clear why evidence of fraud in one paper can throw all of a scientist’s work under suspicion.
Peer review, the quality-control mechanism designed to prevent incorrect or unevidenced work being presented to the public, can only pick out ‘honest’ mistakes by the scientist in question. When we review a paper we take it for granted that the researcher is not using subterfuge to get a publication. It would be simple to go down this route. Reviewers only see pictures of results or tables of data, either of which would be easy to fake – if one had the inclination.
A proper review of the data that took into account the possibility of malfeasance would involve replication of the experiments by the reviewers – something that almost never happens, for reasons of time and cost, and in the expectation/hope that future studies from other groups will carry out this function.
Someone revealed to have committed misconduct of the sort of which Hauser has been convicted is left standing before his peers saying, ‘well OK, I lied about this experiment, but I’m telling the truth about all the others’!

This has been extremely disappointing for me, as I greatly enjoyed Wild Minds and (prior to the allegations) recommended it to friends. Since this story broke a while back, I can’t recommend it and I don’t trust what I drew from it.

What amazes me is the leniency of the “punishment.”…But what did the ORI do? Virtually nothing: a slap on the wrist. Hauser can still apply for federal grant money, but he must do so by submitting ancillary statements from himself and the university that his research will be supervised for accuracy and, when completed, be given an imprimatur of validity by his university.

Just a guess, but I expect that ORI didn’t impose an outright ban because they don’t need to. Federal grants tend to be highly competitive. You often or typically get more acceptable submissions than you can fund. Hauser’s tarnished reputation and ORI’s demand that he remind reviewers of his fraud in all future grant applications means that, on a practical level, he’s not going to win any highly competitive grants. He may win some grants in which his application is the only good one submitted (and I guess I don’t have much of a problem with that). But in any case where the grant reviewers think both submissions A and B are credible projects worthy of funding, but they can only fund one of them, and one of them is Hauser’s, they’re going to pick the other one.

I don’t really see how the evidence of research misconduct affects how one approaches his review/conceptual/popular writing. He’s a very smart guy, and he understood the issues and nuances of his field(s) very well. There’s no reason to question his writing (well, there’s one thing but it’s tangential) in those venues, or indeed even the theoretical and review portions of the Introductions and Discussions of his published work.
What’s in question, and always will be, is the veracity of his data and, in the absence of replication, the conclusions directly drawn from them.

“I don’t really see how the evidence of research misconduct affects how one approaches his review/conceptual/popular writing.”
This is one thing I’ve been wondering about, since I have a copy of Moral Minds on my shelf and been waiting for this investigation to finish as to whether or not the content was under question.

I agree with most of that article, though IMHO they don’t say it quite correctly.

The problem is that in many areas statistical analysis of the results is vital, but the scientists in question don’t know enough statistics themselves. Hence often they either don’t do any statistics, so that they don’t get as much information out of their results as they could, or they apply statistical analyses that they don’t fully understand, which is often actively misleading. (I wrote a long rant about this in an archaeology journal in the 90s).

My advice is always:
1) Plot the data to get a feel for them. If you’re lucky the conclusions might jump off the page.
2) If not, ask a statistician

It looks like Hauser’s crimes were more serious and deliberate than misusing a statistical method, but arguably the latter is responsible for a greater amount of misinformation.
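The “plot the data first” rule above can be sketched in a few lines. The numbers below are hypothetical, invented purely to show how identical summary statistics can hide very different data:

```python
# Toy illustration of "plot the data first": two samples with the same
# mean but very different shapes. The numbers are hypothetical, not
# drawn from any real study.
import statistics

steady = [4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]   # evenly spread
skewed = [5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 14.5]   # one big outlier

for name, data in [("steady", steady), ("skewed", skewed)]:
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    print(f"{name}: mean={mean:.2f} stdev={sd:.2f}")
    # crude text "plot": one row per observation, offset by its value
    for x in data:
        print(" " * round(x) + "*")
```

Both samples have a mean of exactly 7.0, yet the plot makes the outlier in the second sample obvious at a glance; a lone mean reported in a table would never reveal it.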

Fraud like Hauser’s inflicts damage in lots of ways, large and small. Several years ago, before it was uncovered, a former professor of mine who studies cognition in rats told some of us how bad he felt about his own accomplishments by comparison with Marc Hauser’s. I’m sure that, as a scientist, he would have preferred that Hauser had turned out to be honest. But I hope he’s allowing himself to feel at least a little schadenfreude right now. If he is, he came by it honestly.

I read the article and the report yesterday and I looked up his articles in the PsycINFO database. I was surprised at the breadth of his inquiries, and that his misconduct was with the species he knew best. Perhaps negative results with his pet species were especially distressing to him, but a negative answer is still an answer, so why fudge the data?

And then as I scanned further I saw something that made me go “oh *that* guy…”

I remember a news report about “research” that proved something about dog intelligence or communication based on dogs looking at what the human researcher pointed to. He was that guy!

Anyone who has owned a dog knows that when you point to something the dog will look at your finger! You can probably train some dogs to look at what you’re pointing to, but not enough of them to prove anything about non-human communication through pointing and it’s certainly not instinctive.

That research must not have been federally funded, or it would have been discredited yesterday too.

Here’s the article referenced. It was in fact funded by NSF.
But as I understand it the investigation was not into published research funded by grants, but rather of research that was used in grant proposals to obtain funds.

Yes, investigations cost a hell of a lot of time and money so they tend to be quite restricted. Among other things, even if there were criminal proceedings the statutes of limitations would lead a team to investigate only the years which can still be prosecuted (and if they had the resources they might go further back to establish a history). I was surprised to learn that many people believe only the publications which were investigated and shown to be fraudulent are indeed fraudulent and that other publications must somehow be magically free of fraud.

Boys Club comes to mind. Not many people get away with defrauding the federal government. So now he’s trying to perpetuate the myth that he “made mistakes” rather than willfully published fraudulent articles. It’s sad that people these days put up with such nonsense – perhaps they have been inured to lies by the political parties?