Marc Hauser, a prominent Harvard psychology researcher and author, will be taking a leave of absence from the university after “a lengthy internal investigation found evidence of scientific misconduct in his laboratory,” which has led to the retraction of one of his papers, according to The Boston Globe.

The retraction, of a 2002 paper in Cognition, reads, in part: “An internal examination at Harvard University . . . found that the data do not support the reported findings. We therefore are retracting this article,” the Globe reports. It also includes the sentence “MH accepts responsibility for the error.”

The retraction notice does not yet appear anywhere on the journal’s site, where the PDF version of the study is still available, nor on the Medline abstract. Its circumstances appear to be atypical, according to the Globe:

The editor of Cognition, Gerry Altmann, said in an interview that he had not been told what specific errors had been made in the paper, which is unusual. “Generally when a manuscript is withdrawn, in my experience at any rate, we know a little more background than is actually published in the retraction,’’ he said. “The data not supporting the findings is ambiguous.’’

Gary Marcus, a psychology professor at New York University and one of the co-authors of the paper, said he drafted the introduction and conclusions of the paper, based on data that Hauser collected and analyzed.

“Professor Hauser alerted me that he was concerned about the nature of the data, and suggested that there were problems with the videotape record of the study,’’ Marcus wrote in an e-mail. “I never actually saw the raw data, just his summaries, so I can’t speak to the exact nature of what went wrong.’’

Hauser, who studies learning in monkeys, was an associate editor of the journal in 2002, when the paper was published, according to his online bio. Obviously, he would have recused himself from reviewing his own work. [See update at end.]

The paper has been cited 86 times by other studies, according to the Thomson Reuters Web of Knowledge (disclosure: Reuters Health, where Ivan works, is obviously also part of Thomson Reuters).

It has been discovered that the video records and field notes collected by the researcher who performed the experiments (D. Glynn) are incomplete for two of the conditions. Following the discovery of the incomplete video records and field notes, Wood and Hauser returned to Cayo Santiago to re-run the three main experimental conditions reported in the paper, videotaping every trial and accompanying the video records with field notes.

Glynn is not a co-author on the replication paper. The 2007 Proceedings of the Royal Society study has been cited seven times.

In case anyone still wonders about the effects of retractions on the scientific process, or why we make a note of how often papers in question have been cited here on Retraction Watch, here’s what a researcher in Hauser’s field told the Globe:

“This retraction creates a quandary for those of us in the field about whether other results are to be trusted as well, especially since there are other papers currently being reconsidered by other journals as well,’’ Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, said in an e-mail. “If scientists can’t trust published papers, the whole process breaks down.’’

Update, 4:20 p.m. Eastern, 8/10/10: Cognition editor Gerry Altmann returned an email we sent him after this post went live. We asked him for the text of the retraction, and for some context on how associate editors’ work was handled during peer review.

The text of the retraction, which is in press and will appear in an upcoming issue:

An internal examination at Harvard University of the research reported in “Rule learning by cotton-top tamarins,” Cognition 86 (2002), pp. B15-B22, found that the data do not support the reported findings. We therefore are retracting this article. MH accepts responsibility for the error.

On Hauser’s position as an associate editor at Cognition during the time this paper was published:

I cannot comment on how papers from editorial board members or associate editors were handled back in 2002 as I was not the editor then. The procedure now is that papers from editorial board members are handled in exactly the same way as regular submissions – no special privileges. The same is true for papers by associate editors (or authored by associate editors). The only difference is that I handle all manuscripts by associate editors, rather than passing them on to one of the other associates. Also, if an associate editor is an author, they are ‘blinded’ within the system so that he/she cannot find out who is reviewing his/her paper. When I’ve been an author (only happened once, while I’ve been editor, but it was a resubmission of something submitted before I was editor), that was handled by one of the associates. Again, I was blinded from the review process. In fact, a close colleague of mine with whom I publish a lot of my work has submitted to the journal and I have even blinded myself to that, even though I am not involved in that research. I wanted to ensure there was no suggestion of influence. I have actually had occasion to reject manuscripts submitted by associate editors – they know the score!

I was not aware until you pointed it out that Hauser was an associate editor back in 2002 – I did know he had been an editor, but I didn’t know the year. However, I do suspect that he had no involvement in the publication of his own paper. And as originally published, it was a good paper, worthy of publication in a journal such as Cognition. Of course, now we know differently. But I do not believe there could have been any impropriety in the manner of the paper’s publication. Whether there has been impropriety in the handling/analysis of the data is another matter, and one that Harvard University might wish to clarify, but I have no direct information one way or another.

That email response from Dr. Altmann is a handbook on how to discuss a sticky situation in a calm and cool manner, while simultaneously being transparent about policies and the facts. Kudos for extracting some fantastic insights.

It will be interesting to note whether further scrutiny will lead to more retractions or if this is an isolated, albeit regrettable, lapse.

It will always be contentious whether it is advisable for co-authors to rely on data summaries rather than raw data, although it is understandable that there is the perception that there may not be adequate time or resources to work with raw data. It would be absurd if the ‘responsibility’ disclaimers had to be amended to read, “X is responsible for data analysis and interpretations but this responsibility is circumscribed to work performed on data summaries that were performed by Y”.

> there may not be adequate time or resources to work with raw data.

They have all the time in the world. What’s the hurry? Conference deadlines?!

> It would be absurd if the ‘responsibility’ disclaimers had to be amended to read, “X is responsible for data analysis and interpretations but this responsibility is circumscribed to work performed on data summaries that were performed by Y”

Maybe I am not the target audience for this blog, but to me the single most important question about a retracted paper is not ‘Why was it retracted?’ but rather ‘What was it supposed to have proven, that I should not rely upon anymore?’

If I have to click through to find that out then these posts don’t seem that useful to me. Just one person’s perspective.

Gary Marcus, a psychology professor at New York University and one of the co-authors of the paper, said he drafted the introduction and conclusions of the paper, based on data that Hauser collected and analyzed.

“I never actually saw the raw data, just his summaries, so I can’t speak to the exact nature of what went wrong.’’

————————

Looks like Dr. Marcus was conned. He probably thought he was getting to pad his CV with a good publication for free. All he had to do was “draft” the introduction and conclusions.

No excuses though. You’re on the paper, you’re responsible for its contents. “I was just a bogus author” is no excuse.

I disagree. Co-authorship inevitably relies on some degree of trust. For example, I’m second author on a paper because I analyzed the data from one particular experiment out of several in that article; the raw data were extensive and very boring, the final results quite interesting. My coauthors trusted me to do the analysis competently and report the results honestly. I don’t see how else it could have worked short of them doing it themselves; take that to its logical extreme and all papers would have only one author.

I think it’s certainly true that co-authors are rarely blameless in cases like this, but realistically, we’ve all been in the same situation. It’s a case of “he who is without sin,” and I for one am not going to cast the first stone, because I suspect I’d have done the same thing.

Drafting the introduction and conclusion requires one to frame the research question and its relevance, as well as to interpret the results of the study. These are crucial steps in the scientific process – no more and no less important than the actual collection or analysis of the data.

I have co-authored scientific papers in which each author performed a particular task. I have played all the roles on various papers – writing the intro, collecting the data, and analyzing the data. There is definitely a level of trust required; that’s part of the collaborative process. It is much too onerous for every author to double-check every data point. Most researchers are quite honest, and these sorts of cases are exceedingly rare. Consider the vast number of scientific publications produced every year and how few are ever retracted.

Dr. Marcus should not be vilified if the allegations regarding Dr. Hauser’s contribution prove to be true. He did something that is very typical and acceptable in the scientific world, and it is within reason for him to claim that he did not perform the work that is currently in question.

However, if this sort of situation begins to occur more frequently, journals may need to require coauthors to outline who did what for the paper. That practice would also (hopefully) decrease the possible exploitation of graduate students and postdocs.

Funny you should mention grad students. It is typical of inexperienced grad students to have their supervisor write the introduction and conclusions for them. But why would someone as experienced and knowledgeable as Hauser need someone to write the intro and conclusions for him? Isn’t that a little strange? Was he trying to lend more authority to the claims by having well-known names on the paper? It is a little strange also that Dr. Marcus did not find the offer suspicious.

If two scientists meet and develop an idea together, then scientist A carries out the study (collects & analyses the data) while scientist B drafts the intro and discussion of the paper, it would make perfectly good sense for A and B to both be authors on the final paper. The two could be a grad student and PI but could equally well be two collaborators at similar career stages. I don’t think there is anything suspicious about that.

if two scientists meet and develop an idea together, then scientist A carries out the study (collects & analyses the data) while scientist B drafts the intro and discussion of the paper, it would make perfectly good sense for A and B to both be authors on the final paper.

=================

Of course. But the impression I got by reading Dr. Marcus’s response was that he had minimal involvement with the work – that’s how some people get to have so many co-authored papers, after all. And if he had developed the ideas together with Dr. Hauser, I would think that he would have been curious enough to want to view the video recordings, especially given the revolutionary nature of the results.

What’s wrong with lots of coauthored papers? Great research is done through collaboration – the old adage “two heads (or three or four) are better than one.” There’s nothing wrong with splitting the load, and there’s no reason why Marcus should be expected to review hours of videotape. Posing a research question and interpreting results is extremely valuable and warrants coauthorship. If everyone behaves honestly (and I do not intend to condemn Hauser – I would rather see more evidence), it works beautifully.

Let’s revisit the quote from the retraction:
“MH accepts responsibility for the error.”

And Marcus’s quote: “Professor Hauser alerted me that he was concerned about the nature of the data, and suggested that there were problems with the videotape record of the study. I never actually saw the raw data, just his summaries, so I can’t speak to the exact nature of what went wrong.”

On the basis of this information, Marcus does not have any culpability.