Regular readers of my blog will know that I have been beating the drum for reform for quite a while. I absolutely think psychology in general, and perhaps social psychology especially, can and must work to improve its methods and practices.

But in reading the commission’s press release, which talks about “a general culture of careless, selective and uncritical handling of research and data” in social psychology, I am struck that those conclusions are based on a retrospective review of a known fraud case — a case that the commission was specifically charged with explaining. So when it wags its finger about a field rife with elementary statistical errors and confirmation bias, it’s a bit much for me.

I am writing this as a first reaction based on what I’ve seen in the press. At some point when I have the time and the stomach I plan to dig into the full 100-page commission report. I hope that — as is often the case when you go from a press release to an actual report — it takes a more sober and cautious tone. Because I do think that we have the potential to learn some important things by studying how Diederik Stapel did what he did. Most likely we will learn what kinds of hard questions we need to be asking of ourselves — not necessarily what the answers to those questions will be. Remember that the more we are shocked by the commission’s report, the less willing we should be to draw any sweeping generalizations from it.

So let’s all take a deep breath, face up to the Stapel case for what it is — neither exaggerating nor minimizing it — and then try to have a productive conversation about where we need to go next.

6 thoughts on “What is the Dutch word for ‘irony’?”

We need to let go of publication pressure, of time pressure and efficiency assessment of researchers in general, and provide PhD students with incentives to do research in a way that checks for robustness, that explores potentially unfruitful avenues, and that doesn’t necessarily lead to a handful of publications after 4 years. Research does not have to be productive or efficient, it has to lead to knowledge we can rely on to be true or know to be false.

When will you have time to read the full report? Between the two reviews you have to do this morning? Or between seeing the 10 PhD students that work for you? Science needs time, unproductive time, or at least unproductive in the sense measured by publications. The most important thing in science is not the shop window but the messy workshop, which is, and needs to be, much vaster than the shop window might lead one to suspect.

@Bert: This is the best comment on this issue I have seen. I am so tired of the current more, more, more mentality in academia – and the emphasis on popularity over substance (“I’d rather be wrong and cited than right and ignored” – a comment by one of my social psych colleagues).

The word is “ironie”, and I don’t think it totally applies here. Social psychology had a worse reputation than other fields before this (and for a reason!), it’s not all 20/20 hindsight. That’s not to say that all social psychologists are like this, mind you. All this talk of this being a systemic problem: I believe it. But sometimes I also think it would already be an improvement if there was not as much ingroup unwillingness to talk ill of a colleague on the record. Some are caught up in a bad system, but some deserve it.

I’m with you — the press release, to me, is just another example of an unproductive divide between people who think the sky is falling and those who think there’s nothing wrong with the field. My hope, and I haven’t read the original document either, is that the full review considers these issues more judiciously, while the press release falls prey to pulling out the more sensational aspects for a more compelling story.

Much like you say in the post, I’m a big proponent of reform, but even stories like this make me recoil and take up a defensive posture against the field being unfairly smeared. That sort of attack is not the way to get people to be open to change. I actually quite liked what David Brooks had to say on the topic in his op-ed a few days ago — http://www.nytimes.com/2012/11/27/opinion/brooks-how-people-change.html — it’s about a father’s disappointed email to his kids; here’s a snippet:

“The problem, of course, is that no matter how emotionally satisfying these tirades may be, they don’t really work. You can tell people that they are fat and that they shouldn’t eat more French fries, but that doesn’t mean they will stop. You can make all sorts of New Year’s resolutions, earnestly deciding to behave better, but that doesn’t mean you will.

People don’t behave badly because they lack information about their shortcomings. They behave badly because they’ve fallen into patterns of destructive behavior from which they’re unable to escape.

Human behavior flows from hidden springs and calls for constant and crafty prodding more than blunt hectoring. The way to get someone out of a negative cascade is not with a ferocious e-mail trying to attack their bad behavior. It’s to go on offense and try to maximize some alternative good behavior. There’s a trove of research suggesting that it’s best to tackle negative behaviors obliquely, by redirecting attention toward different, positive ones.”

I did read the report and have been reading interviews with the committee in Dutch today.

They do state that their findings are based on a non-random subsample of soc psy research. On the other hand, they also remark that the 70 co-authors they interviewed painted a picture of sloppy science, which is further substantiated by reviewers who not only failed to notice this sloppy behavior but even told authors to drop conditions, DVs, and other measurements that were not significant because that would improve the paper.

I do not think they blame Diederik’s fraud on the field in general; they are quite explicit in saying that, while investigating Stapel’s fraud, they ran into these issues, that they think this is not how proper research should be conducted, and that it is important to address this.

Furthermore, they have stated that they assume this culture is not limited to soc psy but is a problem with the way science in general is conducted.

One of the members of the committee pointed his finger at the journals yesterday, claiming that their demand for sexy effects and pressure on researchers to selectively report only “what worked” plays a large role in this culture.

So in short: I think they are a bit more nuanced than would maybe appear in American news at the moment.

This is a tough issue, just from seeing the reactions here and on Facebook. Ultimately, I think it is useful to take a step back and digest the report as Sanjay suggests. I only skimmed it and I had a range of reactions. The initial reaction was feeling ill about the scope of the fraud. Right now, however, I am entertaining the thesis that this event is played out.

Indeed, a number of people have outlined a range of positive changes that could improve the rigor of the field, and hopefully many of these will be adopted. I also think we need to routinely record what each person did for each paper so co-authors have some cover if they are unlucky enough to collaborate with a fraud. It will also make it easier (but not necessarily easy) to judge scholarly contributions for promotion and tenure (P&T) committees and hiring committees. This could go in the Author Note of each paper. What else needs to be said or learned?

One last thing – I really hope no one buys Stapel’s book and I hope he does not try to launch a comeback. Sadly, I can imagine an alternate universe in which he writes about the unimaginable pressures of the modern university and describes how he has received treatment for whatever condition prompted his behaviors so he is ready to contribute to the field he loves as penance. He should receive a Pete Rose-style lifetime ban.