When Retraction Watch readers think of problematic psychology research, their minds might naturally turn to Diederik Stapel, who now has 54 retractions under his belt. Dirk Smeesters might also tickle the neurons.

But a look at our psychology category shows that psychology retractions are an international phenomenon. (Remember Marc Hauser?) And a new paper in the Proceedings of the National Academy of Sciences (PNAS) suggests that it’s behavioral science researchers in the U.S. who are more likely to exaggerate or cherry-pick their findings.

The authors analyzed 1,174 primary outcomes appearing in 82 meta-analyses published in health-related biological and behavioral research, sampled from the Web of Science categories Genetics & Heredity and Psychiatry, and measured how individual results deviated from the overall summary effect size within their respective meta-analysis.
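The core measurement is straightforward to sketch. Assuming a simple fixed-effect (inverse-variance weighted) summary — the paper's actual models are more elaborate and control for covariates, so this is only an illustrative toy — each study's deviation from its meta-analysis's summary effect can be computed like this:

```python
# Illustrative sketch only: assumes a fixed-effect, inverse-variance
# weighted summary; the paper's analysis is considerably more involved.

def summary_effect(effects, variances):
    """Inverse-variance weighted (fixed-effect) summary of study effects."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def deviations(effects, variances):
    """Each study's deviation from the summary, scaled by its standard error."""
    mu = summary_effect(effects, variances)
    return [(e - mu) / v ** 0.5 for e, v in zip(effects, variances)]

# Toy meta-analysis: three studies, one reporting a notably larger effect.
effects = [0.2, 0.25, 0.8]
variances = [0.01, 0.02, 0.04]
print(deviations(effects, variances))
```

A study that systematically "deviates in the predicted direction" would show up as a run of large positive values in such a list, which is the kind of signal the paper aggregates across meta-analyses.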

Studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters.

But they didn’t find the same to be true for non-behavioral studies.

Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.

So where might this predisposition come from? The authors write:

A complete explanation would probably invoke a combination of cultural, economic, psychological, and historical factors, which at this stage are largely speculative. Our preferred hypothesis is derived from the fact that researchers in the United States have been exposed for a longer time than those in other countries to an unfortunate combination of pressures to publish and a winner-takes-all system of rewards (20, 22). This condition is believed to push researchers into either producing many results and then only publishing the most impressive ones, or making the best of what they have by making it seem as important as possible, through post hoc analyses, rehypothesizing, and other more or less questionable practices (e.g., 10, 13, 22, 26). Such a pattern of modulating forces may gradually become more prevalent in other countries currently and in the near future (18, 20, 21).

We asked Fanelli whether the combination of his findings and cases like Stapel’s suggests that US behavioral scientists are more likely to exaggerate, while some EU behavioral scientists are more likely to just make it up. He said it was “an interesting hypothesis” that “could eventually be tested.”

But I currently wouldn’t think so. Evidence from surveys and other sources suggests that fabrication and falsification are just the extremes of a continuum, and that the bulk of the problem lies in questionable and/or unconscious research choices. Behind the US effect there might be some misconduct and some intentional bias, although I think that for the most part this phenomenon escapes the conscious awareness of any individual researcher. The implication, if the above is true, is that misconduct might actually be slightly higher in the US compared to other countries.

However, the point here is more general. Whether the US effect is voluntary or not, the end result of these choices is the same: a higher rate of false positives and exaggerated findings, which in the future might need corrections and in some cases retractions.

And Fanelli was also quick to point out that this kind of exaggeration doesn’t seem to be exclusive to the U.S.

The US are an ideal subject because they are relatively homogeneous and yet very big and scientifically productive, so it was easy for us to compare the US to the rest of the world. And of course the US-effect was especially interesting, since it helped us exclude classic explanations, such as editorial biases and simple file-drawer effects. But we suspect that with higher statistical power we would observe specific biases in other countries, in Europe and elsewhere, possibly limited to specific fields and periods in time.

Before opening the floor to what we hope will be a robust discussion, we’ll close with the lovely description of science that opens the paper:

Science is a struggle for truth against methodological, psychological, and sociological obstacles.

We desperately need specialists from every field of study to come forward and start quantifying the fraud, the misconduct, and the shenanigans by scientists and by publishers through formal, open access publications like this one. Hats off to Fanelli for speaking the truth and for quantifying his statements.

For some reason Stapel is too kind here.
There are thousands of research publications in the field of social psychology that we cannot trust because of the possibility of exaggeration and publication bias. The stack should be cleaned up.
The journal Psychological Science last year published an insane paper by Lewandowsky on climate science skeptics. This should be a far bigger scandal than “Stapel”, but it is not. Honest scientists, please stand up; otherwise your field should not be taken seriously.

The key has to be that “peer review” starts when you submit a paper and continues after its publication. PubPeer (www.Pubpeer.com), an early step in developing this process, seems to be having some success in this respect.