We were amazed and impressed to learn today that statistician Andrew Gelman had a sense that something was up with the article soon after it was published. In a December comment in the Washington Post, Gelman was flabbergasted by the size of the claimed result:

What stunned me about these results was not just the effect itself—although I agree that it’s interesting in any case—but the size of the observed differences. They’re huge: an immediate effect of 0.4 on a five-point scale and, after nine months, an effect of 0.8.

A difference of 0.8 on a five-point scale . . . wow! You rarely see this sort of thing. Just do the math. On a 1-5 scale, the maximum theoretically possible change would be 4. But, considering that lots of people are already at “4” or “5” on the scale, it’s hard to imagine an average change of more than 2. And that would be massive. So we’re talking about a causal effect that’s a full 40% of what is pretty much the maximum change imaginable. Wow, indeed.

and

And this got me wondering, how could this happen? After all, it’s hard to change people’s opinions, even if you try really hard. And then these canvassers were getting such amazing results, just by telling a personal story?

and

I say all this not to “debunk” or dismiss LaCour and Green’s work: I think their experiment is really cool, and it’s amazing they found such strong and consistent effects. What I’m trying to do here is understand these findings in light of all the other things we know about public opinion.
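Gelman's back-of-the-envelope arithmetic from the first excerpt can be reproduced in a few lines. The numbers below are the ones he states in the quote; the "plausible maximum average change" of 2 is his own ceiling-effect guess, not a number from the paper:

```python
# Gelman's ceiling-effect bound, using the figures from his quoted comment.
scale_min, scale_max = 1, 5
max_individual_change = scale_max - scale_min     # 4, the theoretical maximum on a 1-5 scale
plausible_max_avg_change = 2.0                    # his guess, since many respondents start at 4 or 5
observed_effect = 0.8                             # the claimed nine-month effect

fraction_of_max = observed_effect / plausible_max_avg_change
print(f"{fraction_of_max:.0%}")                   # prints "40%"
```

That is where the "full 40% of pretty much the maximum change imaginable" figure comes from: 0.8 divided by the roughly 2-point average shift he considers the realistic ceiling.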

That said, Gelman didn't cry foul. He accepted the result as real and tried to come up with reasons why it might have happened. But he sensed, even back in December, that something was unusual, and we're quite impressed by that.

We hope that the next time something like this happens, whoever is commenting remembers to add, "I would love to have a closer look at the dataset." That needn't imply any accusation. And apparently, in the LaCour case, the dataset is a smoking gun; that will normally be the case with fabrication.