“The Dark Side of Power Posing”

Shravan points us to this post from Jay Van Bavel from a couple years ago. It’s an interesting example because Van Bavel expresses skepticism about the “power pose” hype, but he makes the same general mistake as Carney, Cuddy, Yap, and other researchers in this area: he overreacts to every bit of noise that’s been p-hacked and published.

Here’s Van Bavel:

Some of the new studies used different analysis strategies than the original paper . . . but they did find that the effects of power posing were replicable, if troubling. People who assume high-power poses were more likely to steal money, cheat on a test and commit traffic violations in a driving simulation. In one study, they even took to the streets of New York City and found that automobiles with more expansive driver’s seats were more likely to be illegally parked. . . .

Dr. Brinol [sic] and his colleagues found that power posing increased self-confidence, but only among participants who already had positive self-thoughts. In contrast, power posing had exactly the opposite effect on people who had negative self-thoughts. . . .

In two studies, Joe Cesario and Melissa McDonald found that power poses only increased power when they were made in a context that indicated dominance. Whereas people who held a power pose while they imagined standing at an executive desk overlooking a worksite engaged in powerful behavior, those who held a power pose while they imagined being frisked by the police actually engaged in less powerful behavior. . . .

In a way I like all this, because it shows how the capitalize-on-noise strategy that worked so well for the original power-pose authors can also be used to dismantle the whole idea. So that’s cool. But from a scientific point of view, I think there’s so much noise here that any of these interactions could well go in the opposite direction. Not to mention all the unstudied interactions, and all the interactions that happened not to be statistically significant in these particular small samples.
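To make the noise argument concrete, here’s a minimal simulation of my own (a sketch, not anything from the post or from the studies it describes): the outcome is pure noise, the sample is small, and each simulated “study” tests ten candidate moderator interactions. Even with no real effects anywhere, a sizable share of datasets turn up at least one “statistically significant” interaction.

```python
# Pure-noise simulation: small samples plus many candidate interactions
# routinely produce at least one p < 0.05 result. All numbers here
# (n = 50, k = 10 moderators) are illustrative choices, not from any study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, n_sims = 50, 10, 2000   # subjects, candidate moderators, simulated studies
alpha = 0.05

hits = 0
for _ in range(n_sims):
    treat = rng.integers(0, 2, n)              # power pose vs. control
    y = rng.normal(size=n)                     # outcome is pure noise
    moderators = rng.integers(0, 2, (n, k))    # ten binary moderators
    found_sig = False
    for j in range(k):
        # OLS with an interaction term: y ~ 1 + treat + mod + treat*mod
        x = np.column_stack([np.ones(n), treat, moderators[:, j],
                             treat * moderators[:, j]])
        beta, res, *_ = np.linalg.lstsq(x, y, rcond=None)
        df = n - x.shape[1]
        sigma2 = res[0] / df
        se = np.sqrt(sigma2 * np.linalg.inv(x.T @ x)[3, 3])
        t_stat = beta[3] / se                  # t-test on the interaction
        if 2 * stats.t.sf(abs(t_stat), df) < alpha:
            found_sig = True
            break
    hits += found_sig

share = hits / n_sims
print(f"share of pure-noise studies with >=1 'significant' interaction: {share:.2f}")
```

With ten roughly independent tests at the 5% level, you’d expect something like 1 − 0.95¹⁰ ≈ 0.40 of pure-noise studies to hand you a publishable-looking interaction, which is the point: finding such an interaction is weak evidence that it exists in the population.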

I’m not trying to slam Van Bavel here. The above-linked post was published in 2013, before we were all fully aware of how easy it is for researchers to get statistical significance from noise, even without trying. Now we know better: just cos some correlation or interaction appears in a sample, we don’t have to think it represents anything in the larger population.