
P-Hacker Confessions: Daryl Bem and Me

Stuart Vyse is a psychologist and author of Believing in Magic: The Psychology of Superstition, which won the William James Book Award of the American Psychological Association. He is a fellow of the Committee for Skeptical Inquiry.

Cornell University psychologist Daryl Bem and I have something in common. Yes, we are both research psychologists, but that’s not what I mean.

For me, it started when I was just a young graduate student. Statistics courses are a standard part of graduate training in psychology, because statistical methods are still the coin of the realm in psychological research. Most graduate students are required to conduct empirical research as part of their doctoral dissertations, and if they go on to academic positions, they often continue to do quantitative studies throughout their careers. Training in statistics is important because statistical number-crunching techniques are how we determine whether our results mean anything. Most of my graduate school cohort hated anything that looked like math, but—to my surprise—I discovered that I liked statistics courses. I took more of them than were required, and my relatively strong background in stats was an important factor in landing an academic position. (Let that be a lesson to any psychology students who might be reading this.) In graduate school, I coached my math-phobic friends on how to enter data into the computer and analyze it, and in my academic life, I did the same with students and colleagues.

With all this background, I got to be pretty good at statistical consulting, and as a result, needy researchers often came knocking. Publishing trends are gradually changing, but even now, most studies need to report statistically significant results to have any chance of getting published. Journal editors are much less interested in studies in which nothing happened, so everyone is on a quest to achieve the vaunted p (for probability) < .05—meaning that, if chance alone were at work, results this extreme would turn up less than 5 percent of the time. When a friend’s research or my own appeared to have come up short, I was pretty good at salvaging something from the rubble. I might suggest altering the design of the study by combining data from previously separated groups of participants, or massaging the numbers in some way. These were techniques I’d learned at my mentors’ knees, and although we had some inkling that we were fudging the results a bit, we consoled ourselves by openly reporting the steps we’d gone through and supplying some plausible-sounding justification for each manipulation. We didn’t think we were doing anything wrong.
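The trouble with these salvage operations is easy to demonstrate by simulation. The sketch below (a hypothetical illustration, not any analysis from the article) runs thousands of "studies" in which two groups are drawn from the very same distribution, so there is no real effect to find. An honest researcher tests once and stops; a p-hacker who misses significance collects more data and tests again. Even that single extra peek pushes the false-positive rate above the advertised 5 percent.

```python
import math
import random

def significant(a, b):
    """Two-sample z-test: True if |z| > 1.96 (two-tailed p < .05)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return abs(z) > 1.96

def simulate(trials=10_000, n=50, seed=1):
    rng = random.Random(seed)
    honest_hits = hacked_hits = 0
    for _ in range(trials):
        # Both groups come from the SAME distribution: any "effect" is noise.
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        sig = significant(a, b)
        honest_hits += sig
        if not sig:
            # The hack: not significant? Run more participants, test again.
            a += [rng.gauss(0, 1) for _ in range(n)]
            b += [rng.gauss(0, 1) for _ in range(n)]
            sig = significant(a, b)
        hacked_hits += sig
    return honest_hits / trials, hacked_hits / trials

honest_rate, hacked_rate = simulate()
print(f"honest false-positive rate:   {honest_rate:.3f}")
print(f"with one extra peek at data:  {hacked_rate:.3f}")
```

The honest rate hovers near .05, as designed; a single round of "collect more data and retest" reliably inflates it, and each additional flexible decision—dropping outliers, combining groups, trying new covariates—inflates it further.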
