David DiSalvo is the author of "Brain Changer: How Harnessing Your Brain’s Power to Adapt Can Change Your Life" and the best-selling "What Makes Your Brain Happy and Why You Should Do the Opposite", which has been published in 10 languages. His work has appeared in Scientific American Mind, Forbes, Time, Psychology Today, The Wall Street Journal, Slate, Salon, Esquire, Mental Floss and other publications, and he’s the writer behind the widely read science and technology blogs “Neuropsyched” at Forbes and “Neuronarrative” at Psychology Today. He can be found on Twitter @neuronarrative and at his website, daviddisalvo.org. Contact him at: disalvowrites [at] gmail.com.

How One Flawed Study Spawned a Decade of Lies

In 2001, Dr. Robert L. Spitzer, psychiatrist and professor emeritus of Columbia University, presented a paper at a meeting of the American Psychiatric Association about something called “reparative therapy” for gay men and women. By undergoing reparative therapy, the paper claimed, gay men and women could change their sexual orientation. Spitzer had interviewed 200 men and women who identified as former homosexuals and who, he claimed, had shown varying degrees of such change; all of the participants provided Spitzer with self reports of their experience with the therapy.

Spitzer, now 79 years old, was no stranger to the controversy surrounding his chosen subject. Thirty years earlier, he had played a leading role in removing homosexuality from the list of mental disorders in the association’s diagnostic manual. Clearly, his interest in the topic was more than a passing academic curiosity – indeed, it wouldn’t be a stretch to say he seemed invested in demonstrating that homosexuality was changeable, not unlike quitting smoking or giving up ice cream.

Fast forward to 2012, and Spitzer is of quite a different mind. Last month he told a reporter with The American Prospect that he regretted the 2001 study and the effect it had on the gay community, and that he owed the community an apology. And this month he sent a letter to the Archives of Sexual Behavior, which published his work in 2003, asking that the journal retract his paper.

Spitzer’s mission to clean the slate is commendable, but the effects of his work have been coursing through the homosexual community like acid since it made headlines a decade ago. His study was seized upon by anti-homosexual activists and therapists who held up Spitzer’s paper as proof that they could “cure” patients of their sexual orientation.

Spitzer didn’t invent reparative therapy, and he isn’t the only researcher to have conducted studies claiming that it works, but as an influential psychiatrist from a prestigious university, his words carried a lot of weight.

In his recantation of the study, he says that it contained at least two fatal flaws: the self reports from those he surveyed were not verifiable, and he didn’t include a control group of men and women who didn’t undergo the therapy for comparison. Self reports are notoriously unreliable, and though they are used in hundreds of studies every year, they are generally regarded as thin evidence at best. Lacking a control group is a fundamental no-no in social science research across the board. The conclusion is inescapable — Spitzer’s study was simply bad science.

What’s remarkable is that this classic example of bad science was approved for presentation at a conference of the leading psychiatric association, and was subsequently published in a peer-reviewed journal of the profession. Spitzer now looks back with regret and critically dismantles his work, but the truth is that his study wasn’t credible from the beginning. It only assumed a veneer of credibility because it was stamped with the imprimatur of his profession.

Why this occurred is a bit more complicated than a mere case of professional cronyism. For many years before his paper on reparative therapy, Spitzer had conducted studies that evaluated the efficacy of self-reporting as a tool to assess a variety of personality disorders and depression. He was a noted expert on the development of diagnostic questionnaires and other assessment measures, and his work was influential in determining whether an assessment method was valuable or should be discarded.

Little wonder, then, that his paper on reparative therapy – which used an interview method that Spitzer recognized as reliable – was accepted by the profession. This wasn’t just anyone claiming that the self reports were valid; it was one of the most highly regarded diagnostic assessment experts in the world.

Reading the study now, I’m sure Spitzer is embarrassed by its flaws. Not only did he rely on self reports, but he conducted the participant interviews by phone, which escalates unreliability to the doesn’t-pass-the-laugh-test level. By phone, researchers aren’t able to evaluate essential non-verbal cues that might cast doubts on verbal responses. Phone interviews, along with written interviews, carry too much guesswork baggage to be valuable in a scientific study, and Spitzer certainly knew that.

The object lesson worth drawing from this story is that just one instance of bad science given the blessing of recognized experts can lead to years of damaging lies that snowball out of control. Spitzer cannot be held solely responsible for what happened after his paper was published, but he’d probably agree now that the study should never have been presented in the first place. At the very least, his example may help prevent future episodes of the same.

Comments

Actually, it would seem Dr. Spitzer was not invested in the viewpoint that homosexuality was treatable, based on his involvement with removing it from the list of mental disorders. Unless you have your reporting backwards, in order for that to be some sort of “proof” for your take, Dr. Spitzer would have had to be AGAINST removing homosexuality from the list, not for it.

I’m struck by the similarities between this paper and The Lancet’s publication of Andrew Wakefield’s original paper linking vaccines and autism (which was arguably even more flawed relative to the standards of its field). But given the pressure on journals to raise their impact scores, are we better or worse off in this regard? Is this more or less likely to happen again?

You’re right. Science is phooey and one bad report is evidence enough to dismiss anything a scientist says. Let’s not be independently skeptical on a case-by-case basis, let’s just go for an all-or-nothing approach. Let’s meet to discuss this over some book burning and bloodletting.