Are we removing the human from psychology?

What is science to you?

To me, science is where truth stands tall, looming over all petty human skirmishes and biases. Scientists test each other’s methods, making sure each experiment produces the same results no matter who does it, where, and when. But rigorous checking only propels them to do better. In an ideal world, science is the focus — not the scientists.

————

At some point this year, I had to choose between majoring in Neuroscience or Psychology. I’ve always had a problem with psychology. So much of it is based on theory upon theory, and who said what. There is little agreement on who is actually right — only the pros and cons of each idea. To me, the facts are lacking.

A similar realisation swept the world six years ago. The concept of power poses, the idea that certain postures give rise to higher confidence, had just been born. Around this time, a variety of bizarre findings were coming to light — mainly in the field of psychology. You may remember headlines like: Ovulating women are more likely to wear red! Male college students with fat arms probably have particular political stances! CHOCOLATE MAKES YOU LOSE WEIGHT!

Everything changed when three dissatisfied psychologists published False-Positive Psychology. They showed just how easy it was for researchers to achieve results that were unlikely to be true. They called this p-hacking, a widespread abuse of statistical methods that includes the look-elsewhere effect and cherry picking. In 2015, another Science article revealed that only 36% of 100 studies published in psychology journals were replicable — as opposed to 50% in medicine journals.

Why is this important? When researchers obtain results, they need to consider whether those results reflect a phenomenon that actually exists. For example, the finding that certain poses make people feel more powerful could be an effect that isn’t real at all, one that only appeared because of a badly designed study: a false positive.

It’s like getting some friends to laugh at your bad joke then claiming that your joke is universally funny. Or if a doctor did a routine checkup on your uncle and told him he was pregnant. If other scientists carry out the same study, but cannot get the same results, then it could also point towards a “false positive”.

“Well, uncle, you see, it’s because you have back pain and have trouble walking. Your belly is pretty huge too. It just makes sense that you’re pregnant.” – An example of a false positive. Credit: Flickr (Spyros Papaspyropoulos)
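
The mechanics of p-hacking are easy to see in a few lines of code. Here is a minimal sketch I’m adding purely for illustration (a toy simulation, not the methodology of any study mentioned here): every “study” is pure noise, yet a researcher who quietly tests two extra outcome measures and reports whichever comparison looks significant will find far more “effects” than chance should allow.

```python
import random
import statistics

def welch_t(a, b):
    # Welch's t statistic for two independent samples
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

def significant(a, b, crit=2.0):
    # roughly p < .05, two-sided, for 30 subjects per group
    return abs(welch_t(a, b)) > crit

random.seed(1)
n, sims = 30, 2000

def sample():
    # The null is true: every measurement is noise from the same distribution.
    return [random.gauss(0, 1) for _ in range(n)]

honest = hacked = 0
for _ in range(sims):
    primary = significant(sample(), sample())
    honest += primary
    # "p-hacking": also test two extra, unrelated outcome measures and
    # call the study a success if ANY of the three comparisons is significant
    hacked += primary or any(significant(sample(), sample()) for _ in range(2))

print("honest false-positive rate:", honest / sims)  # stays near .05
print("hacked false-positive rate:", hacked / sims)  # climbs toward .14
```

Testing three outcomes instead of one nearly triples the chance of a fluke — exactly the kind of quiet inflation False-Positive Psychology warned about.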

Before this point, new discoveries in psychology outpaced the validation of those findings — but statistics has always influenced psychology, ever since the lady tasting tea experiment. It was previously considered bad practice to draw inferences from a lack of results. For example, had power poses yielded no effect, we could not have explained why for certain. Now, through Bayesian statistics, we can draw sound conclusions even from research that shows no effect. Projects like Retraction Watch and Data Colada also keep studies in check. It would be unfair to say that psychology hasn’t progressed extensively since Freud.
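
To make that point about null results concrete: a non-significant p-value can never say “there is no effect”, but a Bayes factor can positively favour the null. Here is a minimal sketch with made-up numbers (a hypothetical coin-flip experiment, not the power-pose data):

```python
from math import comb

# Hypothetical experiment: 52 "successes" in 100 trials.
heads, flips = 52, 100

# H0: no effect (success probability is exactly 0.5)
likelihood_h0 = comb(flips, heads) * 0.5 ** flips

# H1: some effect, with a Uniform(0, 1) prior on the success probability.
# Averaging the binomial likelihood over that prior gives 1 / (n + 1).
likelihood_h1 = 1 / (flips + 1)

bf01 = likelihood_h0 / likelihood_h1
print(f"Bayes factor in favour of the null: {bf01:.1f}")
```

A Bayes factor of about 7 in favour of the null counts as modest but genuine evidence that nothing is going on, which a p-value alone can never provide.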

Then, as the complicated oxymorons that humans tend to be, we took one step forward and two steps back. The moment statistics revolutionised the field of psychology for the better, scientists became the target of science.

This is how Amy Cuddy, co-author of the power posing study, toppled from grace. She had followed similar methodologies to other psychologists at the time. But because she gave one of the most watched TED talks ever, she fell under global scrutiny after False-Positive Psychology was published and attempts to replicate her study failed.

Cuddy became an icon for bad science, condemned by the rising wave of statisticians eager to showcase the power of their analyses. Her co-author renounced the findings of the power pose study. Fellow academics distanced themselves, silent and afraid.

In an ideal world, science is the focus — not the scientists.

How do we encourage others to pursue academia if scientists are quick to attack one another instead of the science itself? The problems with psychology lie as much within the research as with the researchers involved. In a field detailing the complexity of human behaviour, it’s not surprising that its results are just as messy — especially when the scientists themselves are prone to this complexity.

It seems that as long as humans are involved, perhaps science is inevitably personal.

But when all is said and done, psychology is a science — verifying the results that come forth is the first step of many. It’s up to us to separate the scientist from the science, to recognise the extent to which criticism should be personal, and ultimately strive towards doing better science together.

Passionate about how to do good science? Read this and this. Join PsychMAP for lively discussions about psychological methodologies too!
This post was inspired by this New York Times article.

11 Responses to “Are we removing the human from psychology?”

Hi James, for sure! The most important issue here seems to be about whether research with more serious implications has been published despite engaging in “p-hacking”. It’s encouraging that more and more scientists are checking each other’s work. And yes – I usually do too, no matter how hopeful I am!

Hi Jamie,
Nice discussion! If true, I find it very concerning that only 50% of articles in medical journals are reproducible. Hopefully in the majority of cases this doesn’t translate to adverse outcomes, but as we’ve seen with vaccinations, research reaching or supporting an incorrect conclusion can have implications for society and public opinion, especially when reported in the media. We are still seeing outbreaks of preventable infections despite that research now being regarded as fraudulent. Thankfully, widespread belief in the effects of the power pose would be much more benign.

Also, it seems that observational studies are more prone to reproducibility issues than experimental studies. I usually react with resigned scepticism upon hearing that chocolate causes weight loss.

Hey Matt! That’s an intriguing question. The authors of False-Positive Psychology proposed a “p-curve” that would, at a glance, show whether a study was reliable. They’ve used it on their website Data Colada to evaluate several studies and “expose” scientists guilty of p-hacking, whether intentional or unintentional. This quantifiable measure of a false positive could deter scientists from engaging in bad scientific practices. Otherwise, projects like Retraction Watch keep tabs on which scientists have published consistently unreliable papers – a sort of blacklist, if you will. As far as I know, there is currently no legal consequence for these scientists, nor are they held accountable to a national board or committee of some sort and given concrete punishments for p-hacking. Let me know if I’m mistaken! Hopefully I’ve answered your question.

I don’t think Cuddy’s intent was misaligned, and I am dubious whether she actually committed malpractices like p-hacking. If she did, it would have been irresponsible of her to publicise the work on the TED stage. It would have been fine for her to believe fully in her results, but broadcasting them in a speech like that would have been on par with TV proclaiming that “eating chocolate cures cancer”.

Indeed, she was unfortunate to be made the scapegoat for bad science, and I’m happy to see that she’s a huge advocate for good science.

Thanks for taking the time to read it! You’re exactly right. I think that would be best in conjunction with measures of checking reproducibility and could allow for a fairer judgement on each study as well as the scientist behind it. I can’t imagine how one would quantify that though as each methodology would have different aspects, each of which would have evolved differently. It would be highly complex.

You raise an excellent point about scientists who lack the reputation to garner the attention their work should receive. I don’t think it would be acceptable to publish anonymously, but to have each work peer-reviewed and evaluated on the basis of the study itself (without the peer-reviewers knowing who the author is) is a practical alternative for now! That being said, there are lots of flaws with the current peer-review system as well, so there is still the problem of scientists not getting attention for incredible and legitimate findings.

Indeed, as humans are not binary it would be foolish to seek a binary answer – the question is really: to what extent can we achieve objectivity? And how? It’s rather philosophical food for thought.

Hey Jamie!
I was well in on the bandwagon for the power pose, so the story about Amy Cuddy was really sad. Back in high school the teachers showed us the TED talk just for some confidence with an oral presentation coming up, but they also talked about the reproducibility crisis. It never occurred to me that her research also couldn’t be replicated. If it is a case of changing research methods then I’d agree with you on separating science from scientists, but I say that based on my judgement of ‘fairness’.
In any case, what could be the deterrent for scientists trying to gain fame through the media off false positives? Other than losing credit as a scientist?

Interesting read. Academics of all fields spend years at a time producing research that may ultimately turn out to be false, and sensationalism can quickly blow unverified assertions out of proportion.

Perhaps we could create a grading system for how closely the methodologies used reflect the best practices current at the publication date. This could at least show that errors in the findings are not specific to the author, but are due to the inadequacy of the field at the time (as was the case with Cuddy).

This brings to mind another issue: scientists who produce incredible findings may be completely ignored due to a lack of reputation (Zhuo-Hua Pan, arguably the inventor of the technique used to pioneer optogenetics, comes to mind). Removing the link between a scientist’s reputation and their work could enable a more objective evaluation of findings, but it also exposes a problem of accountability: if work has no bearing on reputation, what’s to stop one from producing sub-par results? Having your reputation at stake is good motivation to maintain a high standard.

Unfortunately, we may never be able to take a completely objective view towards science. It is far too human to judge a person by their work.

That’s a fair point! The thing about Cuddy’s story is that she was on par with the other psychologists of her time, using methodologies that most of them used (even her supervisor). However, because her research got so much attention, when the False-Positive Psychology paper came out she personally received much of the criticism directed at her generation of psychologists, so to speak. Critics like Andrew Gelman even started using her name whenever they referenced bad science. But in Cuddy’s defence, she believed she did the best she could at the time of publishing, and to this day she has never doubted her research, sticking to the claim that the media blew it out of proportion. There are also several issues with the replication attempts, as the scientists who tried to replicate her study altered its conditions. Today she’s a huge advocate for doing good science, accepts the limitations of her study, and is keen to find out more about the phenomenon. From this interview and this feature article, I believe she didn’t do her research for financial gain; otherwise she would probably have renounced it and moved on with her life, avoiding neverending criticism. That’s why I wrote this article from the angle of scientists who truly want to do good science but are unsuccessful in retrospect as methodologies advance, and how that affects their careers. I think you wrote an earlier blogpost from the opposite angle, of scientists who intentionally deceive for fame. Perhaps we’ve written about the same problem from two sides of the same coin?

I’d like to disagree on the point of dissociating scientists from their work. Wasn’t it the scientists’ poor scientific methodology that led them to sensationalise their results?

Cuddy shouldn’t have publicised her findings if she had any doubt in them – and I’m sure she had a lot of financial gain as a result of the TED talk. I’m thinking of other scientists who publicised their results for fame and gain, like Andrew Wakefield and Hwang Woo-suk.

Great to hear, JT. It really isn’t, but I suppose we can’t hope to change history — I’m glad that more and more research is being dedicated to checking the reproducibility of studies. But I think how we deal with the results of reproducibility is equally important. In tandem with this trend, perhaps it would be ideal to develop some sort of feedback system for scientists whose studies fail to replicate, one that doesn’t damage their reputations more than is fair.