
A new paper published in Perspectives on Psychological Science (open access) suggests there is “a fundamental design flaw that potentially undermines any causal inference” in much psychology research. The paper is very accessible and well worth a read. A flowchart in the paper lays out the hypothesis.

Way back in the 1950s a phenomenon called the Hawthorne effect (also known as the observer effect) was identified: an experiment to discover whether workers in the Hawthorne Works plant were more productive under increased lighting found that productivity rose whenever the workers were being observed, regardless of the variable in question. The effect is a common cause of the placebo effect, which in most areas of medicine is effectively handled by the gold standard of double-blind randomised controlled trials (RCTs). Unfortunately this is not always possible in psychology, not least because it is often impossible to blind a participant to the intervention, let alone the experimenter.

In psychology the problem is currently sometimes handled by using an active control, in which the control group is led to believe it is being experimented on too. The researchers behind the new paper argue that this is not good enough, because all too often a difference remains in the participants' expectations - an expectation effect. I've described studies with poorly chosen active controls before; they are everywhere. In such cases expectation effects may be the cause of the results found, which would completely alter a study's conclusions.

To demonstrate the severity of the problem, the researchers turn to video-gaming and brain-training research - not because the field is an example of poor research, but because it is a rare example of psychology research that actually uses active placebos. The original research in question demonstrated a benefit in performance on vision and attention tasks after playing an action video game (Unreal Tournament) rather than Tetris or The Sims. The researchers in the new study asked people (admittedly only via Amazon's Mechanical Turk) what improvements they would expect to see after playing Unreal Tournament, Tetris, or The Sims. The result was that people expected the very same benefits that were found in the original research. Participants expected more visual processing benefits from playing Unreal Tournament than from Tetris or The Sims, and more from Tetris than from The Sims; they also thought The Sims would bring more benefits to story-telling, and that Tetris would improve visual rotation more than Unreal Tournament would.
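To see how an expectation effect alone could produce a pattern like this, here is a toy simulation - every number in it is invented for illustration, not taken from the studies discussed. Participants receive no genuine training benefit; their measured scores simply drift toward what they expect to improve, plus noise. The group means then differ in exactly the way a naive analysis would report as a training effect.

```python
import random

random.seed(42)

def simulate_group(n, expected_improvement):
    # No genuine training effect: each score is a common baseline (100)
    # plus a bias proportional to the participant's expectation of
    # improvement, plus Gaussian noise. All parameters are illustrative.
    return [100 + 5 * expected_improvement + random.gauss(0, 3)
            for _ in range(n)]

# Hypothetical expectation ratings (0-1) for a visual-attention task,
# loosely mirroring the reported pattern: people expect more benefit
# from an action game than from a puzzle game.
action_group = simulate_group(50, expected_improvement=0.8)
tetris_group = simulate_group(50, expected_improvement=0.3)

mean = lambda xs: sum(xs) / len(xs)
print(f"action-game group mean: {mean(action_group):.1f}")
print(f"tetris group mean:      {mean(tetris_group):.1f}")
```

The "action-game" group comes out ahead even though the simulation contains no training effect at all - the group difference is driven entirely by the expectation term, which is precisely the confound the paper warns about.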

Could it be that there is no real effect that generalises outside the laboratory? Could it be that, while they were being observed, the gamers in the original experiment unconsciously let their behaviour be moulded by their own prior expectations?

If this is the case, then it is only the tip of the iceberg. Vast numbers of psychology experiments do not use active controls, let alone active controls that sufficiently rule out expectation effects. This will be no easy problem to fix. The researchers make some suggestions, many of which will likely be seen as impractical, but some of which may be taken up - the paper itself is valuable reading if you are a researcher. The problem requires a sea change in the way things are done, and even with the best efforts of researchers it will never go away entirely. You'd therefore be well advised to keep the possibility of expectation effects in the back of your mind whenever you read a piece of psychology research. The question to ask yourself is this: these results might look very impressive on paper, but what are they being compared to, and will they still apply to me in my living room - rather than to someone who knows they are being experimented on, might think they know why, and sits in a far-away laboratory under the watchful gaze of a scientist in a metaphorical white coat?