For a long time, some psychologists have understood that their field has an issue with WEIRDness. That is, psychology experiments disproportionately involve participants who are Western, Educated and from Industrialised, Rich, Democratic societies, which means many findings may not generalise to other populations, such as, say, rural Samoan villagers.

In a new paper in PNAS, a team of researchers led by Mostafa Salari Rad decided to zoom in on a leading psychology journal to better understand the field’s WEIRD problem, evaluate whether things are improving, and come up with some possible changes in practice that could help spur things along.

Looking at the participant groups in the subset of the 2014 articles in which authors included demographic information, “57.76% were drawn from the US, 71.25% were drawn from English-speaking countries (including the US and UK), and 94.15% … sampled Western countries (including English-speaking countries, Europe, and Israel).” The 2017 numbers weren’t much better.

So there’s clearly a problem. But, Rad’s team added, “[p]erhaps the most disturbing aspect of our analysis was the lack of information given about WEIRDness of samples, and the lack of consideration given to issues of cultural diversity in bounding the conclusions”. That is, the articles they examined all too often omitted information that could help other researchers note WEIRDness when it occurs, and all too often explicitly over-extrapolated findings drawn from WEIRD samples. Summing up these problems in the 2014 sample, Rad and his colleagues said: “Over 72% of abstracts contained no information about the population sampled, 83% of studies did not report analysis of any effects of the diversity of their sample (e.g., gender effects), over 85% of studies neglected to discuss the possible effects of culture and context on their findings, and 84% failed to simply recommend studying the phenomena concerned in other cultures, implying that the results indicated something generalizable to humans outside specific cultural contexts.”

The authors don’t just grumble about the problem – they offer some concrete potential fixes based on their findings:

“Required Reporting of Sample Characteristics” — It’s already the norm to report the gender breakdowns of experimental samples; Rad and his colleagues think that authors should be “required to report [other characteristics including] age, SES, ethnicity, religion, and nationality,” when it is practical and realistic to do so.

“Explicitly Tie Findings to Populations” — Less “We discovered X about people”, and more “We discovered X about a small group of undergraduates with the following demographic characteristics in New Haven, Connecticut.”

“Justify the Sampled Population” — Authors should have to explain why they chose the population they chose – and sometimes, as Rad et al note, yes, the answer will be convenience. That’s fine, within reason: The problem isn’t that college students are studied sometimes, it’s that they’re studied far too often.

“Discuss Generalisability of the Finding” – Similar to the point about populations above, the idea is simply that authors should explicitly discuss whether they expect a given finding to generalise beyond the experimented-upon population, and why.

“Analytical Investigation of Existing Diversity” – Even WEIRD samples often have some degree of diversity along certain dimensions, so here Rad and his colleagues suggest that authors check for the presence of diversity-related moderators: both gender (which is usually already reported) and other characteristics, such as race, which often aren’t. In other words, even if your experimental sample is mostly WEIRD, it could be informative to check whether, for example, the small handful of black participants produced different data from the rest of the group.

Recommendations for editors and reviewers

“Non-WEIRD = Novel and Important” – “Journal editors should instruct reviewers to treat non-WEIRDness as a marker of the interest and importance of a paper.”

“Diversity Badges” – Some journals already award “badges” when authors pre-register or engage in other open-science best practices. Badges for research centred on under-studied populations could be a nice little incentive-nudge.

“Diversity Targets” – It would be reasonable, argue the authors, to require that at least 50 per cent of published papers analyse non-WEIRD populations. Ideally the figure would be higher, but given that, as the numbers above show, the situation at the moment is pretty dire, 50 per cent would be a major improvement.

***

The above suggestions provide a solid jumping-off point for solving the WEIRD problem: any one of them could be debated, discussed, and potentially modified or implemented. The next step, then, will be to see whether journals – the most important arbiters when it comes to scientific standards – will take up the mantle.

At the risk of getting overly meta-psychological – discussing the psychological science of how psychological science is conducted – a great deal of human behaviour can be boiled down to the path of least resistance and to incentives. Often it’s not a deep-seated bias or a lack of concern about other groups that causes researchers to overlook non-WEIRD samples (though I’m sure both are sometimes factors), but rather it’s because college students are right there. As in, literally in the same buildings as the ones where most psych researchers work. You can just put up some flyers, and boom, you have a group you can experiment on! It’s all too tempting. And it’ll take some incentive-shifting – shifts to editors’ behaviour, or the possibility of earning badges, or whatever else – to get researchers out of their WEIRD rut.

Post written by Jesse Singal (@JesseSingal) for the BPS Research Digest. Jesse is a contributing writer at BPS Research Digest and New York Magazine. He is working on a book about why shoddy behavioral-science claims sometimes go viral for Farrar, Straus and Giroux.

6 thoughts on “Psychology research is still fixated on a tiny fraction of humans – here’s how to fix that”

The first person that comes to my mind is Maslow. Normal is boring. Weird sells books. We don’t have to worry about people who fall in line and follow the leader. We have to worry about people who march to the beat of a different drummer.

Here are some additional thoughts:
(I could write a LOT about this)
1. Editors have told me my papers about learning African languages are “not that interesting to the majority of readers” (paraphrased).
2. You really cannot ask people to go and replicate or run another condition in a village in Africa.
3. On that note, the “publish quickly, publish lots” culture works massively against even “typical” non-university population studies. If you research memory and recruit, say, minimum-wage workers in a Western country rather than students, it takes way longer.
4. It is assumed that the researchers doing this stuff are Western and the “observed” are not. Don’t make that assumption. Do science WITH, NOT AT, the countries you are interested in.
5. Train researchers in developing countries. Don’t employ fieldworkers, fund PhDs.
6. Also, don’t offer funding for researchers/lecturers in resource poor countries to go to conferences. They don’t have data. They need time (or RAs) out of their HUGELY BUSY teaching load.
7. In most developing countries, psychology programmes either don’t exist or are limited to clinical/counselling. Set up a workshop on cognitive, social or developmental psychology and invite a SALT/linguistics/education/counselling psychology professor plus their students.
And my final point:
8. [redacted swear word] DON’T JUST TRANSLATE MATERIALS. Listen to your researchers (& they won’t speak up if you barge in all North American). Kids are afraid of white dolls. They don’t have stairs in their homes. Etc.