When I encounter popular reporting on scientific claims, I go through a checklist to assess its credibility. I do not try to evaluate the methods myself, because I cannot rigorously critique science outside of my own field.

1. I have read the paper, and found that it matches the Guardian summary.

2. The paper has a lengthy literature review, which indicates that the authors have at least a basic knowledge of the field and respect for scientific norms.

3. The paper making the claim is peer reviewed in what appears to be a reputable journal.

4. The authors, Peterson and Palmer, are professors of political science with previous scientific publications on related topics. They do not appear to be cranks.

All of the above is encouraging. I will take their research seriously, but science can be messy or wrong, and studies can reach false conclusions. I usually don't fully believe a result unless it has been reproduced and there is a scientific consensus behind it. Unfortunately, replication studies are rare, and it is unlikely that anyone will replicate this one. For now, we have to be content with seeing how other scientific authors cite this paper, which functions as a secondary form of peer review.

In the year and a half since this article was published, 5 other publications have cited it; however, none of them appears to be a scientific journal article, and I see no evidence that any of them are peer reviewed.

- One is a scientific criticism of another paper; it cites the claiming paper uncritically, in passing.
- Another is a scientific-seeming mini-article that doesn't appear to be formally published; it cites the claiming paper with hedging words and doesn't read as an endorsement of its conclusions.
- I cannot find the citations in the other 3 citing publications.

None of these citations gives me confidence one way or the other. It is not surprising that no peer-reviewed publications have cited this paper yet: research and peer review are slow processes, and it may take a few more years. If you are reading this answer in the future, the Google Scholar link above should update automatically.

Conclusion: This paper follows the standard scientific process, which is encouraging. However, to know whether the science is solid, we have to wait either for replications (which will probably never happen) or for other researchers to build upon this work. Until then, we cannot really know whether the conclusion holds.

Note that, depending on the field, it may take a while before a new article is referenced by other peer-reviewed publications. Some journals in my field have a backlog of almost three years: if I submit a paper today that references a paper that was just published, it may take up to three years until my own paper is published in turn. Needless to say, this state of affairs is very frustrating and a serious roadblock to progress in the field, but some academic traditions seem to be extremely slow to change.
– Schmuddi, May 28 at 7:50

I don't fully agree with your (4). First, both work in political science, not psychology. Second, while they certainly don't appear to be cranks, they don't seem to be hugely prolific based on their publications. Sure, this differs between fields, and you have to consider their young academic age. But they have only 7 peer-reviewed publications under their belt, and 3 of these list both Peterson and Palmer as co-authors. To me, this suggests the authors may still be junior researchers who have yet to build up their academic reputation.
– Schmuddi, May 28 at 8:13

@Schmuddi First comment: edited to make that clear. Second comment: fixed my mistake about political science vs. psychology. The point about them having few publications is relevant, but it doesn't change my wait-and-see conclusion. Experienced professors can use poor methods and produce false conclusions just the same as young ones.
– BobTheAverage, Jun 1 at 14:50

Good summary, Bob! In my experience, the most common red flags in soft-science papers are (a) an overly small sample size, (b) a non-representative sample used to draw general conclusions, and (c) excessively permissive p-values, which make the result of no value until it is robustly reproduced. Did you find any of these?
– Sklivvz♦, Jun 1 at 14:54

@Sklivvz I am not really qualified to critique their methods. Whether a sample is too small or non-representative can be somewhat subjective, and I would be skeptical of any Skeptics answer that evaluated those points unless they were glaringly bad.
– BobTheAverage, Jun 1 at 14:58