The journal retracts the 17 May 2016 article cited above. Following publication, concerns were raised regarding the scientific validity of the article. The Chief Editors subsequently concluded that aspects of the paper’s findings and assertions were not sufficiently matched by the level of verifiable evidence presented. The retraction of the article was approved by the Chief Editors of Human Neuroscience and the Editor-in-Chief of Frontiers. The authors do not agree to the retraction.

Here’s how the study worked: Researchers recruited twelve people who claimed to be able to judge whether a person is alive or dead based on their photographs. The participants looked at over 400 pictures, some of people who were still alive and some of people who had died since the picture was taken, and had eight seconds to determine the current status of each photo subject. The participants were successful just over half the time (53.8%), slightly better than chance. The authors write in the abstract:

Our results support claims of individuals who report that some as-yet unknown features of the face predict mortality. The results are also compatible with claims about clairvoyance and warrant further investigation.

First author Arnaud Delorme, a researcher with affiliations at the University of California San Diego and the Institute of Noetic Sciences, told us that the journal initially promoted the paper on Facebook and Twitter; a blogger for the journal even interviewed Delorme for an article. Then the journal decided not to run the blogger’s piece. And then on August 8, another blogger posted a criticism of the study — noting, for example, that the study contained no control group, and the mediums guessed incorrectly 46.2% of the time.

Next, Delorme told us:

On August 11, 2016, Dr. Gearóid Ó Faoleán, the Ethics and Integrity Manager of FHN, sent us an email stating that the article was going to be retracted (see attached letter). We replied immediately, stating that we could not agree to the retraction without being informed of the specific nature of their concerns. Dr. Faoleán did not respond to our request for more information.

Delorme says he never got a specific reason for the retraction. In the attached letter he mentions above, Ó Faoleán told Delorme and his co-authors:

We have become aware of serious issues concerning the scientific soundness and methodology of your published article. Following an internal investigation by the journal Chief Editors and senior Frontiers editorial staff, it was determined that the paper does not meet the scientific standards of the journal and will shortly be retracted.

We contacted Ó Faoleán, who told us:

Concerns were raised about this article post-publication. While a subsequent investigation by the Chief Editors determined that the article should be retracted, the retraction statement serves as our public statement thereof.

We’ve got many questions about this paper and its retraction — namely, if the journal deemed the results to be so problematic, how did it pass peer review and get published in the first place? We’ve contacted the two reviewers listed on the paper, and will update if they respond.

Not surprisingly, the paper had some alternative sources of funding: the Bial Foundation, which aims to “foster the scientific study of the human being from both the physical and spiritual perspectives,” and one of Delorme’s affiliations, the Institute of Noetic Sciences, which focuses on

topics ranging from intuition, distant and ‘energy’ healing, to mind-matter interaction, transformative experiences, and the interactions between beliefs, behavior and worldviews.

This isn’t the first time Frontiers has pulled a paper after readers did a double-take: As we reported in March, Frontiers in Computational Neuroscience pulled a paper after several commenters on PubPeer raised questions about its contents, suggesting it was “gibberish” that may have been computer-generated; a large portion of the article was just statements and citations.

This is nonsense. Of course you can tell from a photograph whether a person is dead or alive with increasing probability of being correct, depending on the time of the taking of the photograph. I have seen many photographs of individuals with clear evidence of the surroundings and props being from the 19th Century or beginning of the 20th Century, in which case there is a strong likelihood that the subject no longer is alive. Even simple clues such as a black-and-white or grainy photograph may be enough. Hairstyles and clothing are other clues. What was the hypothesis? Is this another indication of the unusually low rejection rate of papers submitted to Frontiers?

Yes, those are valid points. And you mentioned hairstyle.
The stimuli were:
1. 108 old photographs of young individuals (1939–1941 school yearbooks), mostly dead (the actual live:dead ratio is not given, but we are told that “the ratio between alive and deceased individuals matched statistics of average life expectancy in the US for a given age group”).

3. 160 recent photographs, older individuals, mostly alive, possibly familiar (“photos of state politicians for about two-thirds of the images, as well as from photos accompanying obituaries of businessmen”).

So if the individual is not school age, it’s a recent photograph of someone who’s probably alive.

If you read the article (Google the title on the Internet), you will see that images were balanced along 8 different features (by 3 independent judges), including the time the photograph was taken, whether people were smiling or not, etc… It does not rule out possible bias, but it certainly minimizes it. I am the author of this manuscript.

These problems could have been easily solved by having a control group, i.e. people who don’t claim psychic abilities aka “mediumship,” predicting dead/alive from the same photos. You could then compare the percentages of correct answers between the two groups with a statistical analysis. Since no control group was included in this study, we don’t know whether the small but statistically significant rate of correct predictions (overall mean accuracy on this task was 53.8%, where 50% was expected by chance; p < 0.004, two-tailed) was due to “mediumship” or to subtle characteristics of the photos. A Bayesian statistician would say that we must reject the finding even if we had a control group and there was a statistically significant difference in favor of the “medium” group, since “mediumship” is scientifically impossible.
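For readers curious how a “better than 50%” claim like this gets tested, here is a minimal sketch of an exact two-tailed binomial test against chance. The pooled trial counts below (12 mediums × 404 photos) are assumptions for illustration; pooling ignores between-subject variance, and the paper’s own analysis may have been computed differently (e.g. across the twelve subjects).

```python
from math import comb

def binom_two_tailed(k, n):
    """Exact two-tailed binomial test against chance (p = 0.5).
    Because the null distribution is symmetric, the two-tailed
    p-value is twice the smaller tail probability, capped at 1."""
    tail = min(k, n - k)
    p_tail = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * p_tail)

# Hypothetical pooled numbers: 12 mediums x 404 photos = 4848 trials,
# with 53.8% correct (~2608 hits). Illustration only, not the
# paper's actual analysis.
n = 12 * 404
k = round(0.538 * n)
print(f"{k}/{n} correct, p = {binom_two_tailed(k, n):.2g}")
```

Under these assumed pooled counts the p-value is far below the 0.004 reported; the point is only that a small per-trial edge becomes highly significant when trial counts are large.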

The null hypothesis in this study was that performance on this type of image classification (alive/deceased) is at chance level. Rejecting the null hypothesis for any group of subjects is the main result of this article. The goal of the study was not to compare groups of subjects on this task. I am the author of this study.

It would be very wrong to reject a statistical finding because our *interpretation* appeared scientifically impossible, because there could be other valid non-chance interpretations: in this case, a difference in individuals’ ability to assess the age of photos that correlates with their likelihood of claiming to be a medium.

But definitely, the reasoning that led to the null hypothesis being 50% is unsound.
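The commenters’ objection can be made concrete with a toy calculation. If photo era correlates with vital status, a cue-only rule (“old photo, guess dead; recent photo, guess alive”) beats 50% with no psychic ability at all. The 80%/80% class ratios below are assumptions for illustration, not figures from the paper.

```python
# Hypothetical stimulus composition, using the set sizes quoted above.
old_n, old_dead_rate = 108, 0.80         # 1939-41 yearbook photos, mostly dead (assumed rate)
recent_n, recent_alive_rate = 160, 0.80  # recent photos, mostly alive (assumed rate)

# Rule: guess "dead" for every old photo, "alive" for every recent one.
hits = old_n * old_dead_rate + recent_n * recent_alive_rate
accuracy = hits / (old_n + recent_n)
print(f"cue-only accuracy: {accuracy:.1%}")  # prints 80.0% under these assumptions
```

Under any assumed rates above 50%, such a rule exceeds chance, which is why the 50% null hypothesis is only defensible if the stimulus features carry no information about vital status.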

As the first author of this article, my response is that I believe this retraction violates the Committee on Publication Ethics (COPE) guidelines, because no objective reasons were provided for the retraction. Section 3.5 of COPE states that “Journals should have a declared mechanism for authors to appeal against editorial decisions.” Despite our repeated messages to the editor, this opportunity was not given to us. Some responses mention the lack of a control group, but this is not the reason the article was retracted; the methodology of the article was not actually put in question in the retraction statement. Instead, the editors are questioning the conclusions in vague terms. As for the control group, it depends on the question you ask. If you want to test whether a given group of subjects has higher performance than a control group, then a control group is relevant. If you want to test whether people can perform better than 50%, then a control group is irrelevant.

Upon being contacted by Retraction Watch, I stated that I believed this retraction violates the Committee on Publication Ethics guidelines. This opinion is not reflected in this blog piece. A longer, more detailed, and I believe less biased account that details all the events leading to this retraction is available at http://noetic.org/about/press/retraction.

By all means critique a paper or study; that’s normal and expected, and I agree a control group would have made the study more robust. But to retract a paper without giving the author(s) an opportunity to respond, as alleged by Delorme, is certainly poor professionalism on the editors’ part.
It would certainly be possible to replicate the study with, perhaps, a within- or between-subjects design.

Undoubtedly this paper could have been improved, but the same could be said for any published scientific paper. To retract it once published, without a convincing reason for this action, is surely a matter of Frontiers following a misguided policy of political correctness, one that benefits its own vested economic interests as a journal identified as predatory, rather than following genuine scientific practice. This incident is reminiscent of the attacks made on the Journal of Personality and Social Psychology for having published a similarly controversial paper by Daryl Bem, but that journal had the integrity to hold its ground, in contrast to Frontiers’ shameful decision to cave to pressure. There are many sincere researchers within parapsychology who are doing good scientific research, even if the worth of their findings is hotly debated. Consequently, the post-hoc rejection of this paper without good cause should be seen as an expression of closed-minded prejudice against an entire area of scholarly inquiry.