Last year a mainstream psychology researcher called Daryl Bem published a competent academic paper, in a well-respected journal, showing evidence of precognition. Instead of designing new studies to see whether people could consciously tell you about the future, he ran some classic psychology experiments backwards.
For example, in experiments on subliminal influence, participants are presented with two mirror images of the same picture and asked which they prefer: they are less likely to choose an image when a negative subliminal image is flashed up for a few milliseconds on that side before they make their choice. In the Bem study, the negative images were flashed up after participants made their choice, yet they were still less likely to choose the image on the side with the nasty subliminal image.

This was all pretty kosher, and statistically significant, and I wasn’t very interested, for the same reasons you weren’t: if humans really could see the future, we’d probably know about it already; and extraordinary claims require extraordinary evidence, rather than one-off findings. There’s plenty of amazing stuff, in our infinitely distracting universe, and I’ll pay attention to the cheesy precognition stuff when the evidence is good and replicated.

Now the study has been replicated. Three academics – Stuart Ritchie, Chris French, and Richard Wiseman – have re-run three of these backwards experiments, just as Bem ran them, and found no evidence of precognition. They submitted their negative results to the Journal of Personality and Social Psychology, which published Bem’s paper last year, and the journal rejected their paper out of hand. We never, they explained, publish studies that replicate other work.

This squabble illustrates two problems facing all of science, which have still never been adequately addressed.

The first is the problem of context: these positive results may have happened purely by chance, against a backdrop of negative results that never reached the light of day. We know that researchers and academic journals, just like newspaper journalists, are more likely to publish eyecatching positive results. We know that even if you analyse one study’s results in lots of different ways, you increase the likelihood of getting a positive finding purely by chance.
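The inflation from analysing one study in lots of different ways is easy to demonstrate with a toy simulation (a hypothetical sketch, not anything from Bem’s paper): if a truly null effect is tested twenty independent ways at the conventional p < 0.05 threshold, the chance of at least one “significant” finding somewhere is about 64%.

```python
import random

random.seed(0)

def chance_of_false_positive(n_analyses=20, alpha=0.05, trials=5000):
    """Estimate how often a truly null study yields at least one
    'significant' result when analysed n_analyses independent ways."""
    hits = 0
    for _ in range(trials):
        # each analysis of a truly null effect comes up 'significant'
        # with probability alpha, independently of the others
        if any(random.random() < alpha for _ in range(n_analyses)):
            hits += 1
    return hits / trials

# Analytically: 1 - (1 - 0.05)**20, which is about 0.64
print(chance_of_false_positive())
```

The same arithmetic applies across a whole field: with enough unregistered attempts, some striking positive result will surface somewhere by chance alone.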

So replicating these findings was key – Bem himself said so in his paper – and keeping track of the negative replications is vital too. For clinical trials, there is a system of registering your trial before you recruit participants, to reduce the risk of negative results being buried (it’s imperfect, as I’ve written, but it exists). Outside of trials, people tend not to bother, which puts whole fields at risk of spurious positive findings: Wiseman has set up a register for people to declare that they were attempting to replicate Bem’s work.

But the second issue is how people find out about stuff. We exist in a blizzard of information, and stuff goes missing: as we saw recently, research shows that people don’t even hear about retractions of outright fraudulent work. Publishing a follow-up in the same venue that made an initial claim is one way of addressing this problem (and when the journal Science rejected the replication paper, even they said “your results would be better received and appreciated by the audience of the journal where the Daryl Bem research was published”).

It’s hard to picture many of these outlets giving equal prominence to the new negative findings that are now emerging, in the same way that newspapers so often fail to return to a debunked scare, or a not-guilty verdict after reporting the juicy witness statements.

All the most interesting problems around information today are about structure: how to cope with the overload, and find sense in the data. For some eyecatching precognition research, this stuff probably doesn’t matter. What’s interesting is that the information architectures of medicine, academia and popular culture are all broken in the exact same way.

++++++++++++++++++++++++++++++++++++++++++
If you like what I do, and you want me to do more, you can: buy my books Bad Science and Bad Pharma, give them to your friends, put them on your reading list, employ me to do a talk, or tweet this article to your friends. Thanks!
++++++++++++++++++++++++++++++++++++++++++

26 Responses

kjm2 said,

Good points well made. But on the specific Bem stuff, I do wonder why Stuart Ritchie, Chris French, and Richard Wiseman don’t just put their report up on a website somewhere, pending getting it published. That’s what Bem did, that’s what Wagenmakers et al. did (whose refutation of Bem’s conclusions was published in JPSP alongside the Bem paper), that’s what Bem did with his rejoinder to Wagenmakers, and Wagenmakers in his further come-back, and the same goes for several other critics of Bem’s work. Most of the heated discussion of Bem’s work in the media and blogosphere happened before it was published in JPSP (online on their site in January, in print in the journal in March) — most of the links you give came out months before the JPSP online publication, and all of them before it was in print. And almost all journals, including JPSP (see www.apa.org/pubs/authors/posting.aspx) don’t let online posting get in the way of subsequent publication. So what are Stuart Ritchie, Chris French, and Richard Wiseman up to?

The reason Ritchie, French, and Wiseman want their paper published in JPSP is twofold. Generally speaking, putting a paper up on the web is perfectly feasible, but to do so means that critics can dismiss the paper out of hand for not being published in an established peer-reviewed journal. Also, as professional scientists, their performance is measured by publications and citations *in peer-reviewed journals*, so while they may indeed end up publishing on the web if all else fails, I suspect that what they are “up to” as you so kindly put it, is submitting the paper to another journal. (And I note that the link you provided applies to papers *already accepted* by JPSP. I don’t see how it is supposed to help the author of a paper already rejected.)

The second reason RF&W want the paper published in JPSP is because JPSP published the Bem paper in the first place, so it seems the incredibly obvious place to have their replication paper published. For JPSP to categorically state that they do not publish replication studies is a fundamental failure of scientific ethos. Replication is one of the cornerstones of science. For any editor to say that their journal won’t publish any replication studies is a terrible, terrible betrayal of the scientific method. I would also note that it is particularly troubling coming from one of the official journals of the American Psychological Association (whose mission statement includes “to advance the creation, communication and application of psychological knowledge to benefit society and improve people’s lives”.)

To make matters worse, the submission guidelines for JPSP make no mention whatsoever of a policy of not accepting replication studies, and a quick search on Google Scholar shows that JPSP has published plenty of replication studies before…including a paper with it right in the title: Rotton and Conn’s “Violence is a curvilinear function of temperature in Dallas: A replication” in 2000.

To make matters worse again, the original paper on precognition does not really fit the journal’s field well either, since it is about neither personality nor social psychology. Perhaps this is unnecessarily cynical of me, but it looks to me like an editor pushed through the Bem paper despite it not being on topic for the journal and is now coming up with post-hoc rationalisations for not accepting a contrary finding.

In fact, kjm2, given your unusual awareness of the minutiae of the publication of Bem’s paper combined with your strange lack of awareness of the implications of APA journal policy, your oddly phrased suspicion of RF&W’s motives, and your anonymity, may I ask if you happen to be the editor in question?

Lunarnaut said,

What disturbs me almost more than the editor’s absurd dismissal of replication results is that anyone has the willpower to read Daryl Bem’s paper at all. The publication is riddled with convoluted descriptions, unexplained decisions, ambiguous language, and awkward phrasing. I’m glad some people have the patience to review this work with a critical eye. I made it two-thirds of the way through and became so frustrated just trying to translate Bem’s chaos into some kind of coherent concept that I simply gave up.

PsyPro said,

2. Contrary to Ben’s benign characterization of Bem’s report of the research, Alcock has done a fine deconstruction:

Bem’s response is here:

And, Alcock’s response to Bem is here:

Bem has long been one of my heroes: his self-perception approach long preceded but encompassed the now-fashionable embedded and embodied cognition. But one is left seriously wondering what has happened to one of our former champions of clear thought and rationality. Recent years have seen him slipping more and more into this sad domain. One wonders which explanation is worse: a fully aware acceptance of nonsense, or a sad decline associated with dementia.

msf said,

For those of you quickly judging the journal for rejecting the paper out of hand, keep in mind the possibility that the authors may not have accurately represented the journal’s response, or that something was simply lost in translation. One is always upset when a paper is rejected, and that makes it easy to oversimplify when interpreting these comments. So before passing judgment we should either wait for a formal statement from the editor, or see the actual rejection letter.

PsyPro said,

I think a more likely explanation is that JPSP is embarrassed at having accepted Bem’s flawed article, and, having allowed for the simultaneous publication of one contrary piece is just hoping to move on, burying the whole mistake in the past. JPSP has nothing to gain (and much to lose) by continuing the (false) controversy.

@kjm2: Still no reply? I guess it’s Easter. I await your response with interest.

@msf: I am more than willing to entertain the possibility that the journal editors have been perfectly professional; should that eventuate, I shall turn the full strength of my vitriol onto those who misrepresented their conduct. Your point is well-taken, though, and I would like to know more.

@PsyPro: It would be a novel tactic to distract readers from a false controversy by creating a much bigger real one.

@PsyPro: The article is not “flawed,” at least by the rules set up in [our?] field. We have set a certain level (.05) of risk for “finding” false positives, and then the rules of statistical inference and peer review proceed as they may. This is not the first false positive result ever published, and probably not the only one published in that particular issue of JPSP.

In essence: don’t hate the player, hate the game. The problem is the discipline’s accepted alpha (p-value) levels, the use of null hypothesis testing over Bayesian analyses, and the selective publication of positive results over null results… the problem is not with Bem or anything he did in this study.

kjm2 said,

I was not suggesting that RF&W shouldn’t try to get the paper into JPSP, or if the journal doesn’t change its mind, into somewhere else respectable. On the whole I agree with Ben’s important points in the original piece. I think JPSP should take it, subject of course to some proper peer review. I just don’t see why the rest of us should wait till that process is completed.

There’s nothing in the APA’s rules (not that I can find anyway) saying you can’t put something on the Web, either in advance of publication of an accepted paper, or before a paper has been accepted for publication, as long as you make its status clear. The first paragraph in the link I gave is about unpublished papers, not about those accepted for publication. As far as I’m aware that’s generally the position — just posting something on the web (as opposed to in a web-only journal, for instance) isn’t treated by journals as a prior publication and will not get in the way of later journal publication. Indeed in my own academic area (statistics) it’s probably now the exception for things not to appear in advance on the web in this way. It’s actually the web equivalent of the circulation of draft papers and departmental technical reports that’s always gone on. The discussion that ensues is usually thought to improve the quality of the published paper when it appears. I can’t believe this doesn’t happen in psychology too.

Several of the writings in this controversy were put on the Web before acceptance anywhere. That goes, for instance, for the Bem, Utts and Johnson rejoinder to Wagenmakers et al, and to the Wagenmakers et al reply to that. It applies to the criticism of the original Bem paper by Rouder and Morey (which, personally, I thought better than the Wagenmakers et al. paper).

I don’t know whether Bem posted his original paper before acceptance because I hadn’t heard about it then. But he posted it on the web in or before November last year, and it didn’t appear in the print edition of JPSP till March this year (and not online till January this year, and they don’t post everything early online). The initial media reports on the Bem paper date back to November 2010, and they made it clear the paper was already accepted by then. So, allowing time for peer review, revision (because nearly everything is sent back for some sort of revision) and re-review, the delay between acceptance and publication for JPSP must be over 6 months — I suspect it’s well over 6 months. (Many journals take considerably longer than that.)

So if RF&W are intending to keep quiet about what their replication paper says until it’s published, we won’t see it for months (or possibly years, if they have to trail it round a few journals, which I hope doesn’t happen). Their paper is presumably finished (or it couldn’t have been rejected yet). So why can’t they post it so we can all see it now? That’s all I’m asking.

efctony said,

phayes said,

I disagree. Even working within the “scandalously” still-prevalent orthodox inference framework, I think it is (or should be) easy to see that retrocausality psychology experiments are, like homeopathy clinical trials, hopelessly flawed. Wagenmakers et al.’s Bayesian critique was sufficient in this case, but it was incomplete (I’m assuming this is also true of the Rouder and Morey one?). No mention was made in that critique of the logical consequences of trying to test a hypothesis which is less plausible than the particular spectrum of error/deception hypotheses associated with the experiment(s). But surely even non-Bayesian/Jaynesian researchers ought to be able to see that others will justifiably interpret any genuinely positive results they might get in experiments like Bem’s as evidence of some uncontrolled-for causal influence rather than a retrocausal one?

degroot97a said,

In the natural sciences, most journals have the option to submit a comment on a published paper. This also allows the original authors to reply to the comment. This seems to me like an adequate solution to this ‘problem’.
See for example prb.aps.org/info/polprocb.html#short

Some Guy said,

“The article is not “flawed,” at least by the rules set up in [our?] field. We have set a certain level (.05) of risk for “finding” false positives and then the rules of statistical inference and peer review processed as they may.”

And if the p-value that Bem came up with was something like 0.049 then that would be the end of it. But it wasn’t. He came up with an overall p-value of 1.34×10⁻¹¹. It’s hard to believe that you could get that kind of result without doing something very very wrong.

phayes said,

“It’s hard to believe that you could get that kind of result without doing something very very wrong.”

Depends what you mean by “something very very wrong”. Taking that overall figure at face value, the illustrative skeptical prior in Wagenmakers et al gets updated to a posterior plausibility for retrocausality of ≈ 10⁻⁹. A few more results like that and we’ll have to kiss goodbye to our causal, orientable spacetime home and most of the stuff within it…

/o\

…or, we could realise that P(“something very very wrong”) ≫ P(“retrocausal absurdity”) and do our inferences and our science properly in the first place.
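The back-of-envelope update described above can be written out explicitly. This is a rough illustration resting on two loudly flagged assumptions: prior odds of 10⁻²⁰ for retrocausality (the kind of illustrative skeptical prior discussed in the Wagenmakers et al. critique), and — very naively — treating 1/p for the reported overall p-value as if it were a Bayes factor.

```python
# Illustrative sketch only: the 1e-20 prior and the use of 1/p as a
# Bayes factor are assumptions for demonstration, not Bem's analysis.

prior_odds = 1e-20          # skeptical prior odds for retrocausality
p_value = 1.34e-11          # the overall p-value reported above
bayes_factor = 1 / p_value  # crude stand-in for the evidence, ~7.5e10

posterior_odds = prior_odds * bayes_factor
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior odds ~ {posterior_odds:.1e}")  # ~7.5e-10, i.e. ~1e-9
```

Even evidence strong enough to look overwhelming on its own barely dents a sufficiently skeptical prior — which is the point of the comment above.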

bf said,

I note most people here assume it’s obvious that backwards causation is logically impossible. This is not the case. Even the definition of causation is an extremely difficult *and unsolved* problem – in that there is no consensus solution among those who study it (philosophers of science/metaphysics).

Some proposed solutions have cause preceding effect by definition, thus making backwards causation impossible, but some don’t. The latter would entail either that backwards causation is (a) physically impossible but not logically impossible (i.e. is inconsistent with the laws of physics, but they are merely contingent), or (b) that it does happen but is rare or is not obvious (e.g. is easily misidentified as forwards causation).

(b) is perfectly plausible, not least because the laws of physics are not fully solved, and there are particular problems of causation in particle physics, a key area which is unsolved.

phayes said,

If most people here are assuming it’s a logical impossibility they’ll be relieved to know that it’s an unnecessarily strong assumption and they don’t have to worry about the philosophers’ difficulties. Some weak assumptions about the causal structure, applicable physics and initial data on various achronal subsets of interest of a spacetime patch containing (the description of) one of Bem’s experiments are really all that’s needed. Using proper inference¹ (instead of the cartoonish formulaic pseudo-reasoning apparently deeply entrenched in the ‘soft’ sciences), we conclude that there are better explanations for the strange results than that the experiments have probed the fundamental nature of physical reality and revealed something more surprising than anything the LHC is ever likely to.

LancasterT said,

I think what you’re battling against, and in general what you will always fight against with this subject, is the difference between precognition and highly developed intuition. The study itself doesn’t control for this distinction well enough to allow only one or the other to be the case.
I think we’re waiting for a Starship Troopers type of “guessing the cards” level of precognition before the mainstream will jump on board.