NeuroChambers

Monday, 30 March 2015

It's a slightly sad day for me. As I explained to Damian Pattinson in an email, I remain as much a supporter of the PLOS ONE mission as when I joined the editorial board over two years ago. PLOS ONE has done more than any other journal to combat publication bias and to normalise open data practices. Sure, the PLOS ONE mechanism doesn't always work perfectly, but in terms of philosophy it is light years ahead of most other journals in the social and life sciences.

The reason I'm leaving PLOS ONE isn't that they did anything wrong (although it must be said that the volume of editorial requests is unfeasibly high). Instead, the Registered Reports initiative is really starting to gain traction and I am increasingly finding myself helping other journals launch the initiative or even serving on editorial boards that are offering the format. So I have decided to focus my efforts on editing for journals that offer, or plan to offer, Registered Reports. For now, at least, PLOS ONE isn't willing or able to do so. Meanwhile, the list of adopting journals continues to grow; the latest exciting addition is Royal Society Open Science, which will be launching Registered Reports across all sciences later this year.

In the interests of transparency I should say that, for the same reason that I am leaving PLOS ONE, I also declined last week to join the editorial board of Nature Scientific Reports. When the invitation arrived, I responded that I would be happy to accept if they would consider offering Registered Reports, and that I would be delighted to help them set up the format. I had hoped their response might be positive given the stated mission of the journal to avoid setting "a threshold of perceived importance to the papers that it publishes; rather, it publishes all papers that are judged to be technically valid." Unfortunately they responded: "We have considered venturing into the world of registered reports, but it isn’t something we’re able to get involved with right now."

Fair enough, but then I'm afraid I can't (in good conscience) join your editorial board. A growing number of journals claim to celebrate scientific validity and transparency above the standard values (like "novelty" and "impact" of findings) -- in fact, the banner of transparency could almost be said to be in vogue right now -- but I find that the real litmus test is whether such journals are willing to accept papers before the results are known. If not then some small part of them still wants to selectively publish "good results". There is no room for fine print on the transparency banner.

What I have recently done is join the editorial board of Collabra, an interesting new open access journal being launched by the University of California Press. Collabra have agreed to offer Registered Reports and we will be updating the Open Science Framework information hub for Registered Reports as soon as there is further news.

So - my thanks and a fond farewell to PLOS ONE. And my message to any other journals: if you want Chris Chambers on your editorial board (not that anyone really should of course!) then you need to either offer Registered Reports or plan to do so in the future.

Trust me, you won't regret it.*

___________

* Well, you won't regret offering Registered Reports. I, on the other hand, am an entirely different matter...

At the bottom of the post I've included a full list of journals offering Registered Reports and related initiatives. Enjoy!

Question 1: What would you say to critics who argue that pre-registration puts "science in chains"? Are their concerns justified?

Professor Dorothy Bishop, University of Oxford

I think
there's a widespread misunderstanding of pre-registration. Its main function is
to distinguish hypothesis-testing analyses from exploratory analyses. It should
not stop exploratory research, but should make it clear what is exploratory and
what is not. Most of the statistical methods that we use make basic assumptions
that are valid only in a hypothesis-testing context. If we explore a multidimensional
dataset, decide on that basis what is interesting, and then apply statistical
analysis, we run a high risk of obtaining spurious 'significant' findings.
Currently science is not so much in chains as bogged down in a mire of
non-replicable findings, and we need to find ways to deal with this. I
increasingly find myself reading papers and wondering just what I can believe -
particularly in areas of neuroscience where there are huge multidimensional
datasets and multiple researcher degrees of freedom in choosing how to analyse
findings. I would not insist that pre-registration is mandatory, but I think
it's great to have that option and I hope that as the new generation of
scientists learn more about it, they will come to embrace it as a way of clarifying
scientific findings and achieving better replicability of research.

Professor Tom Johnstone, University of Reading

I think the
concern that scientists have of being "put in chains" is
understandable. We've all probably had the frustrating experience of confronting
a reviewer or editor who believes there's one way, and one way only, to collect
data or perform analysis, for example. Creativity and adaptive thinking and
problem solving are very much a part of science, and mustn't be stifled.

Yet the
solution is to make sure that the move towards pre-registration is accompanied
by an expansion of the ways in which researchers can openly report innovative
exploratory research, and the iterative development of new methods. As you've
pointed out, if we didn't try to shoehorn all of our research into the
hypothesis-testing model, then we'd relieve a lot of the pressure for people to
engage in post hoc hypothesis creation.

Dr Daniël Lakens, Eindhoven University of Technology

Science is
like a sonnet. There is a structure within which scientists work, but that does
not have to limit our creativity. As Goethe remarked: ‘In der Beschränkung
zeigt sich erst der Meister’ - Mastery is seen most clearly when constrained.

Dr Brendan Nyhan, Dartmouth College

I think the
idea that pre-registration will put “science in chains” is attacking a straw
man. No one is proposing that it should be the only way to conduct research.
There will still be every opportunity to pursue unanticipated findings. The
widespread availability of pre-registered journal articles will make it easier to
distinguish true hypothesis-testing from exploratory research. For
instance, a researcher might observe an unanticipated result and then
pre-register a replication study to test the effect more systematically.

Professor Dan Simons, University of Illinois

Frankly,
this criticism is nonsense. Pre-registration just eliminates the ability to
fool yourself into thinking some post-hoc decision was actually an a-priori
one. Specifying a plan in advance just means that you actually did plan your
"planned" analyses. As psychologists, we should know how easily we
can convince ourselves that the analysis that worked was the logical one to do,
after the one we first thought to try didn't work. If your theory makes a prediction,
you should be able to specify it in advance and you should be able to specify
what outcomes would support it. Yes, it takes more work up front to
pre-register a plan. But, if you truly are conducting planned analyses, all you
are doing is shifting when you do that work, not what you're doing.

Nothing
about pre-registration prevents a researcher from conducting additional
exploratory analyses that were not part of the registered plan.
Pre-registration just makes clear which analyses were planned and which ones
were exploratory. How does that constrain science in any way?

Question 2: Do you think pre-registration will influence the future of publishing in psychology, neuroscience and beyond?

Professor Tom Johnstone, University of Reading

I do think
that the move towards registered studies will be of benefit to science, not
only because it will encourage better research practice, but also because it
will lessen the file-drawer problem by ensuring that "null" results
are published. It will also hopefully catalyse a shift towards more informative
statistics than standard NHST. That's not to say there won't be problems;
undoubtedly there will be (concerns about research timelines especially for
junior researchers need to be tackled head-on, for example).

Dr Daniël Lakens, Eindhoven University of Technology

It will
complement the way we work in important ways. Especially in ‘hot’ research
areas, which are at higher risk of Type 1 errors (Ioannidis, 2005),
pre-registration will greatly improve our understanding of how likely it is
that the findings are true.

Dr Brendan Nyhan, Dartmouth College

Pre-registration
could transform the future of publishing if funders, government agencies,
reviewers, editors, and tenure and promotion committees demand it. The movement
will only succeed if it changes expectations about research credibility among a
wider group of scholars and stakeholders than its most devoted advocates. It
should also take further steps to broaden its appeal to researchers - most
notably, by encouraging journals to adopt formats like Registered Reports that
reduce risk to scholars concerned about their ability to publish pre-registered
null results given the publication biases in scientific journals.

Professor Dan Simons, University of Illinois

Pre-registration
effectively eliminates hypothesizing after the results are known. It keeps us
from convincing ourselves that an exploratory analysis was a planned one. It is
perhaps the best way to keep yourself from inadvertent p-hacking and to
convince others that your hypotheses predicted rather than followed from your
results. Ideally, more journals will begin reviewing the registered plans as
the basis for publication decisions. Doing so would effectively eliminate the
file drawer problem. If a study is well designed, its results should be
published.

Question 3: Why do you think psychology and neuroscience are spearheading these initiatives, rather than other sciences?

Professor Dorothy Bishop, University of Oxford

I think
there are two reasons. First, most psychologists (though not neuroscientists in
general) get a good grounding in statistics at undergraduate level, so they
have been quicker to appreciate the problems that are inherent in 'false
positive psychology'. Second, psychologists study how people think and are
aware of how easy it is to deceive yourself at all kinds of levels: after all,
one of the first things that many students learn about is the Müller-Lyer
visual illusion, where you are convinced that two lines are different lengths
when in fact they are the same. That should make us more vigilant about always
questioning whether our findings are correct; we are taught to look for
counter-evidence rather than just confirming our pre-conceptions.

Professor Tom Johnstone, University of Reading

As to why
this is being led by psych/neuro, hard to say. Probably a case of the right
combination of factors coinciding (e.g. recent high-profile spotlight on QRP and
fraud in social psychology, links to medical research and associated ethics, in
which registration has been recently enforced, a few people willing to actively
push this forward), plus peculiarities of psych research compared to some other
disciplines (for example, speaking with my physics training hat on, the almost
complete reliance on NHST in psychology and neuroscience, rather than accurate
quantitative description of effects, and the almost total lack of replication).
There is, I think, a research culture difference here. That will be difficult
to change, but one has to start somewhere.

Dr Daniël Lakens, Eindhoven University of Technology

According
to Parker (1989), ‘psychology is in a continuous crisis’. Psychology has a
tradition of self-criticism. It is sometimes remarked that psychology’s
greatest contribution is methodology (e.g., Scarr, 1997), so it is not
surprising we are on the forefront of methodological improvements in the
current debate about ways to improve our science.

Dr Brian Nosek, University of Virginia

The reproducibility challenges facing science are
strongly influenced by the incentives and social context that shape scientists'
behavior. Understanding and altering incentives, motivations, and social
context are psychological challenges. Psychologists are ahead because
they are just applying their domain expertise on themselves.

---

One of the things I find
most fascinating about cognitive neuroscience is the way it is shaping our
understanding of unconscious sensory processing: brain activity and behaviour caused
by imperceptible stimuli. Lurking below the surface of awareness is an army of highly organised activity that influences our thoughts and actions.

Unconscious systems are, by definition,
invisible to our own introspection but that doesn’t make them invisible to
science. One simple way to unmask them is to gradually weaken an image on a
computer screen until a person reports seeing nothing. Then, when the stimulus is
imperceptible, you ask the person to guess what type of stimulus it is, for instance, whether it is “<” or “>”.
What you find is that people are remarkably good at telling the difference. They’ll
insist they see nothing yet correctly discriminate these invisible stimuli at rates well
above chance – often 70-80% correct. It’s really quite head-scratching.
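
For readers who like to see the arithmetic, here is a minimal sketch of how you would check that this kind of "guessing" really does beat chance. The numbers are made up for illustration rather than taken from any particular study:

```python
# Hypothetical example: 150 correct forced-choice guesses out of 200 trials
# about a stimulus the person insists they cannot see. A one-sided binomial
# test against 50% chance shows how implausible that is under pure guessing.
from scipy.stats import binomtest

result = binomtest(k=150, n=200, p=0.5, alternative='greater')
print(f"75% correct over 200 trials: p = {result.pvalue:.2g}")  # vanishingly small
```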

Back in the 1970s, a psychologist named Larry Weiskrantz found that this contrast between conscious and
unconscious processing was thrown into sharp relief following damage to a part
of the brain called the primary visual cortex (V1). Weiskrantz (and later
others) found that patients with damage to V1 would report being blind to one
part of their visual field, yet, when push came to shove, they could
discriminate stimuli above chance or even navigate successfully
around invisible objects in a room. He coined the term “blindsight” for this intriguing phenomenon.

Since then, blindsight has drawn
the attention of psychologists, neurologists and philosophers. One of the major
debates in the literature has centred on the neurophysiology of the phenomenon:
how, exactly, is this unconscious vision achieved? Blindsight proved that information was somehow
influencing behaviour without being processed by V1.

Two schools of thought took shape.
One argued that, during blindsight, unconscious information reached higher
brain systems by activating spared islands of cortex near the damaged V1. An
opposing school argued that the information was taking a different road
altogether: an ancient reptilian route known as the retinotectal pathway, which
bypasses visual cortex to reach frontal and parietal regions.

Knocking out conscious
awareness with TMS was one thing – and apparently doable – but how could we
tell which brain pathways were responsible for whatever visual ability was left
over? Fortunately I’d recently moved to Cardiff University where Petroc
Sumner is based. Some years earlier, Petroc had developed a clever technique
to isolate the role of
different visual pathways by manipulating colour. When presented under
specific conditions, these coloured stimuli activated a type of cell on the retina
that has no colour-opponent projections to the superior colliculus. These
stimuli, known as “s-cone stimuli”, were invisible
to the retinotectal pathway (1). We teamed up with Petroc, and Chris set about learning how to generate these
stimuli.

Now that we had a technique
for dissociating conscious and unconscious vision (TMS), and a type of stimulus
that bypassed the retinotectal pathway, we could bring them together to
contrast the competing theories of blindsight. Our logic was this: if the
retinotectal pathway is a source of unconscious vision then blindsight should not
be possible for s-cone stimuli because, for these stimuli, the retinotectal
pathway isn’t available. On the other hand, if blindsight arises via cortical routes
at (or near) V1 then blocking the retinotectal route should be inconsequential:
we should find the same level of blindsight for s-cone stimuli as for normal
stimuli (2).

There were other aspects to
the study too (including an examination of the timecourse of TMS interference),
but our main result is summarised in the figure below. When we delivered TMS to visual cortex about a tenth of a second after the onset of a normal stimulus, we found textbook blindsight: TMS reduced awareness
of the stimuli while leaving unaffected the ability to discriminate them on ‘unaware’
trials. Crucially, we found the same thing for s-cone
stimuli: blindsight occurred even for these specially coloured stimuli that bypass
the retinotectal route. Since blindsight occurred for stimuli that weren’t processed by the retinotectal pathway, our results allow us to reject the
retinotectal hypothesis in favour of the cortical hypothesis. This suggests that
blindsight in our study arose from unperturbed cortical systems rather than the
reptilian route.

Our key results. The upper plot shows conscious detection performance when TMS was applied to visual cortex 90-130 milliseconds after a stimulus appeared. Compared to "sham" (the control TMS condition), active TMS reduced conscious detection for both the normal stimuli and the s-cone stimuli that bypass the retinotectal pathway. The lower plot shows the corresponding results for discrimination of unaware stimuli; that is, how accurately people could distinguish "<" from ">" when also reporting that they didn't see anything. For both normal and s-cone stimuli, this unconscious ability was unaffected by TMS. And because this TMS-induced blindsight was found for stimuli that bypass the retinotectal route, we can conclude that the retinotectal pathway isn't crucial for the blindsight found here.

While the results are quite clear
there are nevertheless several caveats to this work. There is evidence from
other sources that the retinotectal pathway can be important and our results
don’t explain all of the discrepancies in the literature. What we do show is
that blindsight can arise in the
absence of afferent retinotectal processing, which disconfirms a strong version
of the retinotectal hypothesis. Also, we don’t know whether
the results will translate to blindsight in patients following permanent injury.
TMS is a far cry from a brain lesion – unlike brain damage, it is transient, safe and reversible, which of course makes it highly attractive for this kind
of research but also distances it from work in clinical patients. Furthermore, even
though we can rule out a role of the retinotectal pathway in producing
blindsight as shown here, we don’t
know which cortical pathways did produce the effect. Finally, our paper reports
a single experiment that has yet to be replicated – so appropriate caution is
warranted as always.

Still, I’m rather proud of
this study. I take little of the intellectual credit, which belongs chiefly to
Chris Allen. Chris brought together the ideas and tackled the technical
challenges with a degree of thoroughness and dedication that he’s become well
known for in Cardiff. This paper – his first as primary author – is a nice way
to kick off a career in cognitive neuroscience.

1. By “afferent” I mean the initial “feedforward”
flow of information from the retina. It’s entirely possible (and likely) that
s-cone stimuli activate retinotectal structures such as the superior colliculus
after being processed by the visual cortex and then feeding down into the
midbrain. What’s important here is that s-cone stimuli are invisible to the
retinotectal pathway in that initial forward sweep.

2. Stats nerds will note that we are
attempting to prove a version of the null hypothesis. To enable us to show strong evidence for the null hypothesis, we used
Bayesian statistical techniques developed by Zoltan Dienes that assess the relative likelihood of H0 and H1.
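
For the statistically curious, here is roughly what that calculation looks like. This is only a minimal sketch of a Dienes-style Bayes factor for a single contrast, with placeholder numbers rather than the actual values from our paper:

```python
# Sketch of a Dienes-style Bayes factor: how much better does H1 predict the
# observed effect than H0 (no effect)? Values of B below ~1/3 are conventionally
# read as substantial evidence for the null. All numbers here are illustrative.
import numpy as np
from scipy.stats import norm

def dienes_bf(obtained_mean, se, h1_scale, n_grid=10000):
    """B(H1 vs H0) for a normally distributed effect estimate.

    H0: true effect = 0.
    H1: true effect ~ half-normal(0, h1_scale), i.e. effects predicted to be
        positive, with smaller effects more plausible than very large ones.
    """
    theta = np.linspace(0, 10 * h1_scale, n_grid)         # candidate true effects under H1
    prior = 2 * norm.pdf(theta, loc=0, scale=h1_scale)    # half-normal prior density
    likelihood = norm.pdf(obtained_mean, loc=theta, scale=se)
    p_data_h1 = np.trapz(prior * likelihood, theta)       # marginal likelihood under H1
    p_data_h0 = norm.pdf(obtained_mean, loc=0, scale=se)  # likelihood under the null
    return p_data_h1 / p_data_h0

# e.g. a tiny observed TMS effect of 0.5% with a standard error of 2%,
# where H1 predicted effects on the order of 5% (all placeholder values)
print(dienes_bf(obtained_mean=0.5, se=2.0, h1_scale=5.0))
```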

Thursday, 16 January 2014

Let me get this out of the way at the beginning so I
don’t come across as a total curmudgeon. I think fMRI is great. My lab uses
it. We have grants that include it. We publish papers about it. We combine it
with TMS, and we’ve worked on methods to make that combination better. It’s the
most spatially precise technique for localizing neural function in healthy
humans. The physics (and sheer ingenuity) that makes fMRI possible is
astonishing.

There is a lot we can do. We got ourselves into this
mess. Only we can get ourselves out. But it will require concerted effort and
determination from researchers and the positioning of key incentives by
journals and funders.

The tl;dr version of my proposed solutions: work in
larger research teams to tackle bigger questions, raise the profile of a
priori statistical power, pre-register study protocols and offer
journal-based pre-registration formats, stop judging the merit of science by
the journal brand, and mandate sharing of data and materials.

Problem 1: Expense. The technique
is expensive compared to other methods. In the UK it costs about £500 per hour
of scanner time, sometimes even more.

Solution in brief: Work in
larger research teams to divide the cost.

Solution in detail: It’s hard to
make the technique cheaper. The real solution is to
think big. What do other sciences do when working with expensive techniques?
They group together and tackle big questions. Cognitive neuroscience is
littered with petty fiefdoms doing one small study after another – making
small, noisy advances. The IMAGEN fMRI consortium is a beautiful example
of how things could be if we worked together.

Problem 2: Low statistical power. Most fMRI studies are underpowered, which makes for noisy estimates and unreliable findings.

Solution in brief: Again, work
in larger teams, combining data across centres to furnish large sample sizes. We need to get serious about
statistical power, taking some of the energy that goes into methods development
and channeling it into developing a priori power analysis techniques.

Solution in detail: Anyone who
uses null hypothesis significance testing (NHST) needs to care about
statistical power. Yet if we take psychology and cognitive neuroscience as a
whole, how many studies motivate their sample size according to a priori
power analysis? Very few, and you could count the number of basic fMRI studies that do
this on the head of a pin. There seem to be two reasons why fMRI researchers
don’t care about power. The first is cultural: to get published, the most
important thing is for authors to push a corrected p value below .05.
With enough data mining, statistical significance is guaranteed (regardless of truth) so why would a career-minded scientist bother
about power? The second is technical: there are so many moving parts to an fMRI
experiment, and so many little differences in the way different scanners operate, that power analysis itself is very challenging. But
think about it this way: if these problems make power analysis difficult then
they necessarily make the interpretation of p values just as difficult.
Yet the fMRI community happily embraces this double standard because it is p<.05,
not power, that gets you published.
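
To be clear, the mechanics of an a priori power analysis are not the hard part. Here is a minimal sketch for a simple two-group comparison – the expected effect size is an assumed guess, and real fMRI designs are of course far more complicated – just to show how little work the calculation itself involves:

```python
# A priori power analysis for a two-sample t-test: how many participants per
# group are needed to detect an assumed effect of d = 0.5 with 90% power?
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # expected Cohen's d (an assumption, not a known value)
    alpha=0.05,
    power=0.9,
    alternative='two-sided',
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 85 per group
```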

Problem 3: Researcher ‘degrees of freedom’.
Even the simplest fMRI experiment will involve dozens of analytic options, each of which could be considered legal and justifiable. These researcher degrees of freedom provide an ambiguous decision space for analysts to try different approaches and see what “works” best in
producing results that are attractive, statistically significant, or fit
with prior expectations. Typically only the outcome that "worked" is then published. Exploiting these degrees of freedom also enables
researchers to present “hypotheses” derived from the data as though they were a
priori, a questionable practice known as HARKing. It’s ironic that the fMRI community
has put so much effort into developing methods that correct for multiple
comparisons while completely ignoring the inflation of Type I error caused by
undisclosed analytic flexibility. It’s the same problem in different form.
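
To see why this matters, here is a toy simulation – unrelated to any real study – of an analyst who tries several "legal" pipelines on pure noise and reports whichever one reaches p < .05:

```python
# Toy demonstration that undisclosed analytic flexibility inflates the
# false-positive rate. Here the alternative "pipelines" are caricatured as
# independent re-draws of null data; in reality pipelines are correlated,
# so the inflation is smaller, but the direction of the problem is the same.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_experiments, n_per_group, n_pipelines = 5000, 20, 8
false_positives = 0

for _ in range(n_experiments):
    p_values = []
    for _ in range(n_pipelines):
        a = rng.normal(size=n_per_group)   # no true effect anywhere
        b = rng.normal(size=n_per_group)
        p_values.append(ttest_ind(a, b).pvalue)
    if min(p_values) < 0.05:               # report whichever option "worked"
        false_positives += 1

print(f"False-positive rate across {n_pipelines} pipelines: "
      f"{false_positives / n_experiments:.2f}")   # ~0.34 rather than the nominal 0.05
```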

Solution in brief: Pre-registration
of research protocols so that readers can distinguish hypothesis testing from
hypothesis generation, and thus confirmation from exploration.

Solution in detail: By
pre-specifying our hypotheses and analysis protocol we protect the outcome of
experiments from our own bias. It’s a delusion to pretend that we aren’t
biased, that each of us is somehow a paragon of objectivity and integrity. That
is self-serving nonsense. To incentivize pre-registration, all journals should
offer pre-registered article formats, such as Registered Reports at Cortex. This includes prominent journals like Nature and Science, which have a vital role to play
in driving better science. At a minimum, fMRI researchers should be encouraged to pre-register their designs on the Open Science Framework. It’s not hard to do.
Here’s an fMRI
pre-registration from our group.

Arguments for pre-registration should not be seen as arguments against exploration in
science – instead they are a call for researchers to care more about the
distinction between hypothesis testing (confirmation) and hypothesis generation
(exploration). And to those critics who object to pre-registration, please
don’t try to tell me that fMRI is necessarily “exploratory” and “observational”
and that “science needs to be free, dude” while in the same breath submitting
papers that state hypotheses or present p values. You can't have it both ways.

Problem 4: Pressure to publish. In
our increasingly chickens-go-in-pies-come-out culture of academia,
“productivity” is crucial. What exactly that means or why it should be
important in science isn’t clear – far less proven. Peter Higgs made one of the
most important discoveries in physics yet would have been marked as unproductive and sacked in the current system.
As long as we value the quantity of science that academics produce we will
necessarily devalue quality. It’s a see-saw. This problem is compounded in fMRI
because of the problems above: it’s expensive, the studies are underpowered,
and researchers face enormous pressure to convert experiments into positive,
publishable results. This can only encourage questionable practices and fraud.

Solution in brief: Stop judging the merit of science by the journal brand, and stop equating the quantity of papers with the quality of the science.

Problem 5: Lack of data sharing.
fMRI research is shrouded in secrecy. Data sharing is unusual, and the rare
cases where it does happen are often made useless by researchers carelessly
dumping raw data without any guidance notes or consideration of readers.
Sharing of data is critical to safeguard research integrity – failure to share
makes it easier to get away with fraud.

Solution in brief: Share and we
all benefit. Any journal that publishes fMRI should mandate the sharing of raw
data, processed data, analysis scripts, and guidance notes. Every grant agency
that funds fMRI studies should do likewise.

Solution in detail: Public data
sharing has manifold benefits. It discourages and helps unmask fraud, it
encourages researchers to take greater care in their analyses and conclusions,
and it allows for fine-grained meta-analysis. So why isn’t it already standard
practice? One reason is that we’re simply too lazy. We write sloppy analysis
scripts that we’d be embarrassed for our friends to see (let alone strangers);
we don’t keep good records of the analyses we’ve done (why bother when the goal
is p<.05?); we whine about the extra work involved in making our
analyses transparent and repeatable by others. Well, diddums, and fuck us – we
need to do better.

Another objection is the fear that others will “steal”
our data, publishing it without authorization and benefiting from our hard
work. This is disingenuous and tinged by dickishness. Is your data really a
matter of national security? Oh, sorry, did I forget how important you are? My
bad.

It pays to remember that data can be cited in exactly
the same way papers can – once in the public domain others can cite your data
and you can cite theirs. Funnily enough, we already have a system in science
for using the work of others while still giving them credit. Yet the vigour with which some people object to data sharing for
fear of having their soul stolen would have you think that the concept of
“citation” is a radical idea.

To help motivate data sharing, journals should mandate
sharing of raw data, and crucially, processed data and analysis scripts,
together with basic guidance notes on how to repeat analyses. It’s not enough
just to share the raw MR images – the Journal of Cognitive Neuroscience tried
that some years ago and it fell flat. Giving someone the raw data alone is like
handing them a few lumps of marble and expecting them to recreate Michelangelo’s
David.

---

What happens when you add all of these problems
together? Bad practice. It begins with questionable research practices such as p-hacking
and HARKing. It ends in fraud, not necessarily by moustache-twirling villains,
but by desperate young scientists who give up on truth. Journals and funding
agencies add to the problem by failing to create the incentives for best practice.

Let me finish by saying that I feel enormously sorry for anyone whose lab has been
struck by fraud. It's the ultimate betrayal of trust and loss of purpose. If it
ever happens to my lab, I will know that yes the fraudster is of course
responsible for their actions and is accountable. But I will also know that the
fMRI research environment is a damp unlit bathroom, and fraud is just an
aggressive form of mould.

Saturday, 16 November 2013

Earlier this year I
committed my research group to pre-registering all studies in our recent BBSRC
grant, which includes fMRI, TMS and TMS-fMRI studies of human cognitive
control. We will also publicly share our raw data and analysis scripts,
consistent with the principles of open science. As part of
this commitment I’m glad to report that we have just published our first pre-registered
study protocol at the Open Science Framework.

For those unfamiliar with
study pre-registration, the rationale is simply this: that to prevent different
forms of human bias creeping into hypothesis-testing we need to decide before starting our research what our hypotheses are and how we plan to test them. The best way to
achieve this is to publicly state the research questions, hypotheses, outcome
measures, and planned analyses in advance, accepting that anything we add or
change after inspecting our data is by
definition exploratory rather than pre-planned.

To many scientists (and
non-scientists) this may seem like the bleeding obvious, but the truth is
that the life sciences are suffering a crisis in which research that is purely
exploratory and non-hypothesis-driven masquerades as hypothetico-deductive.
That’s not to say that confirmatory (hypothesis-driven) research is necessarily
worth any more than exploratory (non-hypothesis driven) research. The point is
that we need to be able to distinguish one from the other, otherwise we build a false certainty in the theories we produce. Psychology and cognitive
neuroscience are woeful at making this distinction clear, in part because they
ascribe such a low priority to purely exploratory research.

Pre-registration helps
solve a number of specific problems inherent in our publishing culture,
including p-hacking
(mining data covertly for statistical significance) and HARKing
(reinventing hypotheses to predict unexpected results). These practices are common
in psychology because it is difficult to publish anything in ‘top journals’
where the main outcome is p > .05
or isn’t based on a clear hypothesis.

Evidence of such
practices can be found in
the literature and all around us. Just last week at the Society for
Neuroscience conference in San Diego, I had at least three conversations where
presenters at posters would say something like: “Look at this cool effect. We
tested 8 subjects and it looked interesting so we added another 8 and it became
significant”. Violation of stopping rules is just one example of how we think
like Bayesians while being tied to frequentist statistical
methods that don’t allow us to do so. This bad marriage between thought and
action endangers our ability to draw unbiased inferences and, without appropriate Type I
correction, elevates the rate of false discoveries.
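
If you doubt that this kind of data peeking matters, here is a toy simulation – made-up numbers, not any real dataset – of the "test 8, add 8 more if it looks interesting" strategy when there is no true effect at all:

```python
# Optional stopping under a true null: peek at n = 8 per group, and only
# collect 8 more per group if the first look is "promising". The final
# false-positive rate creeps above the nominal 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_sims, false_positives = 10000, 0

for _ in range(n_sims):
    a, b = rng.normal(size=8), rng.normal(size=8)   # first 8 per group, no true effect
    p = ttest_ind(a, b).pvalue
    if 0.05 <= p < 0.20:                            # "looks interesting" – add 8 more each
        a = np.concatenate([a, rng.normal(size=8)])
        b = np.concatenate([b, rng.normal(size=8)])
        p = ttest_ind(a, b).pvalue
    if p < 0.05:
        false_positives += 1

print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")
```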

In May, the journal Cortex launched a new
format of article that attempts to solve these problems by incentivising pre-registration.
Unlike conventional publishing models, Registered Reports are peer reviewed
before authors conduct their experiments and the journal offers provisional
acceptance of final papers based solely on the proposed protocol. The model at Cortex not only prevents p-hacking and
HARKing – it also solves problems caused by low statistical
power, lack of data transparency, and publication bias. Similar
initiatives have been launched or approved by several other journals, including
Perspectives on Psychological Science,
Attention, Perception, & Psychophysics,
and Experimental Psychology. I’m glad
to say that 10 other journals are currently considering similar formats, and so
far no journal to my knowledge has decided against offering pre-registration.

In June, I wrote an open
letter to the Guardian with Marcus
Munafò and >80 of our colleagues who sit on editorial boards. Together we
called for all journals in the life sciences to offer pre-registered article
formats. The response to the article was overall
neutral or positive, but as expected not
everyone agreed. One of the most striking features of the negative
responses to pre-registration was how the critics targeted a version of pre-registration
we did not propose. For instance, some felt that the Cortex model would prevent publication of serendipitous findings or
exploratory analyses (it doesn't), that authors would be “locked” into
publishing with Cortex (they aren’t),
or that the model we proposed was suggested as mandatory or universal (it is
explicitly neither). I would ask those who responded negatively to reconsider the
details of the Cortex initiative
because we don’t disagree nearly as much as it seems. In regular seminars
I give on Registered Reports at Cortex
I include a 19-point list of FAQs and responses to these points, which you can
read here.
I will regularly update this link as new FAQs are added.

I believe we are in the early stages of a revolution in the way we do research – one not driven
by pre-registration per se, and
certainly not by me, but by the combination of converging future-oriented
approaches, including emphasis on replication (and replicability), open
science, open access publishing, and pre-registration. The pace of evolution in
scientific practices has shifted up a gear. Clause 35 of the revised
Declaration of Helsinki now explicitly requires some form of study
pre-registration for medical research involving human participants. Although
much work in psychology and cognitive neuroscience isn’t classed as ‘medical’,
many of the major journals that publish basic research also ask authors to adhere to the Declaration, including
the Journal
of Neuroscience, Cerebral
Cortex, and Psychological
Science.

The revised Declaration of
Helsinki has caused some concern among psychologists, and I should make it
clear that those of us promoting pre-registration as a new option for journals had
no role in formulating these revised ethical guidelines. However we shouldn’t necessarily
see them as a problem. There are many simple and non-bureaucratic ways to
pre-register research (such as the OSF),
even if the journal-based route is the only one that rewards authors with advance
publication.

One valid
point that has been made in this debate is that those of us who are promoting pre-registration should practice
what we preach, even when there is no journal option currently available (and
for me there isn’t another option because Cortex
– where I am section editor – is so far the only cognitive neuroscience journal
offering pre-registered articles). Some researchers, such as Marcus Munafò,
already pre-register on a routine basis and have done for some time. For my
group it is a newer venture, and here is our first attempt. Our protocol describes an fMRI experiment of response
inhibition and action updating that forms the jumping off point for several upcoming
studies involving TMS and concurrent TMS-fMRI. We are registering this protocol
prior to data collection. All comments and criticisms are welcome.

Writing a protocol for an
fMRI experiment was challenging because it required us to nail down in advance our decisions and
contingencies at all stages of the analysis. The sheer number of seemingly
arbitrary decisions also reinforced my belief that many, if not most, fMRI
studies are contaminated by bias (whether conscious or unconscious) and
undisclosed analytic flexibility. I found pre-registration
rewarding because it helped us refine exactly how we would go about answering our research questions. There is much to be said for taking the time to
prepare science carefully, and time spent now will be time saved when it comes
to the analysis phase.

Most of the work in our
first pre-registration was undertaken by two extremely talented young
scientists in my team: PhD student Leah
Maizey and post-doctoral researcher Chris
Allen. Leah and Chris deserve much praise for having the courage and conviction
to take on this initiative while many of our senior colleagues 'wait and see'.

Pre-registration is now a
normal part of the culture in my lab and I hope you’ll consider making it a
part of yours too. Embracing the hypothetico-deductive method helps protect the outcome of hypothesis-driven research from our inherent weaknesses as human practitioners. It also prompts us to consider deeper questions. As a community we need to reflect on what sort of scientific culture
we want future generations to inherit. And when we look at the current status quo of questionable research practices,
it leads us to ask one simple question: Who are we serving, us or them?

About Me

I'm a psychologist and neuroscientist at the School of Psychology, Cardiff University. I created this blog after taking part in a debate about science journalism at the Royal Institution in March 2012.
The aim of my blog is to give you some insights from the trenches of science. I'll talk about a range of science-related issues and may even give up a trade secret or two.
Stay tuned!
You can follow me on Twitter: @chrisdc77