Wednesday, 10 April 2013

Scientific publishing as it was meant to be

Last October I joined the editorial board of Cortex, and my first
order of business was to propose a new format of article called a Registered Report.
The essence of this new article type is that experimental methods and proposed
analyses are pre-registered and peer reviewed before any data are collected. This
publication model has the potential to cure a host of bad practices in science.

In November the publisher approved the new
article format and I’m delighted to announce that Registered Reports
will officially launch on May 1st. I’m especially proud that Cortex will
become the first journal in the world to adopt this publishing mechanism.

For those encountering this initiative for the
first time, here are some links to background material:

Why should we want to review papers before
data collection? The reason is simple: as reviewers and editors we are too
easily biased by the appearance of data. Rather than valuing innovative
hypotheses or careful procedures, we too often find ourselves applauding
“impressive results” and dismissing null effects as boring. For most journals, issues such
as statistical power and technical rigour are outshone by the novelty and
originality of findings.

What this does is furnish our environment with
toxic incentives. When I spoke at the Spot On conference last year, I began by
asking the audience: What is the one aspect of a scientific experiment that a
scientist should never be pressured to control? After a pause – as though it
might be a trick question – one audience member answered: the results. Correct!
But what is the one aspect of a scientific experiment that is crucial
for publishing in a high-ranking journal? Err, same answer. Novel,
ground-breaking results.

The fact that we force scientists to touch the
untouchable is unworthy of a profession that prides itself on behaving
rationally. As the character John Milton says in The Devil’s Advocate,
it’s the goof of all time. Somehow we've created a game in which the rules are
set in opposition to the very aims of science.

With little chance of detecting true effects, an underpowered experiment
reduces to an act of gambling. Driven by the need to publish,
researchers inevitably mine such datasets for statistically significant
results. No stone is left unturned: we p-hack, cherry-pick, and even reinvent
study hypotheses to "predict" unexpected results. Strange phenomena begin
appearing in the literature that can only be explained by such practices –
poor repeatability, an excess of studies that support their stated hypotheses,
and a preponderance of articles in which the obtained p values fall just below
the significance threshold. More worryingly, a recent study by John et al.
shows that these behaviours are not the actions of a naughty minority – they
are the norm.
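The clustering of p values just below the significance threshold can be made concrete with a toy Monte Carlo simulation. The sketch below is my own illustration, not anything from the post or from John et al.; the sample sizes and stopping rule are arbitrary assumptions. It shows how "optional stopping" – peeking at the data and adding subjects until p < .05 – inflates the false-positive rate well beyond the nominal 5% even when the true effect is exactly zero.

```python
# Toy simulation of optional stopping under a true null effect.
# All parameters (start=20, step=10, cap=100) are illustrative assumptions.
import math
import random
import statistics

def one_sample_p(sample):
    """Two-sided one-sample test of mean 0, using a normal approximation
    to the t distribution (adequate for n >= 20)."""
    n = len(sample)
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def peeking_experiment(start=20, step=10, cap=100, alpha=0.05):
    """Draw `start` observations from a NULL effect, then keep adding
    `step` more and re-testing until p < alpha or `cap` subjects.
    Returns True if a 'significant' result was (spuriously) found."""
    data = [random.gauss(0, 1) for _ in range(start)]
    while True:
        if one_sample_p(data) < alpha:
            return True   # false positive: the true effect is zero
        if len(data) >= cap:
            return False
        data.extend(random.gauss(0, 1) for _ in range(step))

random.seed(1)
runs = 2000
rate = sum(peeking_experiment() for _ in range(runs)) / runs
print(f"false-positive rate with peeking: {rate:.1%} (nominal alpha = 5%)")
```

Running this gives a false-positive rate several times the nominal 5%, and the "significant" results it produces are exactly the kind that land just under the threshold.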

None of this even remotely resembles the way
we teach science in schools or undergraduate courses, or the way we dress it up
for the public. The disconnect between what we teach and what we practice is so
vast as to be overwhelming.

Registered Reports will help eliminate
these bad incentives by making the results almost irrelevant in reaching
editorial decisions. The philosophy of this approach is as old as the
scientific method itself: If our aim is to advance knowledge then editorial
decisions must be based on the rigour of the experimental design and likely
replicability of the findings – and never on how the results looked in
the end.

We know that other journals are monitoring Cortex to gauge
the success of Registered
Reports. Will the format be popular with authors? Will peer reviewers be
engaged and motivated? Will the published articles be influential? This success
depends on you. We'll need you to submit your best ideas to Cortex –
well thought-out proposals that address important questions –
and, crucially, before you’ve collected the data. We need your support to help
steer scientific publishing toward a better future.

For my part, I’m hugely excited about Registered Reports
because the format offers hope that science can evolve; that we can be self-critical,
open-minded, and determined to improve our own practices. If Registered
Reports succeeds then together we can help reinvent publishing as it was
meant to be: rewarding the act of discovery rather than the art of performance.

3 comments:

Hi, I published a little rant in BJP last year basically on how journals aren't prepared to publish the reality of data if it doesn't fit with a preconceived schema of how a successful experiment will turn out. I think your scheme is an excellent idea - disciplining not only scientists to lift our game, but also editorial boards to accept the real results of registered experiments, even if "outcome knowledge" (i.e., hindsight) invalidates the original/registered experimental plan midway. (Indeed, that is my only concern: because outcome knowledge so often changes how we view our experimental design, will scientists find themselves under pressure to massage their results to fit better with how things were originally planned and registered? Will it just shunt the dishonesty to another level? I'm eager to see how it goes!)

Also, "post-hoc storytelling" - reinventing hypotheses to "predict" unexpected results - is very similar to "inference to the best explanation", which has been at the very heart of much of science. Without it, "The Origin of Species" would have no unifying argument.

Perhaps the real problem is not with post-rationalised hypotheses, but with a lack of discipline about what counts as a good explanation of the results. David Deutsch has written and spoken about the dangers of explanationless science, and about the importance of good explanations and of good criteria for what counts as one. Even the arch-empiricist Karl Popper spent a great deal of time in his writing detailing what counts as an informative hypothesis/theory and what does not. Perhaps it isn't the post-hoc hypothesising that is the matter, but the quality of the hypotheses as explanations, and the lack of consensus on what logical criteria scientists should use in judging an explanatory hypothesis a good one. Back to Popper: there is a world of difference between an ad hoc "saving the phenomena" type of post-hoc explanation and one that increases explanatory and informative content.

Inference to the best explanation in fact lurks underneath much of statistics. Many tests rely on maximum likelihood, which is actually a probabilistic form of best explanation, and Bayesian statistics rests on similar assumptions. Without inference to the best explanation (post-hoc explanations) science would grind to a halt.

The problem with reinventing the hypothesis of a study is that the study was not conducted again with the new hypothesis. It's fine to look at data and interpret it in different ways, but that's the start of the scientific process not the end.

About Me

I'm a psychologist and neuroscientist at the School of Psychology, Cardiff University. I created this blog after taking part in a debate about science journalism at the Royal Institution in March 2012.
The aim of my blog is to give you some insights from the trenches of science. I'll talk about a range of science-related issues and may even give up a trade secret or two.
Stay tuned!
You can follow me on Twitter: @chrisdc77