Should I publish negative results or does this ruin my career in science?

Young scientists often produce negative results: all experiments were done correctly, but there was no difference between test and control. They get conflicting advice from supervisors and ethicists. Some say that publishing negative results is a waste of resources and ruins careers. Others say that ‘not publishing negative results is unethical’ and fuels the reproducibility crisis. What should young scientists do in such a situation?

Negative results are frustrating and demotivating

In my experience as a supervisor, the biggest problem with negative findings is their demotivating effect on young scientists. During a normal PhD, most early-stage researchers experience an emotional dip anyway – typically after about two years, i.e. halfway through a standard PhD contract. It is a recurring pattern I have seen in many, many PhD careers. The dip is substantially worse when students are very ambitious and work particularly hard to accomplish a great project but do not obtain any statistically significant results.

Stopping a long-term project may hurt and/or relieve

If a costly project does not deliver promising results, the supervisor may conclude that it has to be abandoned without publishing the data. In many cases, such a decision will push the young researchers into a maelstrom of dramatic feelings. They may perceive themselves as failures and feel that they are treated unfairly. In addition, they are probably worried about their future. On the other hand, the decision may also come as a relief, because the students may have felt for a long time that their project was going nowhere. These mixed feelings do not make the situation easier. It is the responsibility of the supervisor to develop a new project and to support the PhD student emotionally in this difficult situation.

“Publishing negative results is a waste of resources”

Many scientists develop the attitude that following up on and publishing negative results is a waste of resources. There is some truth in this, because the costs may be high compared to the final output (impact factor of the journal, citations, etc.) reached at the end.

Negative results are often published with a low impact factor because editors hate boring data

Editors want the exciting stuff: new mechanisms, unexpected findings and dramatic effects (“the paralyzed could walk again”) that increase citations, clicks, shares and press coverage. Unfortunately, negative results are often rather boring. As a result, it is often difficult to publish them in an appropriate journal, and after several rejections a negative study ends up in a journal with a much lower impact factor than expected at the start of the experiments. Consistent with this, negative studies are considered less prestigious if and when they do come out.

Unexpected negative findings can be very interesting – but are often costly

High-impact journals may be interested in negative studies when they overturn a long-held paradigm or when a new method shows that most previous studies are flawed. Nevertheless, following up on a negative story is considered a risk, because the research group may invest substantial resources (time, money, energy) without improving the quality of the paper or the final impact factor. Reviewers tend to ask for multiple additional controls to make sure that the negative results are not merely the effect of technical mistakes. The opportunity costs are therefore high (“Doing this study means you are not following up on potentially more promising data”).

“Publishing negative results ruins your career”

Many supervisors are convinced that publishing negative results will ruin the career of their PhD students as well as their own: they will spend a lot of resources on the wrong project, publish with a low impact factor, and consequently get less future funding. Young scientists may get the feeling that investing in a negative study will dramatically reduce their chances on the academic job market and may even force them to give up and pursue a different career. You may correctly argue that high-impact publications are not absolutely necessary to become a professor and might not necessarily increase job opportunities in industry or the public sector (read more here: Do I need Nature or Science papers for a successful career in science?). However, impact factors are still broadly used to evaluate the performance of individual scientists, departments and institutions (read more here: 10 simple strategies to increase the impact factor of your publication). Thus, a responsible supervisor will always aim for journals with a high impact factor and will tend to abandon projects without exciting results. Unfortunately, this behavior – as understandable as it is – may be one of the biggest problems in science:

“Not publishing negative results is unethical”

Many colleagues do not find it a big deal to abandon projects without promising results. “Fail faster” is a common credo: screen for dramatic effects (for example of a treatment, a drug or a genetic intervention) and leave the less dramatic results untouched. The big problem is that this knowledge is lost. All these experiments disappear, and many other scientists may repeat the same or similar experiments because the results are not documented and not publicly available. As a consequence, a lot of time, effort and taxpayers’ money is wasted on unnecessary repetitions because negative or less-than-dramatic findings go unreported.

If you care about scientific progress, publishing negative results is a must. Negative studies may challenge existing paradigms and enhance progress by stopping further investment in scientifically barren topics, reducing the use of animals in experiments, and focusing research on more fruitful areas (Boorman et al, 2015).

Publication bias and the reproducibility crisis

In a Nature survey of 1,576 researchers, more than 70% had tried and failed to reproduce another scientist’s experiments (Baker, 2016). This so-called reproducibility crisis has many causes (see reviews in Jarvis & Williams, 2016; Begley & Ioannidis, 2015). One important cause is positive-results bias, a special form of publication bias. Positive-results bias is just a fancy term for the tendency described above: authors are more likely to submit, and editors more likely to accept, positive results than negative or inconclusive results (Sackett, 1979). In short, publishing only the positive results and filing away the negative ones produces a skewed view of reality, leads to unnecessary repeats of experiments already done, wastes a lot of taxpayers’ and industry money, and may result in detrimental therapies and many frustrated scientists, among other unwanted outcomes. Irreproducibility may even contribute to a growing skepticism regarding the integrity and relevance of all biomedical research (Jarvis & Williams, 2016).

It is not the task of young scientists to publish negative results

After many years of struggling with this question, I have come to the conclusion that it is *not* the task of young scientists to publish negative results – at least not in the current system. They still have to build their careers, and – as explained above – publishing negative results may have quite negative effects on those careers.

Controversial advice: leave it to the old guys!

The arguments listed above put every scientist in a difficult situation. Young scientists seem to have to choose between their career and ethical behavior. Supervisors have to give good advice or leave the young scientists alone with this decision. However, there are currently no widely accepted and widely known procedures for handling negative findings that are not used for a publication (find some suggestions below). Therefore, the best advice for now is:

Do not publish negative results as a young scientist. Leave it to the senior scientists who already have a successful career and can afford to publish negative findings for the sake of good science!

It is very important to note that I do *not* suggest selective reporting. Selective reporting is a special case of reporting bias: the incomplete publication of the analyses performed in a study, leading to the over- or underestimation of treatment effects or harms. Selective reporting is scientific misconduct. In contrast, I advise young scientists not to waste their time, grant money and energy on studies with negative results.

In the meantime, let us work on better procedures and rules to avoid “scientific waste” and substantially reduce unnecessary repeats of the same unpublished experiments.

How to behave ethically and handle negative results correctly?

My advice to leave the publication of negative results to the old guys (who already have a successful career) comes at a price: we accept the publication bias for the sake of young scientists’ careers, and this feeds the reproducibility crisis. Nor can the old guys be trusted to take on the responsibility of publishing negative results, because they are not rewarded for it – or worse, they are punished by institutions and funding agencies that focus strongly on scientific output, mostly quantified in impact factors, citations and grant money. In other words: the scientific community gets what it incentivizes. There are many ideas for improving the current science system (see for example Begley & Ioannidis, 2015), but most initiatives are still at an embryonic stage.

To my knowledge, there are no internationally accepted rules on how to handle negative results. A non-representative survey among 10 young professors revealed that only one had a rough idea of where to store unpublished data that may be of interest to other scientists. Below are three suggestions on how to handle the problem of negative results at the level of the scientific community:

Three strategies to improve science

1. Registration of all studies *before* they start

Registration of clinical trials is a widely recognized tool for facilitating complete public reporting (Zarin & Tse, 2008; Williams et al, 2010) and for countering conflicts of interest, for example those of pharmaceutical companies. Registering every type of study (pre-clinical, observational, etc.) would dramatically increase the administrative load for researchers, institutions and funders. However, it seems to be a necessary step to cure science of positive-results bias.

For clinical trials, there is a list of international registries in the Cochrane handbook and in Williams et al, 2010 (though note that some of these registries may no longer exist). Some of these registries already allow the registration of non-clinical studies (observational, pre-clinical, etc.). For the social sciences and other types of research, the Open Science Framework (https://osf.io) offers the possibility to pre-register studies. It integrates with GitHub, so even manuscript development, textual changes, data entry and data analyses can be tracked throughout the process.

2. Saving inconclusive data in publicly accessible repositories to make them available to other scientists

Since publishing inconclusive data may be tedious and is considered “a waste of resources” (see above), it should be very easy – and standard procedure – to save the data in publicly accessible repositories. To guarantee independence, these should be financed by the international scientific community via scientific societies and/or national and international funders. Some repositories already exist – see the list of repositories on the Nature website. Another example is arXiv.org, which is a repository for documents and papers rather than data, including unaccepted or unsubmitted manuscripts, PowerPoint presentations, etc. Unfortunately, repositories are often costly and researchers are not incentivized to use them.

3. Funders must oblige scientists to pre-register their studies and to make all data available (for example in a publicly accessible repository or in a journal of the funder)

Finally, funders should accept their responsibility and provide a standardized procedure to pre-register all funded studies, obliging researchers to publish all their negative findings or deposit them in a public repository. Some funders have already started their own journals to publish such findings. A possible incentive would be to freeze the last tranche of the funds (e.g. 25%) until the data are made publicly available. Without doubt, regulations must remain flexible, because some studies are published many years after the funding period has ended.

Conclusions

Accept your responsibility and publish negative findings for the sake of science, even at a lower impact factor – but do not force young scientists to do this.

All scientists:

Make negative findings known in reviews (which can be published with a high impact factor) and in scientific talks.

Push your institution, your scientific society and/or your funders to provide a public repository for negative results and to make it easy to share negative results publicly without the hassle of a full publication.

Save your negative results or inconclusive data in a public repository to make them available.

Help to improve the system to incentivize the publication of negative findings and replication studies – for example, talk to funders or to higher-education and health politicians; many listen to scientists who want to improve current procedures.

If you have any other idea to improve the current system please leave a comment below.
