When I am finally assassinated by an axe-wielding electrosensitive homeopathic anti-vaccine campaigner – and that day surely cannot be far off now – I should like to be remembered, primarily, for my childishness and immaturity. Occasionally, however, I like to write about serious issues. And I don’t just mean the increase in mumps cases from 94 people in 1996 to 43,322 in 2005. No.

One thing we cover regularly in Bad Science is the way that only certain stories get media coverage. Scares about mercury fillings get double page spreads and Panorama documentaries; the follow-up research, suggesting they are safe, is ignored. Unpublished research on the dangers of MMR gets multiple headlines; published research suggesting it is safe somehow gets missed. This all seems quite normal to us now.

Strangely, the very same thing happens in the academic scientific literature, and you catch us right in the middle of doing almost nothing about it. Publication bias is the phenomenon where positive trials are more likely to get published than negative ones, and it can happen for a huge number of reasons, sinister and otherwise.

Major academic journals aren’t falling over themselves to publish studies about new drugs that don’t work. Likewise, researchers get round to writing up ground-breaking discoveries before diligently documenting the bland, negative findings, which sometimes sit forever in that third drawer down in the filing cabinet in the corridor that nobody uses any more. But it gets worse. If you do a trial for a drug company, they might – rarely – resort to the crude tactic of simply making you sit on negative results which they don’t like, and over the past few years there have been numerous systematic reviews showing that studies funded by the pharmaceutical industry are several times more likely to show favourable results than studies funded by independent sources. Most of this discrepancy will be down to cunning study design – asking the right questions for your drug – but some will be owing to Pinochet-style disappearings of unfavourable data.

And of course, just like in the mainstream media, profitable news can be puffed and inflated. Trials are often spread across many locations, so if the results are good, companies can publish different results, from different centres, at different times, in different journals. Suddenly there are lots of positive papers about their drug. Then, sometimes, results from different centres can be combined in different permutations, so the data from a single trial could get published in two different studies, twice over: more good news!

This kind of tomfoolery is hard to spot unless you are looking for it, and if you look hard you find more surprises. An elegant paper reviewing studies of the drug Ondansetron showed not just that patients were double and treble counted; more than that, when this double counting was removed from the data, the apparent efficacy of the drug went down. Apparently the patients who did better were more likely to be double counted. Interesting.
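The arithmetic behind that inflation is easy to demonstrate. Here is a minimal sketch, with invented numbers (not the actual figures from the Ondansetron review), of how double counting the patients who responded skews the apparent response rate upwards:

```python
# Hypothetical illustration: double-counting responders inflates apparent
# efficacy. All numbers are invented for the example.
responders = 30      # patients who improved on the drug in the single trial
non_responders = 70  # patients who did not improve

# True response rate from the underlying trial:
true_rate = responders / (responders + non_responders)  # 0.30

# Now suppose each responder's data appears in two publications,
# while non-responders are published only once:
counted_responders = responders * 2
apparent_rate = counted_responders / (counted_responders + non_responders)

print(f"true response rate:     {true_rate:.2f}")      # 0.30
print(f"apparent response rate: {apparent_rate:.2f}")  # 0.46
```

Strip out the duplicates and, just as the review found, the drug suddenly looks less effective.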

The first paper describing these shenanigans was in 1959. That’s 15 years before I was born. And there is a very simple and widely accepted solution: a compulsory international trials register. Give every trial a number, so that double counting is too embarrassingly obvious to bother with, so that trials can’t go missing in action, so that researchers can make sure they don’t needlessly duplicate, and much more. It’s not a wildly popular idea with drug companies.

Meanwhile the system is such a mess that almost nobody knows exactly what it is. The US has its own register, but only for US trials, and specifically not for clinical trials in the developing world (I leave you to imagine why companies might do their trials in Africa). The EU has a sort of register, but most people aren’t allowed to look at it, for some reason. The Medical Research Council has its own. Some companies have their own. Some research charities do too. The best register is probably Current Controlled Trials, and that’s a completely voluntary one set up by some academics a few years ago. I have a modest prize for the person with the longest list of different clinical trial registers.

And why is this news? Because people have been calling for a compulsory register for 20 years, and this month, after years of consulting, the World Health Organisation proudly announced a voluntary code, and a directory of other people’s directories of clinical trials. If it’s beyond the wit of humankind to make a compulsory register for all published trials, then we truly are lame.


Ben,
This is a very interesting point which I guess I hadn’t really thought about before. In 25 years of research I accumulated hundreds of files with experimental data in. The vast majority were negative results and, given that they weren’t the results of drug trials, it’s not surprising to me that no journal would consider publishing my “failed” experiments, so it never crossed my mind to even try. Thus all my published papers were “interesting” (at least to someone one hopes) results which disproved a null hypothesis. So to get back to your point, in every research field there must be a vast proportion of unpublished to published data. Maybe quite a lot of interesting stuff is hidden in all that “noise”?

I think from the academic side, things will only change if the value of publishing negative results is raised: an obvious way to do this is to submit to these journals, and help them raise their profiles.

This won’t solve the problem of companies not publishing, of course, but a change in atmosphere towards negative results might help.

Robert Carnegie said,

On trials, simply registering trials may not be sufficient. Let’s assume for the sake of argument that all drugs companies will do their level best to make their trial results look favourable, and running multiple trials on the same drug is both an effective way to do this, and, in fact, necessary good practice. Well, certainly there are well conducted trials and poorly conducted trials; that’s inevitable, too, apparently. So if a trial is not producing the result that the paymasters want, then they can probably arrange to shift it into the “poorly conducted” category, introduce obvious mistakes, get the guy running the experiment hooked on drink or drugs…

On the other hand… obviously if it’s possible to estimate beforehand whether a trial will produce a favourable result or not, then drugs companies will prefer to fund the favourable ones, and that isn’t particularly sinister, only commercial – they are in the business of taking our money, principally, it’s what they get up in the morning for. Keeping us alive is incidental.

Anyway… given that sponsored trials evidently are only part of the problem, how to solve the rest of it? Well, every university scientist wants to be published, whether the results are positive or negative. If you are published then you get tangible benefits. I worked as a computer technician in the Faculty of Law at Glasgow University and they asked me if I could see my way to publishing anything.

So it’s the journals that you need to work on – to establish a protocol so that they review and publish the less interesting work. Perhaps when a journal publishes something exciting, they should always take on responsibility for publishing the less exciting followups? That possibly would just create another bias, but maybe a less harmful one?

The problem with self-publication is that it is not permanent, and is also not peer reviewed. In other words, you could put any old rubbish up. Whilst peer review is far from perfect, it does provide some quality control.

On the free release of data, as a statistician I think that’s a great idea in principle: it means I could use other people’s data to find out some really boring stuff. There are going to be problems with people being possessive: I’ve been looking at data sets that have taken a couple of decades to collect, and I appreciate that it would be difficult for my collaborators to put these up on the web, and see them used to do what you’ve been wanting to do, but have never had the time. It’s difficult, and I’m not sure what the solution is. Perhaps if enough people put their data up on the web, it’ll become the norm.

John A said,

True. But speaking specifically about negative results, I’m not sure how much of a problem a lack of peer review is.

More generally, one way around the permanency/peer-review problem is to have a central site where all researchers submit stuff and to allow comments to be sent. This would allow researchers to discuss issues about the research in the same place it is published. Some sort of wiki set-up may be useful. In my mind this would be a good place for people to make points reviewers may have missed – I’m sure we’ve all read papers with glaring holes reviewers have missed…

Delster said,

one additional advantage to publishing negative results would be a reduction in time spent re-doing that which has already been done. After all if somebody has an idea for a research project it would be nice to be able to find out if it’s already been tried.

Obviously if a new technique / drug / approach is devised then repeat trials could be worth it.

Teek said,

Delster: “one additional advantage to publishing negative results would be a reduction in time spent re-doing that which has already been done. After all if somebody has an idea for a research project it would be nice to be able to find out if it’s already been tried.”

very, very good point. from my own (perhaps limited) experience ‘negative results’ often arise because of some biological reason, i.e. compound/protein/molecule X doesn’t have a chance in hell of treating condition Y because substance Z stops it from doing so, something that maybe laboratory A has already found out but lab B spends 10k and six months finding out for itself. if lab A publishes its findings, even tho it’s kinda negative (i.e. X does NOT treat Y), B wouldn’t have gone to the effort.

this brings us onto whether there are two types of negative results – Type I, where for instance a drug/treatment/process has no therapeutic effect where it was expected to do so (like my convoluted example, above), and Type II, where instead of/as well as a therapeutic effect, the drug/treatment/whatever produces negative consequences and/or deleterious side effects. currently journals do publish Type II negative results, because they are useful in warning others of unwanted consequences to potential drugs etc.

Type I needs more attention, as most of us here seem to be saying – perhaps in the kind of journals that specifically set out to publish negative (or rather ‘not positive’) results, like those in Bob O’H’s list.

Once a positive result is printed, null results about the same hypothesis are no longer uninteresting. At the least, they are sensitivity tests that indicate that minor changes in test design will reverse the results, and in some cases they can catch dumb one-in-a-hundred luck with the p-values.

In a healthy system, twenty people do a study, and only one of them finds that the variable in question is significant with p<0.05. That one publishes, then the other nineteen publish articles building upon the pioneering work of the guy who got lucky, then a metastudy is written finding that the preponderance of evidence indicates against rejecting the null. Since you’re no stranger to the metastudies, I imagine you could give a few examples of a story like this yourself.
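That “one in twenty” arithmetic can be checked with a quick simulation – a rough sketch, with all numbers invented for illustration, assuming twenty two-arm trials of a completely ineffective drug, each analysed with a normal-approximation test of the difference in response rates:

```python
# Simulate twenty null trials: with a significance threshold of 0.05,
# roughly one in twenty should reach "significance" by chance alone.
import math
import random

random.seed(0)

def one_trial(n=100):
    """Two-arm trial of a drug with no real effect (both arms respond
    at 50%). Returns an approximate two-sided p-value for the
    difference in response rates."""
    treat = sum(random.random() < 0.5 for _ in range(n))
    control = sum(random.random() < 0.5 for _ in range(n))
    p1, p2 = treat / n, control / n
    pooled = (treat + control) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return 1.0
    z = abs(p1 - p2) / se
    # two-sided p-value via the normal CDF (expressed with erf)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

results = [one_trial() for _ in range(20)]
significant = sum(p < 0.05 for p in results)
print(f"{significant} of 20 null trials reached p < 0.05")
```

Run it a few times with different seeds and the count hovers around one – which is exactly the paper that gets published while the other nineteen sit in the filing cabinet.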

It’s a slow and potentially inefficient system, and the positive result always gets to go first, and people who don’t know how to work the “show works that cited this article” button on their lit database think that the positive result is unrefuted, but in the long run it seems to generally work OK.

FlammableFlower said,

Negative results would be bloody helpful to researchers. As a synthetic organic chemist you spend a lot of time trying to make X from Y, and even though it looks easy on paper, it goes and sticks two fingers up at you. Occasionally you see an angry footnote or offhand comment in an article. To see a paper on the lines of “we tried this, it didn’t work, so if you want to do the same, think of another way…” is very rare. I did like one bloke’s take on this problem:

stephenh said,

Bookfeller said,

Please don’t deride groups for failing to get their act together. To say that if we can’t set up a single register of clinical trials then “we truly are lame”, is to imply that social and political problems are “easy-peasy”, so failure in these fields should be a matter for shame.

But in the real world, politics is not easy. It is actually much more difficult than rocket science. Scientists often make the mistake of thinking that because science is technically difficult, all other problems are relatively easy. With a doctorate in physics I used to make that mistake myself. It made me feel ever so clever to assume that people who couldn’t solve political problems were stupid jerks. Then I started trying to work on such problems and found they really aren’t at all easy.

I enjoy the “bad science” column, and I think some corrective to the sloppy use of science in advertising and journalism is very much needed. However, I have a sneaking worry that the impact on non-scientists may be negative. It probably makes them feel they truly are lame, and that’s not a nice feeling.

Can we support people who want to think straight without making them feel lame?

Agema said,

I’d expect a homeopathist would need an axe. It’s not like they’d be much use as a poisoner with their dilutions.

The current system of science needs publications, and publications in good journals, to justify research grants, with no small pressure involved. Nor is it a trivial job to write a paper. To produce one that had minimal impact in the Journal Of Near-Irrelevance is not a good use of the effort that paper writing can involve. There’s also a certain amount of ego: no one likes to admit they went through 20 different things to find something which didn’t work. It’s close to saying: “Sorry, I’m s**t, and here’s a record of my failure.”

It’s not like the empirical truth is not emerging. It’s more like what you see is the refined truth rather than the crude mass of data. And I agree science would be better if negative results could be readily presented; it would probably save lots of waste, and give you far more to show that you have actually spent the last 3 years working. But under the current system, don’t hold your breath.

Raging Potato said,

With regard to constructing online databases of negative experimental data, Cornell University has established a browsable, searchable database of real experimental data. Users can search, analyse, download and submit data from neurophysiological experiments. The idea is that somebody else might be able to use your data to examine a hypothesis, saving them the expense and time of actually doing the experiment themselves, and thus conforming to two of the three Rs of ethical experimentation with animals (reduction, refinement, replacement).

It’s still at the developmental stage, and I haven’t got it to work for me, but the principle is sound; often (particularly in neurophysiology) the difficult part is actually getting the data. There are many times in which it would be relatively simple to extract useful information from experiments that examined something else.

If you’re interested, it’s at neurodatabase.org (I read about it in Nature Neuroscience).

Delster said,

I think what would be better than a database of negative results would be a database of all results.

i.e. say I have the idea of checking if controlled ragweed pollen extract exposure would actually reduce allergy symptoms. I can then go away and check the database using those as keywords.

If this is done I would find that someone has actually done this, but using grass pollen as opposed to ragweed. (I know… I was one of the guinea pigs.)

I could then study their results to see how their study was designed, and hence design mine to cover any gaps I might feel warranted attention, or pursue their results in another direction.

This way research would actually complement prior studies and build upon results rather than being another stand alone project.

Think how this would apply in the world of cancer research where you have lord knows how many individual charities supporting large numbers of studies into various cancer types. I’d be willing to bet that 50% of the money that actually gets into research is wasted on double efforts.

Raging Potato said,

The kind of information you desire is contained within the normal scientific literature; you go to PubMed, type in your keywords (ragweed AND allergy) and you get links to the relevant publications. Release of the ‘raw data’ from this kind of study would not be terribly beneficial to the scientific community because the interpretation of such work is very straightforward; you expose the patient, measure the response (i.e. bronchial constriction/wheal and flare/subjective itchiness scores or whatever). There’s not much new data that can be gleaned from such a database (as opposed to reanalysis of old data).

On the other hand, more technical studies (i.e. recording the behaviour of parts of the brain in response to a stimulus or whatever) contain vast amounts of information, normally as a result of simultaneous recordings of various parameters (e.g. heart rate + blood pressure + nerve activity). The results from a particular study may focus on one subset of the data (i.e. what happens to nerve activity in the 30s after you do procedure A), when such a subset only represents perhaps 10% of the whole dataset. In these instances, there may be other researchers interested in the relationship between one parameter and another at basal levels (e.g. is nerve activity modulated by heart rate?). Public access to such results should be encouraged and could be greatly beneficial.

I’d be willing to bet that much less than 50% of the money that gets into research is wasted on double efforts; I know this because I’m a researcher and I’ve had to run the grant gauntlet. Research that tries to reproduce previous results is simply not funded. There are more important things to fund, and not enough money. The people that decide where the cash goes are experts in the field, recruited by the grant-funding bodies for said expertise. Only about 10–15% of applications are successful, and the people that write the successful grants are the people with the track record of high-impact novel findings (with the odd exception). That’s not to say that nobody tries to replicate previous findings – existing models are frequently used to develop new techniques, and sometimes contentious findings need to be reviewed by other laboratories before they’re accepted as fact. This is definitely a minority, however.

Delster said,

I know that PubMed contains this kind of information on trials. However, it does not contain information on all trials conducted – only ones that have been published, and probably not all of those.

What we’re talking about is combining both the positive result trials and the negative result trials into a database that would provide a better way of cross-checking for previous investigations into whatever subject.

Even if only the authors, paper or trial name and keywords were entered into this database, it would be an excellent research tool.

To give an example: say I’ve done a trial and written it up, but the trial had a negative result. I may well choose not to publish to the general science community, but just pass my results along to those paying or who had to be reported to. What I could then do in this instance is enter the result into the hypothetical database.

If somebody then requested the results based upon their search, I could direct them to the study results or send along the abstract etc.

As for your statement that the results of a study of this kind would not be advantageous, I’d have to disagree.

The study in question was not simple exposure to pollen. What was done was an extract of the pollen was made and administered to the patients on a daily basis over a period of 18 months. This was done through a solution held under the tongue for 60 seconds, then swallowed. Unfortunately I was only able to do six months of this trial due to severe injuries, and I don’t know what the end results were.

Personal experience seems to indicate a good reduction in symptoms for around 4–5 years from my partial treatment (anecdotal, I know). However, this study would have produced data showing either a positive result (which might indicate a good delivery method to further researchers and encourage further trials with other pollen types) or a negative result suggesting the opposite.

Either way, more knowledge = good, less = bad

Robert Carnegie said,

Agema: I think the point is not that research may be done badly and still should be published, nor that a scientist could lose face by publishing or releasing a series of scientifically valid but commercially unrewarding results. If you’re responsible for picking the experiments that you do, and all that you add to scientific knowledge is “That doesn’t work, and that doesn’t work”, it isn’t much. But you may be employed to carry out exactly those experiments. Perhaps that in itself is not a position of prestige…

ikhovablovrmeenkaku said,

[…] 10. Does freely available, peer-reviewed scientific literature – which should in theory remove concerns about conflicts of interest on the part of the writers – confuse journalists more used to hunting out hidden agendas? A common theme appears to be “this researcher has in the past received funding from Drug Company A; therefore we should distrust his research purporting to show the efficacy of Drug Company A’s new product”. Is this unfair or is there reason to doubt it? Goldacre: www.badscience.net/?p=251 “…over the past few years there have been numerous systematic reviews showing that studies funded by the pharmaceutical industry are several times more likely to show favourable results than studies funded by independent sources”. Sinister? […]

[…] you probably won’t be bothered with reading these boring results. Yes, we joyfully embrace publication bias. But in science, knowing something isn’t, or that it is routinely, are really valuable things to […]