Editorial

If there are no randomised controlled trials, do we always need more research?

Karianne Thune Hammerstrøm & Arild Bjørndal

14 March 2011

We have recently, partly by chance and partly through a blog called Science-based Medicine,[1] come across Cochrane Reviews in which evaluative research other than randomised controlled trials (RCTs), as well as logic and basic science, seem to have been largely ignored. As a result, the review questions cannot be answered in the manner the authors intended, the implication being that more research is needed.

The Cochrane Review on Laetrile treatment for cancer may serve as an example.[2] Laetrile is a derivative of amygdalin, a substance extracted from apricots and other fruits and nuts. It contains cyanide and is used illegally as a treatment for cancer. The review's inclusion criteria limit the search to RCTs, but no RCTs were identified. In the discussion the authors describe various rationales for the possible effects of Laetrile, but none are deemed plausible. The authors state that: "This systematic review found no evidence for Laetrile to be effective as [an] anti-cancer agent. The claim that Laetrile has anti-cancer effects is not supported by data from controlled clinical trials. The potentially relevant studies identified were case series that do not provide good quality evidence as they do not include a comparison group. Therefore, Laetrile cannot at present be recommended as an anti-cancer treatment."

But the review abstract concludes that: "This systematic review has clearly identified the need for randomised or controlled clinical trials assessing the effectiveness of Laetrile or amygdalin for cancer treatment."

One of the case series that the review authors identified, but excluded, looks at 178 patients with cancer who were treated with Laetrile plus a "metabolic therapy". The authors of the case series conclude: "No substantive benefit was observed in terms of cure, improvement, or stabilization of cancer, improvement of symptoms related to cancer, or extension of life span. The hazards of amygdalin therapy were evidenced in several patients by symptoms of cyanide toxicity or by blood cyanide levels approaching the lethal range."[3]

We question whether there is a need for RCTs to assess the effectiveness of Laetrile for cancer treatment. We also doubt whether such a study would be approved by any ethics committee. To suggest a need for RCTs (that will never be organised) to answer this question seems both illogical and unethical. It leaves us at a standstill. The authors have implicitly dismissed the existing evidence and have left room for even more speculation.

Similar problems are encountered in the Cochrane Review of the MMR vaccine.[4] The objective of the effectiveness part of the review is: "To review the existing evidence on the absolute effectiveness of MMR vaccine in children (by the effect of the vaccine on the incidence of clinical cases of measles, mumps and rubella)."

The inclusion criteria are: "Vaccination with any combined MMR vaccine given independently, in any dose, preparation or time schedule compared with do-nothing or placebo."

The primary outcome is: "Clinical cases: measles, mumps or rubella." By using this outcome the authors exclude studies that assess antibody response to the vaccine as a measure of vaccine effectiveness. The question of whether or not antibody response is a good indicator of immunity (and if there is any reason to doubt the practice of measuring antibody response in vaccine studies) is not raised. The authors conclude that: "As MMR vaccine is universally recommended, recent studies are constrained by the lack of a non-exposed control group. This is a methodological difficulty which is likely to be encountered in all comparative studies of established childhood vaccines."

Nevertheless, they go on to state that: "We were disappointed by our inability to identify effectiveness studies with population or clinical outcomes."

And, in the abstract: "We could not identify studies assessing the effectiveness of MMR that fulfilled our inclusion criteria even though the impact of mass immunisation on the elimination of the diseases has been largely demonstrated."

This is less dramatic than the Laetrile example, but still, in our opinion, not satisfactory. As we will (hopefully) never have proper control groups for the MMR vaccine, the review's conclusions lead to a paradox: how can the effect of the MMR vaccine be proven through population or clinical outcomes (i.e. incidence of disease) when there is no non-exposed control group?

We can see the same disregard for non-RCT evidence in Cochrane Reviews of homeopathy. Homeopathy is, in essence, a placebo, with no plausible mechanism of action.[5] Reviews typically have conclusions such as:

• "There is insufficient evidence to recommend the use of homoeopathy as a method of induction [of labour] ... Rigorous evaluations of individualised homeopathic therapies for induction of labour are needed."[6]• "In view of the absence of evidence it is not possible to comment on the use of homeopathy in treating dementia."[7]These are, in our opinion, disturbing views. Is it really not possible to comment on the use of homeopathy in dementia without an RCT?

We acknowledge that there are many examples where findings from RCTs have led to necessary changes in established practice that had been based on logic and less rigorous evidence. Hence, we do fully agree that potentially (moderately) effective interventions should be evaluated, if possible, in experimental settings.

However, we need to discuss and refine our stance on what to think and what to broadcast when it is not possible to do so. When there are clear indications (e.g. from case series) that an intervention is dangerous, we need to let some hypotheses go and move on. Likewise, although empirical data trump theory, we must continue to think. And when something does not make sense at all (albeit within our belief system), we must have the courage to say so.

In our opinion, we need to be clearer about implications for policy, practice and research, when RCTs are lacking and unlikely to be done. On the other hand, it is understandable that authors (who often have a particular interest in a given intervention) sometimes find it difficult to declare something as 'not working' and move on. We think some guidance is needed for editors and authors in cases like this, and we would be interested to see what others have to say on the matter.

Declarations of interest

The authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available upon request) and declare (1) no receipt of payment or support in kind for any aspect of the article; (2) no financial relationships with any entities that have an interest related to the submitted work; (3) that the author/spouse/partner/children have no financial relationships with entities that have an interest in the content of the article; and (4) that there are no other relationships or activities that could be perceived as having influenced, or giving the appearance of potentially influencing, what was written in the submitted work.

Image credit

AJ Photo/Science Photo Library

Comments on this editorial


[1] 23rd March, 2011

I think the authors of this editorial are making a useful comment about randomised controlled trials for interventions. All too often, authors sign off a review by saying something about the need for randomised controlled trials. I am not sure I agree with everything else.

Part of the problem is that reviews should only be carried out in the first place if:

There has been at least one study undertaken to test the question

Or, the intervention is currently in use (preferably widespread use), in the absence of any studies

Or, there is another reason why efficacy of a treatment is of interest (e.g. in the case of MMR, because of a once-credible, if later discredited, potential risk).

Negative results should perhaps be expressed thus:

"...that this treatment should not be used except in experimental studies after satisfying an ethics committee that there is new suggestive empirical or mechanistic evidence of treatment efficacy..."

Empty reviews might say something like:

"...despite [reason(s) as above], we found no empirical evidence to refute or support this treatment..."

[2] 23rd March, 2011

It is good to have this debate. It could be argued that Cochrane Reviews are most effective when they stick to evidence.

With respect to the second quote cited from the review of homeopathy for dementia, we agree that instead of saying: "In view of the absence of evidence it is not possible to comment on the use of homeopathy in treating dementia." it would be better to say: "In view of the absence of evidence, there is no justification for using homeopathy in treating dementia."

However, the question is really to what extent Cochrane Review editors should interfere with the way conclusions are expressed (as long as they are consistent with the results). The whole issue of the form of words adopted in conclusions is much wider than the homeopathy issue. Guidance on this is clearly needed.

The options when it comes to homeopathy (or similar) are:

(a) Dismiss it as absurd and not worthy of research

(b) Insist it be subjected to the same standard of research as conventional therapies.

Option (a) will not make homeopathy go away. If option (b) is chosen, then finding an absence of evidence of efficacy should lead to the comment that there is an absence of evidence of efficacy, not serve as an indirect route to saying it wasn't worth investigating anyway.

It is probably not the best use of time for the Cochrane Collaboration to be reviewing all fringe therapies anyone has ever suggested for dementia, but, rightly or wrongly, homeopathy has become relatively mainstream and has a history of significant funding from the NHS, which is under review in Parliament. Hence a systematic review may have policy implications.

We would certainly think it reasonable to question the priority that should be given to researching homeopathy for dementia. The question about whether homeopathy research should be funded at all is a bigger issue than should be addressed in a Cochrane Review of a single subject.

[3] 24th March, 2011

Hammerstrøm and Bjørndal provide examples of unreasonable recommendations in Cochrane Reviews to do more research, when further research is not really needed. The evidence that further research is not needed comes from beyond the set of studies included in the Cochrane Review. This is an old problem with Cochrane Reviews. In 2004, I estimated that about 90% of Cochrane Reviews concluded with a recommendation to do more research.[1] Most of these recommendations were not supported by an assessment of the likelihood that further research would change the current understanding of an intervention's usefulness, and were not specific; the recommendations did not give enough detail on the design and outcomes to be used in the proposed additional research.

I think the practice of the almost ubiquitous recommendation of more research is a relic of the early days of Cochrane Reviews, when they were of the style 'intervention-for-condition-to-change-specific-outcome'. The balance of outcomes was not systematically addressed and the style of the reviews was in some sense 'defensive'. Nowadays, Cochrane Reviews are more informative, and review authors have more possibilities to formulate their recommendations for further research - positive or negative - depending on the results of the review and wider research on the subject. Review authors are in a better position than most to evaluate the state of the research on the subject they are reviewing, and their constructive recommendations may be of great benefit to researchers.

I agree that Cochrane Review authors need advice from their editors on formulating the implications for research.