
Bristol centre: the facts

Last September, banner headlines proclaimed that patients following alternative cancer regimes were most likely to die. The stories came in the wake of a Lancet study published in the 8 September 1990 edition, entitled "Survival of patients with breast cancer attending Bristol Cancer Help Centre".

The startling results were splashed over every newspaper in the land. According to the study, women who attended Bristol were twice as likely to relapse and die as those receiving conventional treatment (ie, chemotherapy). The Guardian's front page announced that women attending Bristol had a "doubled risk" of cancer; the Times saw the study as a major blow to holistic centres and alternative medicine; and many used the study for renewed attacks on the Centre's low-fat vegan diet. Images of voodoo medicine were conjured up. Some of the Bristol Centre's more arcane treatments like coffee enemas were highlighted. Meditation, the Times asserted, was now seen as "irresponsible".

The study results were all the more disappointing because Bristol itself had commissioned the study, to aid its acceptance by the medical establishment. Bristol's approach, a combination of diet, exercise, therapy and meditation, had reached a certain wary acceptance among some NHS hospitals. Hammersmith Hospital in west London, for one, had asked Bristol therapists to demonstrate how counselling, relaxation and massage techniques might help patients cope with the strain of powerful chemotherapy or radiation treatment. All they needed was evidence that the benefits they had seen in individual patients actually had some basis in scientific fact.

Not surprisingly, the Lancet study had just the opposite effect. Bristol suffered through two months of a devastating pasting by the print media and television, which resulted in a sharp drop-off in patients and funding from charity drives. Then in November, you may have noticed (if your eyesight is particularly good) five-paragraph stories buried in the centre of some of the quality papers. A number of epidemiologists from the London School of Hygiene and Tropical Medicine criticized the study as flawed. Their criticism, along with letters from many others to the Lancet, finally resulted in a retraction by the Imperial Cancer Research Fund and, in effect, by the authors of the study, including Claire Chilvers of Queen's Medical Centre in Nottingham, Felicity Bagenal of the Institute of Cancer Research, and T. J. McElwain of the Institute of Cancer Research and the Royal Marsden Hospital in Sutton. McElwain, who has since died, went on television to say that he regretted any damage done to the Bristol Centre, which is in financial difficulty as a result of the bad publicity.

While newspapers were interested in the original story, not nearly so many were interested in the details of how the study went wrong. The following is our attempt to set the record straight, both as to what the study actually found and what the Bristol programme attempts to achieve.

The Bristol Cancer Help Centre was founded in 1979 to offer alternative treatments for patients with cancer. The basic premise of the centre is that a patient can and should take an active, positive role in his healing process. Besides the attitudinal approach, the staple of the centre is its diet, which used to be a stringent vegan diet of raw and lightly steamed vegetables and vegetable proteins, but is now individually tailored. There are also counselling, meditation and visualization workshops, and alternative therapies if desired. (See box, p 3, for full details of the programme.)

The Lancet study, begun in 1986, followed 334 women with breast cancer attending the centre for the first time between June 1986 and October 1987. Their outcomes were set against 461 control women with breast cancer attending either two specialist cancer hospitals or a district general hospital. In both the cases and the controls, the diagnoses were obtained from case notes.

According to the study, for those patients whose cancer had not spread at entry to the centre, survival was significantly poorer than in the controls (nearly three times the relapse rate). As for those who had already relapsed when they first went to the centre, the BCHC women appeared nearly twice as likely to die as their counterparts.

Although the study mentioned the possibility of bias on a number of counts (subtleties which did not make it into the newspaper stories), the inevitable conclusion was the following:

"While it is possible that BCHC attenders may, in some subtle way, have worse disease than our control series, the possibility that some aspect of the BCHC regimen is responsible for their decreased survival must be faced. For example, does radical adherence to a stringent diet shorten life in patients whose survival is already threatened by cancer? Our study certainly shows that patients choosing to attend the BCHC do not gain any substantial survival benefit. Whether quality of life is enhanced is yet to be answered. Other alternative practitioners should have the courage to submit their work to this type of stringent assessment."

Here's what the study didn't mention, according to the numerous people who wrote in to complain about the conclusions drawn from it.

Like was not compared with like. As any scientist or doctor will tell you, the most acceptable scientific study is randomized, double-blind and controlled. That means that subjects are randomly allocated as either the case subjects (as with the Bristol women) or the controls (those who don't receive the treatment under scrutiny). Randomization is important because it ensures that the case study patients are not different in some fundamental way from the controls. It attempts to eliminate bias on the part of the study doctors because pure chance dictates who will be a study subject and who a control.

In the case of the Bristol study, randomization wasn't employed. According to Dr Timothy Sheard, of the Bristol centre, it simply couldn't be done. It would have meant turning away people with cancer who believed that the Bristol programme worked and telling them that they could only employ conventional treatment. "Even if we had been willing to be that brutal, they probably would have sought the same type of regime elsewhere," says Sheard, making true randomization impossible.

The women in the Bristol study had worse cancer than their counterparts at the start of the study. Professor Peter Smith of the London School of Hygiene and Tropical Medicine, one of three epidemiologists from the London School to write in to the Lancet, says it was clear, after analyzing the study results, that the case women were far sicker than the controls.

Although there were only minor differences between the two groups when they were first diagnosed, Smith says there were major differences by the time they were enrolled in the study. For one thing, even before going to the BCHC, 42 per cent of the women already had cancer which had spread to other parts of the body. In the controls, only 29 per cent relapsed (ie, got cancer again) at any point during the study.

Smith also points out that 7 of the case women had a second confirmed primary breast cancer, a condition not shared by any of the controls.

"Thus, the previous disease history of cases when they entered the Bristol centre was considerably worse than that of the controls at the same time after primary diagnosis," wrote Smith in the Lancet (10 November). "This could be explained by a greater tendency for women who had relapsed to seek complementary therapy."

No one ever checked to make sure that the study group complied with the programme and the controls did not. In most studies of a new drug, the case subjects take the drug and the controls don't; doctors in the study ensure that the subjects all comply. But in the Bristol study, no one checked to ensure that the subjects actually followed the treatment. "We know from our experience," says Sheard, "that a high proportion, more than half, don't come back to the centre."

Furthermore, there were no checks to ensure that the control group didn't employ any complementary therapies, including counselling, acupuncture, visualization or massage. "However, more than half the controls were patients of the Royal Marsden Hospital, which offers complementary therapies similar to those used at BCHC," writes Sheard in the Lancet.

No one measured attitudes or lifestyle factors. A fundamental principle of this kind of therapy is the active involvement and motivation of the patient. The centre's belief is that a patient who is pessimistic and passive about his illness is less likely to get better than one who is positive and actively involved in fighting the disease. Furthermore, the researchers didn't examine such psychological and social factors as mood, life events and marital status, all factors that have been demonstrated to affect survival rates.

The study made no distinction according to the extent of secondary cancer (that is, cancer anywhere other than in the breast or armpit). It also didn't check whether women had had local recurrences of cancer in the breast or armpit. But nearly half had had a local recurrence by the time they'd arrived at Bristol (four had had two separate recurrences and one had had three), and 19 per cent had incurable conditions, despite all being classified as free of cancer when the study was launched! Of these supposedly disease-free women, three had weeping wounds in the chest, two had incurable disease in the lymph nodes of the neck, and one had bone "secondaries". "It's no surprise the women went on to die," says Sheard. "Many of them were pretty sick indeed."

The study "played down" two of the three methodological approaches which showed very little difference between the two groups.

It's very likely that the case group had bigger tumours than recorded. Even though the Bristol women were significantly younger than the controls (85 per cent under 55, compared to 73 per cent of controls), they had a higher rate of mastectomy (43 per cent versus 36 per cent). Breast cancer before age 55 usually has a worse prognosis than after. In Sheard's view, this strongly suggests "down-staging"; that is, tumours were larger than assumed, particularly as tumour size wasn't recorded.

By the time all these factors were taken into account and the numbers adjusted, the differences between the case and control groups all but vanished. As Drs Nicolas James (Ludwig Institute for Cancer Research) and Alison Reed (Institute of Psychiatry) noted in the Lancet, the magnitude of the disparity between survival and relapse rates was "intrinsically implausible". Remember, these were all women who had undergone conventional treatments (surgery or chemotherapy) before arriving at Bristol and were only followed up for a little over a year. "In cancer treatment, it's the equivalent of a man running a two-minute mile and people believing it," says Sheard.

One inherent flaw in concluding that there is no survival benefit in attending Bristol is that the study wasn't long enough; it needed to run for at least 5 to 10 years. Many of the study's critics point to a randomized prospective study of women with breast cancer that had spread who had undergone "psychosocial intervention". That study showed a near doubling of survival rates, a difference that only began to emerge 18 months into the study (Spiegel et al, Lancet 1989, pp 888-91).

The authors of the study agree with their critics. Chilvers, Easton, Bagenal, Harris and McElwain all wrote in to the Lancet saying that "We broadly agree with much of the comment generated by our report. . . In our view it is much more likely that the differences could be explained by increased severity of disease in BCHC attenders."

Perhaps even more significantly, Walter Bodmer, Director of Research of the Imperial Cancer Research Fund, published in the same issue of the Lancet what amounted to a retraction of the study claims:

"It is clearly very difficult without randomization to obtain unbiased evidence of the effects of the Bristol Cancer Help Centre therapies. Our own evaluation is that the study's results can be explained by the fact that women going to Bristol had more severe disease than control women. In particular, they had a much higher rate of local recurrence."

Even though the study has been virtually retracted by its authors, the damage to complementary cancer therapies may be long lasting. "My impression is that the medical profession probably thought that going to an alternative centre is fine if it makes patients 'feel' better, but it's unlikely to actually make them better. But here was a study that pointed to the fact that alternative therapies were actually doing harm," says Professor Smith. This is despite the fact that, as he says, Bristol centre was intended as a complementary programme, not an alternative one. "If it were offered as a real alternative, then one would be much more worried that it might be making people sicker."

Smith's view is that the study's sensational reception reflects the general bias in medicine against this kind of body-mind approach. "If the results had shown that the Bristol programme did good, I doubt that the medical profession would have pounced on it with the same force."

What is to be learned from the Bristol experience? For one thing, august journals like the Lancet which pride themselves on being "peer reviewed" ought to check with true peers (ie, epidemiologists with no ideological axe to grind) before publishing. There is also a case to be made for developing study designs suited to complementary therapies which resist the drugs-and-surgery study model. Finally, publications that make these sorts of errors bear a certain culpability. Besides the actual damage done to Bristol's pocketbook, the Lancet and the study's authors also have to contend with the knowledge that the study might have hastened the deaths of many patients attending the centre at the time. Surgeon Bernie Siegel, author of Love, Medicine and Miracles, is not the first to point out that a sense of hopelessness can affect a patient's outcome.