This is the second installment of a three- (and now four-) part series of articles on the effects of homeopathy on childhood diarrhea. This installment elaborates on our findings from the second clinical trial, conducted in Nicaragua. (1)

I should first explain the title. In order for homeopathy to operate as a base or operating system for medicine “for the 21st century,” the entire system of measurement and of course all physical laws would have to be changed. In analogous political terms, it would be similar to – but more massive a change than – changing a nation from a democracy to a completely different system such as a theocracy with completely different laws and behavior expectations. So…well, it was the best I could think up at the time.

Last time I recounted how the Jacobs II trial setup was incoherent and unable to produce results that could prove efficacy – unless the differences between treatment and controls were quite large, well beyond borderline significance. Most patients were treated differently from one another, with multiple preparations (which were in reality the same thing: pill filler) given at differing times during the illness, each preparation selected according to symptoms that likely varied by the hour and were filtered through memory, well known to be faulty in medical studies.

In fact, given the lack of homogeneity in the trial diagnoses and treatments, outcomes should not have made sense at all. Now I must admit that the thought did not occur to us at the time we undertook the review, nor during the review. If it had, our job would have been easier and the paper shorter.

First, we saw that children with severe dehydration were hospitalized, placed on IV fluids, and not entered in the trial. Milder cases were continued on their diet and given standard oral rehydration therapy. They were assessed by homeopathic interviews regarding stool quality, general condition, and emotional state, then randomized to treatment and control groups. The major outcome was stated to have been determined beforehand as the mean number of days to fewer than 3 unformed stools on two consecutive days.

We looked at tables and graphs and saw that the treatment-control differences seemed to be significant and consistent at all days measured.

The primary outcome was: treated, a mean of 3.0 days; untreated, 3.8 days (P<.05). And there was a difference at 24, 48, and 72 hours after starting treatment. Four other diarrhea indicators showed P=.048, .036, .054, and .037.

But the results were actually illustrations of the difference between statistical significance and clinical significance. At 72 hrs, the point of maximum difference (P<.05), the difference was between 3 stools per day and 2 per day. The differences on the other days were all less than 1 stool per day.

We looked further and found recording and computation errors in several tables. None alone seemed large enough to account for the statistical findings, but they suggested that enough other errors might lie hidden in the record to do so.

The most significant finding was the result of culturing the stools. All subjects had stools cultured. Children in the homeopathy treatment group had half again as many positive stool cultures for bacterial pathogens as did the controls. This led us to the possibility that all or many of the children might have been given antibiotics, either through adulterated homeopathic preparations, or at some time outside of the researchers’ observations. A few such patients could have accounted for the difference.

The data were presented as mean values, without data points for all patients. In such a presentation, there is the possibility that one or two outliers in either group, or both, could have produced the differing means. Without the raw data, we don’t know.
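To make the outlier point concrete, here is a minimal sketch with entirely invented numbers (the trial’s raw data were never published) showing how just two slow resolvers in one arm can turn two otherwise identical groups into the reported 3.0 vs. 3.8 day difference in means:

```python
# Hypothetical illustration only: all values are invented.
# Shows how a couple of outliers can shift a group mean.

def mean(xs):
    return sum(xs) / len(xs)

# Suppose both groups mostly resolved in about 3 days...
treated = [3, 3, 2, 3, 4, 3, 2, 3, 3, 4]
controls = [3, 3, 2, 3, 4, 3, 2, 3, 3, 4]

# ...but the control group happened to contain two slow resolvers.
controls_with_outliers = controls[:-2] + [8, 7]

print(mean(treated))                 # 3.0
print(mean(controls))                # 3.0
print(mean(controls_with_outliers))  # 3.8
```

A mean alone cannot distinguish this scenario from a uniform shift across all patients, which is why the missing individual data points matter.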

Then, as mentioned last time, the data came from heterogeneous entries, to which were applied statistics that assume the data have some degree of homogeneity – another problem casting doubt on the meaning of the outcome.

Added to all the irregularities and lack of clinical significance were the claims and language of the authors. As authority for the validity of homeopathy, the authors cited the discredited Benveniste study in Nature (1988). That study, if it were valid and authentic, actually disproved the major thesis of homeopathy and showed the selection of dilutions to be arbitrary and the effects unpredictable. (2)

The Jacobs paper authors inflated the significance of their findings with phrasing like: “Acute diarrhea is the leading cause of pediatric morbidity and mortality world-wide. In the developing world there are 1.3 billion episodes of diarrhea and 5 million deaths each year from this illness.” But a public health impact is not suggested by this study. Children with severe diarrhea – the ones likely to die – were not included; they were sent to a hospital. The study says nothing about homeopathy in children with severe diarrhea.

Then they stated, “Acute childhood diarrhea [is] ideal for …homeopathic trial because …no standard allopathic [sic] treatment would have to be withheld and … the public health importance … is great.” But the trial used children with mild diarrhea. It is hard to imagine great public health importance for a method that shortens, by less than a day, a self-limited illness that resolves spontaneously in four days.

We ended up with enough findings to conclude that the study was devised in such a way as to render its results null, to suspect its authors of hyping an idea, and with enough errors to suspect that even more lay hidden in the raw data.

Next time I will explore its sequel, a repeat of the study in… oh, yes, Nepal. Of course. And then on to the pinnacle of the quartet, the meta-analysis.

16 thoughts on “Homeocracy II”

I’ve been seeing claims that Oscillococcinum has undergone double-blind, placebo-controlled trials showing its effectiveness. Considering that some quacks are recommending that people take Oscillococcinum for the swine flu, covering that would be topical (though it would require tracking down which clinical trials they’re talking about).

Where it shows that the duck liver essence managed to reduce the length of influenza by 0.28 days. Umm… that is less than seven hours, and how does anyone know at what part of the day they got over the flu?

Here is the conclusion: “Current evidence does not support a preventative effect of Oscillococcinum-like homeopathic medicines in influenza and influenza-like syndromes.”

There is another thing that I found difficult to understand in this article, and it may be due to lack of imagination on my part. How could it be a double blind controlled study if the homeopathic medications were individually prepared for the children? Who exactly was blinded in this study?

OK. Go on. He/she concocts a specific homeopathic treatment. Then what? I didn’t find an exact description of the process of blinding. I am supposed to assume that it was. I don’t feel comfortable doing that.

HCN knows (!) he is lying. He quotes the “conclusion” of the Cochrane Report on the PREVENTION of influenza, but he purposefully ignores the reference to the TREATMENT of influenza which the Cochrane Report describes as “promising.”

I never expect honesty from this site. Fundamentalism is anti-scientific even when it is dressed in scientific robes.

You folks would be better off warning the public about the real quackery like TAMIFLU…which, according to WHO, does not work against Influenza A H1N1. 95% resistance!

DUllman on 04 May 2009 at 9:42 am wrote “HCN knows (!) he is lying. He quotes the “conclusion” of the Cochrane Report on the PREVENTION of influenza, but he purposefully ignores the reference to the TREATMENT of influenza which the Cochrane Report describes as “promising.” ”

Dullman (MPH!), you overlooked that she noted that, as a treatment, it seemed to shorten the duration of the flu by 0.28 days.

Apparently, the review’s authors were themselves not quite certain how “promising” the data are, since they also wrote: “It is open to debate whether further research is warranted on homeopathic medicines to prevent influenza and influenza-like syndromes. [A calculation shows] 1457 patients per group [are needed to fully test Osc.]. Such a trial would require significant resources, the investment of which is questionable given the equivocal nature of the current data.”

How could it be a double blind controlled study if the homeopathic medications were individually prepared for the children?

Well, they actually put placebo pills in tubes labeled as homeopathic medications, so it was double-blinded.

However, the authors made a statistical mistake that is sadly common in medical studies – they forgot to correct for multiple comparisons.

They had five (5) comparisons they made between the treatment and control groups; three of them showed p-values of slightly less than 0.05 (0.048, 0.036 and 0.037), which would have made them marginally statistically significant – if they had been the only measure used.

As it was, correcting for multiple comparisons results in none of the measures reaching a statistically significant difference.
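The reasoning above can be sketched with the simplest such correction, the Bonferroni adjustment (the commenter does not name a specific method; Bonferroni is used here for illustration). With five comparisons, each p-value must clear alpha/5 rather than alpha:

```python
# Bonferroni sketch: with m comparisons, each must clear alpha / m.
# The p-values are the four reported secondary diarrhea indicators;
# the primary outcome (reported only as P < .05) makes five comparisons.

alpha = 0.05
m = 5                                    # total comparisons made
p_values = [0.048, 0.036, 0.054, 0.037]  # reported secondary p-values

threshold = alpha / m                    # 0.01
significant = [p for p in p_values if p < threshold]
print(significant)  # [] -- none survive the correction
```

Holm or other step-down procedures are less conservative, but with the smallest reported p-value at 0.036 the conclusion is the same: none of these comparisons reach significance once the multiplicity is accounted for.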

So, this was a double-blind, placebo-controlled study of homeopathy that showed homeopathy is no better than placebo.

Oh, and Mr. Ullman is correct that one of the seasonal influenzas this past season was resistant to Tamiflu – it was apparently a random mutation, since none of the other isolates show any resistance. Fortunately, we don’t rely on Tamiflu to control influenza (vaccination is the prime controlling method, and it still works just fine), so it is unlikely to be a problem.

It is an insight into Mr. Ullman’s thinking that he takes the fact that one strain of influenza is resistant to a drug as a reason to dismiss all use of that drug as “quackery”. Tamiflu is still effective against other strains of influenza A, including the recent H1N1 “swine” influenza – unlike Mr. Ullman’s homeopathic nostrums.

Weing: I did not elaborate in the post on the details of methods unless they were deficient. I accepted the blinding. After each interview and exam, the findings were entered into a homeopathic database (RADAR) on a computer, apparently using specific wording. Each historical quality and physical finding apparently triggers a recommendation for a specific homeopathic preparation, all previously loaded into the database. Thus a specific history, stool quantity and quality, and set of physical findings calls forth one or more specific homeopathic preparations. So each subject usually was given a combination of materials, with several subjects being given at least one or two of the same materials. These were apparently prepared by a homeopathic pharmacist and, as with the placebos, delivered in similar-looking blue tubes to the practitioners.

Prometheus: The authors stated their primary outcome was the number of days until there were at most 3 stools per day for 2 consecutive days. A bit complex, but apparently a common outcome measure and the same one they used in their pilot study (Jacobs 1: Jacobs J, Jiminez LM, Gloyd SS et al. Homeopathic treatment of acute childhood diarrhea. Brit Homeop J, 1993;82(2):83-86.) The other four end points listed were correlated, and the authors made no specific claims for them. So, although they did not apply a correction for multiple end points, I did not make a fuss about it. Until the following study, to be reported on next time.
But I agree that one should ask, why present the uncorrected values at all, if not to try to convince the reader of the validity of their conclusion?
I had not read the Jacobs 1 paper at the time of the original evaluation, so later I tried to see the pre-trial IRB application for selection of the primary end point. However, the secretary of the IRB informed me that they no longer had a record of the material. That struck me as odd, but I went no further.
I should emphasize here again that a few outliers, or extraneous administration of antibiotics to a large number of controls and subjects, could have given the reported results. Or perhaps other unknown factors.
Whatever the cause or causes of the statistical difference, one could choose to look at the results another way. If homeopathy were inactive (as it is), then one could just as well, or better, conclude that the trial compared two inactive preparations, and thus that the results are a measure of the systematic error in the system of doing clinical trials on implausible, absurd propositions. Or that the difference is a measure of unknown factors in the practice of biostatistics. Take your choice.
These small but “statistically significant” differences, then, could serve as the baseline range of results for negative studies. This would be consistent with Ioannidis’s argument about why most published research findings are false.