A study published in Alternative Therapies in Health and Medicine is being cited as evidence for the efficacy of healing touch (HT). It enrolled 237 subjects who were scheduled for coronary bypass, randomized them to receive HT, a visitor, or no treatment, and found that HT was associated with a greater decrease in anxiety and shorter hospital stays.

This study is a good example of what I have called “Tooth Fairy Science.” You can study how much money the Tooth Fairy leaves in different situations (first vs. last tooth, age of child, tooth in baggie vs. tooth wrapped in Kleenex, etc.), and your results can be replicable and statistically significant, and you can think you have learned something about the Tooth Fairy; but your results don’t mean what you think they do because you didn’t stop to find out whether the Tooth Fairy was real or whether some more mundane explanation (parents) might account for the phenomenon.

Theoretical underpinnings

According to the study’s introduction:

Healing touch is a biofield- or energy-based therapy that arose out of nursing in the early 1980s…HT aids relaxation and supports the body’s natural healing process, i.e., one’s ability to self-balance and self-heal. This noninvasive technique involves (1) intention (such as the practitioner centering with the deep, gentle, conscious breath) and (2) placement of hands in specific patterns or sequences either on the body or above it. At its core, the theoretical basis of the work is that a human being is a multi-dimensional energy system (including consciousness) that can be affected by another to promote well-being.

They cite a number of references to theorists who support these ideas. They cite Oschman, who wrote a book, Energy Medicine: The Scientific Basis, which I reviewed, showing that despite the book’s title there is no credible scientific basis and the “evidence” he presents cannot be taken seriously.

They cite Candace Pert, who said in the foreword to Oschman’s book that Dr. Oschman “pulled” some energy away from her “stagnant” liver. She said the body is “a liquid crystal under tension capable of vibrating at a number of frequencies, some in the range of visible light,” with “different emotional states, each with a predominant peptide ligand-induced ‘tone’ as an energetic pattern which propagates throughout the bodymind.” Does this even mean anything?

They even cite the PEAR study, suggesting that it is still ongoing (it isn’t) and claiming it shows that “actions in one system can potentially influence actions of another on a quantum energetic level.” (It didn’t.)

This is nothing but imaginative speculation based on a misunderstanding of quantum physics and of what physicists mean by “energy.” It is a truism that electromagnetic phenomena are widespread in the human body, but there is a giant gap between that and the idea that a nurse with intention and hand movements can influence electrical, magnetic, or any other physical processes in the body to promote healing. There is no evidence for the alleged “human biofield.”

Previous Research

They cite several randomized controlled studies of HT over the last few years. One showed “better health-related quality of life” in cancer patients. One, the Post-White study, showed no difference between HT and massage. One small study by Ziembroski et al. that I couldn’t find in PubMed apparently showed no significant difference between HT and standard care for hospice patients. One study showed that HT raised secretory IgA concentrations, lowered stress perceptions, and relieved pain, with greater results from more experienced practitioners; but it only compared HT to no treatment and didn’t use any placebo treatment. A pilot study compared 4 noetic therapies (stress relaxation, imagery, touch therapy, and prayer) and found no difference.

They cite a review of healing touch studies by Wardell and Weymouth. It concluded, “Over 30 studies have been conducted with healing touch as the independent variable. Although no generalizable results were found, a foundation exists for further research to test its benefits.” Wardell noted that “the question has been raised whether the field of energy research readily lends itself to traditional scientific analysis due to coexisting paradoxical findings.” This is a common excuse of true believers who find that science is not cooperative in validating their beliefs.

Study Design

237 patients undergoing first-time elective coronary artery bypass surgery were randomly assigned to one of 3 groups: an HT group, a visitor group, and a standard care group. All received the same standard care from the hospital. The HT group received preoperative HT education and 3 HT interventions. Practitioners established a relationship with their patients, assessed their energy fields, and performed a variety of HT techniques based on their assessment, including techniques that involved light touch and those that involved no touch (practitioners’ hands held above body). Sessions lasted 20 to 90 minutes; each patient had the same practitioner throughout the study. The “visitor” group patients were visited by a nurse on the same schedule. The visits consisted of general conversation or the visitor remaining quietly in the room with the patient. They mentioned that some visits were shortened at the patient’s request.

Results of the Study

The six outcome measures were postoperative length of stay, incidence of postoperative atrial fibrillation, use of anti-emetic medication, amount of narcotic pain medication, functional status, and anxiety. HT had no effect on atrial fibrillation, anti-emetics, narcotics, or functional status. The only significant differences were for anxiety scores and length of stay. The length of stay for the HT group was 6.9 days, for the visitor group 7.7 days, and for the routine care group 7.2 days, suggesting that the simple presence of a visitor made things worse (!?). Curiously, for the subgroup of inpatients, the length of stay was HT 7.4 days, visitor 7.7 days, and routine care 6.8 days, which was non-significant at p=0.26 and suggested that both HT and visitor made things worse.

The mean decreases in anxiety scores were HT 6.3, visitor 5.8, and control 1.8. They said this was significant at the p=0.01 level. But the tables for results broken down by inpatient and outpatient show no significant differences (p=0.32 for outpatients and p=0.10 for inpatients). If it was not significantly different for either subgroup, how could it be significant for the combined group?

These discrepancies are confusing. They suggest that the significant differences found were due to chance rather than to any real effect of HT.
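For what it’s worth, a pooled analysis can reach significance when neither subgroup does simply because each subgroup has less statistical power; that doesn’t validate the result, but it means the subgroup tables and the combined analysis need not agree. A minimal sketch with illustrative numbers (the standard deviation is an assumption, not taken from the study):

```python
# Illustration: the same mean difference can be "significant" in the
# combined sample but not in a smaller subgroup, purely from sample size.
# Numbers are illustrative, NOT the study's actual data.
from scipy.stats import ttest_ind_from_stats

mean_ht, mean_control, sd = 6.3, 1.8, 12.0  # sd is an assumed value

# Full groups (n = 87 each, as in the study)
_, p_combined = ttest_ind_from_stats(mean_ht, sd, 87, mean_control, sd, 87)

# A subgroup half that size, identical means and spread
_, p_subgroup = ttest_ind_from_stats(mean_ht, sd, 43, mean_control, sd, 43)

print(f"combined n=87/group: p = {p_combined:.3f}")  # below 0.05
print(f"subgroup n=43/group: p = {p_subgroup:.3f}")  # above 0.05
```

The same mean difference of 4.5 points crosses the 0.05 threshold only at the larger sample size, so a "significant overall, non-significant in both subgroups" pattern is unremarkable on its own.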

Problems with this Study

Four out of the six outcomes were negative: there was no change in the use of pain medication, anti-emetic medication, incidence of atrial fibrillation, or functional status. The only two outcomes that were significant were hospital stay and anxiety, and these results are problematic and might have other explanations.

It is impossible to interpret what the difference in length of stay means, because they did not record the reasons for delaying discharge. As far as we can tell from the paper, the doctors deciding when to discharge a patient were not blinded as to which study group the patient was in. It’s interesting that the visitor group length of stay was intermediate in the outpatient subgroup, but higher than control for the combined inpatient/outpatient group. They offer no explanation for this. I was puzzled by the bar graph showing these numbers, because the numbers on the graph don’t seem to match the numbers in the text. The numbers were manipulated: they did a logarithm transformation for length of stay “to handle the skewness of the raw data.” I don’t understand that and can’t comment. The range of hospital days is such that the confidence intervals largely overlap. In all, these data are not very robust or convincing and they raise questions.

They interpret the anxiety reduction scores (HT 6.3, visitor 5.8, and control 1.8) as showing a significant efficacy of HT, but it seems more compatible with a placebo response and a slightly better response for the more elaborate placebo.

There were fewer patients (63) in the visitor group than in the HT and control groups (87 each). This was not explained. The comparison of groups appears to show that the control group had significantly higher pre-op anxiety scores than either of the other groups, which would tend to skew the results.

They didn’t use a credible control group. A visitor sitting in the room can’t be compared to a charismatic touchy-feely hand-waving practitioner. Other studies have used mock HT where the hand movements were not accompanied by healing thoughts. These researchers rejected that approach because they didn’t think it would be ethical to offer a sham procedure where the practitioner only “pretended” to help. Hmm… One could argue that they have provided no evidence that HT practitioners are ever doing anything more than pretending to help.

They don’t comment on how practitioners were able to “assess the energy fields” of their patients. Emily Rosa’s landmark study showed that practitioners who claimed to be able to sense those fields couldn’t.

The authors consist of 3 RNs (2 of them listed as healing touch therapists and presumably the ones who provided treatment in the study), a statistician with an MS, and two “directors of research” for whom no degrees are listed. The authors are clearly prejudiced in favor of HT.

They interpret this study as supporting the efficacy of HT. I don’t think it does that. I think the results are entirely compatible with a placebo response. With any made-up intervention presented with strong suggestion, one could expect to find one or two statistically significant differences when multiple endpoints are evaluated. And the magnitude of the improvement here is far from robust. This is the kind of result that tends to diminish in magnitude or vanish when better controls are used. I think the study is Tooth Fairy science, purporting to study the effects of a non-existent phenomenon, but actually only demonstrating a placebo response.

I wonder if better results might be obtained by having a patient advocate stay with the patient and offer reassurance, explanations, massage and other comfort measures – something like the doulas who have been shown to improve childbirth outcomes.

The frightening thing is that during the course of this study, patients increasingly bought into the HT belief system and refused to sign up for the study because they wanted HT and didn’t want to risk being assigned to a control group. And hospital staff bought into the belief system, were treated themselves, and became proponents of offering it to patients for other indications.

The paper ends with a rather incoherent statement one would not expect to find in a scientific medical journal: “At the very heart of this study is the movement toward recognizing that the metaphoric and physical heart are both very real, if we allow them to be.”

26 thoughts on “Healing Touch and Coronary Bypass”

At the very heart of this study is the movement toward recognizing that the metaphoric and physical heart are both very real, if we allow them to be.

My oh my. The writer of this paper apparently thinks we can ‘allow reality.’ Hey baby, reality is reality, it goes on without us allowing it or not! Geesh. What are these wooers saying? Just complete nonsense which sounds thoughtful and positive, but is just idiotic, useless, and boring blathering.

What gets my goat is that this quote is sloppily and vaguely referring to something that is already known and embraced by science-based medicine: humans are emotional beings, and that aspect needs to be taken into account in treating patients. However, we can’t attribute to our emotional relationships aspects which have no evidence of existing.

Cheapness. I once paid for a massage and got something akin to healing touch. Maybe from the therapist’s point of view this was equivalent to or better than massage because she didn’t have to exert herself? And how are we to know they’re not faking it? Maybe the healing thoughts emanating from the therapist are “I get off this stupid job in about an hour and go home”.

Healing Touch strikes me as predatory, picking off vulnerable targets. Also, one size of this nonsense fits all — not very patient-centered. Why offer only HT, why not a rain dancer in full regalia? Or a pole dancer in zero regalia. Maybe roasting coffee or baking bread as aroma therapy. Why no real choices from the depths of the patient’s imagination, I thought this was alternative medicine?

Without patient volition, Healing Touch shows up unexpected in one’s existential inbox as spam. The very first time you received spam, maybe you thought for a moment, hey, this sounds pretty good; but as your perceptions about spam changed, in short order you just wanted it to go away.

Using a log transformation to make the data better fit a bell curve is an antiquated technique. Parametric statistics (e.g., the t-test) assume the data follow a normal distribution. Their advantage is that they require very little calculating power. There is no excuse for massaging the data when computers automate nonparametric statistics, which do not assume a Gaussian distribution. Their choice of statistical techniques is highly dubious.
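The commenter’s point can be made concrete: on skewed data like hospital length of stay, a rank-based test such as Kruskal-Wallis needs no transformation at all. A minimal sketch on simulated skewed data (all values are made up for illustration; nothing here comes from the study):

```python
# Compare the old-style approach (log-transform, then parametric ANOVA)
# with a nonparametric Kruskal-Wallis test on the raw, skewed data.
# All data are simulated; nothing is taken from the study.
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(0)
# Three groups of skewed "length of stay" values (lognormal), with the
# study's group sizes (87 / 63 / 87)
ht      = rng.lognormal(mean=1.90, sigma=0.4, size=87)
visitor = rng.lognormal(mean=2.00, sigma=0.4, size=63)
control = rng.lognormal(mean=1.95, sigma=0.4, size=87)

# Log-transform, then parametric one-way ANOVA
_, p_log_anova = f_oneway(np.log(ht), np.log(visitor), np.log(control))

# Nonparametric alternative: Kruskal-Wallis on the raw data, no transform
_, p_kruskal = kruskal(ht, visitor, control)

print(f"ANOVA on logs:  p = {p_log_anova:.3f}")
print(f"Kruskal-Wallis: p = {p_kruskal:.3f}")
```

Both tests ask whether the three groups differ, but the rank-based version makes no distributional assumption, so there is no need to massage the raw numbers first.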

If health care providers associated with the hospital came into my room and waved their arms around saying they were helping me by modifying my bioenergy field, that would incentivize me to get out of there ASAP whether I was fully healed or not.

I wonder what they think would happen if they stood over a patient, waved their hands, and thought with negative intent? If HT is as powerful as they claim, I’d hate to have someone perform it on me if they were having a bad day.

…that would incentivize me to get out of there ASAP whether I was fully healed or not.

Me, too.

Their choice of statistical techniques is highly dubious.

Nothing to say of their interpretation of those highly dubious statistics, even if they were valid. Look at yesterday’s post by David G. and consider how prior probability figures in whether or not we are justified in concluding that ‘statistical significance’ implies a true hypothesis. From this paper (Table 2) we see that a P value of 0.01 for data from a study of a hypothesis whose prior probability was 1% (much higher than a reasonable prior probability of a hypothesis based on psychokinesis, by the way) will raise the posterior probability to (only) 13%. That is a far cry from the 99% that most people, even in academic medicine, imagine to be the case.
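The arithmetic behind that kind of posterior can be sketched with the standard positive-predictive-value formula, PPV = power·prior / (power·prior + alpha·(1 − prior)). The exact result depends on the assumed power of the study; the 15% power used below is an assumption chosen only to reproduce the commenter’s ~13% figure, not a value from the cited table:

```python
# Posterior probability that a "significant" result reflects a true
# hypothesis, given a prior probability and the test's error rates.
# power=0.15 is an assumed value chosen to reproduce the ~13% figure
# quoted above; the formula itself is the standard PPV calculation.
def posterior(prior, alpha, power):
    true_pos = power * prior           # truly effective AND detected
    false_pos = alpha * (1 - prior)    # ineffective but "significant"
    return true_pos / (true_pos + false_pos)

p = posterior(prior=0.01, alpha=0.01, power=0.15)
print(f"posterior = {p:.0%}")  # roughly 13%, nowhere near 99%
```

The point survives any reasonable choice of power: with a 1% prior, a single p=0.01 result cannot push the posterior anywhere near certainty, because false positives from the 99% of false hypotheses swamp the true ones.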

When you add all the other design problems with this HT study (no blinding, probably no allocation concealment, no credible control group as Harriet pointed out, etc.), it is reasonable to put it in Ioannidis’s category of very high ‘bias’, which makes the result of this study entirely, well, useless–although it could be useful as a disconfirming study, but that would require greater interpretive powers than, apparently, exist in the field of ‘academic CAM.’

Given all that, by the way, here’s another topic for a blog: how the National Library of Medicine chooses ‘CAM’ journals for indexing.

“These researchers rejected that approach because they didn’t think it would be ethical to offer a sham procedure where the practitioner only “pretended” to help.”

They apparently don’t understand how blinded or double blinded RCTs work, or perhaps they consider them all unethical.

As long as it is disclosed to the study subjects that they are being randomized into a group that may receive either real or sham/mock treatment, there is no ethical problem. It is not exactly the best ethics in the world to intentionally exclude a control (mock/sham treatment) that would increase the quality of the study and its results.

If the practitioners are so convinced of the efficacy of their therapy that they feel it is unethical to offer sham treatment, why do they think it is any more ethical to withhold supposedly effective treatment at all?

Thanks for this critique. Yet another biased, error-ridden study which doesn’t prove what the sCAMmers think it does.

I have a particular interest in HT, as it was an attempt to get it officially sanctioned in my little rural hospital that turned me from a closet skeptic, to an activist. Yes, we are still (officially) a woo woo free hospital.

I pointed out at the time that all they were really claiming to do was make patients feel more relaxed and less anxious, and we could accomplish the same thing by allowing pets to visit.

Of course, if they had used sham/mock HT, I think we can all guess how that would have turned out.

The mock/sham HT group would be no different from the “real” HT group, and the researchers would conclude that the mock/sham HT was also effective, possibly because the practitioners could not help willing good intentions to the subjects or stop their energy fields from positively interacting with the subjects’ energy fields due to their inherently positive intentions.

We’re missing the real problem with sham healing touch: how do you prevent the practitioner from cheating by having helpful thoughts after all? You wouldn’t be able to distinguish your treatment from your control.

I don’t have a citation or anything for you, but you may get what you want. My institution has a med student volunteer organization that brings dogs (who have undergone some sort of training/obedience course especially for the program) into the hospital.
I understand one of the med students is attempting to find funding to study whether it has any actual effect beyond a social one. The med students running the group don’t seem to have any illusions about this, though; they’re just happy the patients are smiling, and they don’t pretend to have treated anyone.

Does anyone know how to get hold of completed NCCAM studies that haven’t been published in journals? It is my understanding (I did some research on this a few years ago) that all studies done under these grants have to be filed with NCCAM but don’t have to be published. Still, these should be public records (created with public money, no?) so should be somewhere we can see them.

Specifically, Sharon McDonough-Means and Iris Bell conducted a study on therapeutic touch for stressed neonates. This, I think, if it was conducted correctly, is a brilliant strategy. Babies don’t know or care if you are non-touching them and can’t be influenced by the process. Anyway, the study, ClinicalTrials.gov identifier: NCT00034008, is completed. I can’t find where on the NCCAM site these filed reports are located. But the information on the study protocols is here:

BTW – There is “Healing Touch” and “Therapeutic Touch” and they are two different hostile camps. Dolores Krieger is the RN who theorized and developed the Therapeutic Touch methodology while another RN, Janet Mentgen, developed a parallel method called Healing Touch. They don’t like each other. There’s also an offshoot of this woo called Quantum Touch.

I can’t access the study so these are just questions. In addition to the prior probability issue, there is the multiple comparisons issue. These people ran a lot of t-tests; did they correct for multiple comparisons? The rather random distribution of significant results is typical of what one finds when there is no real relationship and no controls for multiple comparison have been employed.

Isn’t this a typical strategy in CAM “research”? Do just enough scientifically to show it works, and that’s all anyone remembers. Almost everyone here knows how to rip this study into pieces (I haven’t taken a statistics class since many of you were born, but I retain enough knowledge to understand how weak the analysis is), but we’re a self-selected group that knows how to do this. Or, at least in my case, I know where to go to find out how to rip apart this study.

Once again, if there’s no clear scientific basis for a CAM claim (like the existence of the Tooth Fairy), all the other “research” is just plain irrelevant. Don’t show me that something happens until you show me how.

A variety of statistical sins were committed. First of all, 6 outcome variables were evaluated, and in the primary analysis only 1 (anxiety score) was statistically significant, at P=.04. This was significant ONLY because they failed to adjust the P value for multiple endpoints. If they had made the adjustment, it would not have been significant.

The statistical significance of length of stay was found only when they did multiple pairwise comparisons between the 3 groups, which should require an additional adjustment of the threshold P-value.
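The adjustment the commenter has in mind can be shown with a Bonferroni correction, the simplest (and most conservative) option: with 6 endpoints, the per-test threshold drops from 0.05 to about 0.0083, and the reported P=.04 for anxiety no longer qualifies. A minimal sketch:

```python
# Bonferroni correction for the study's 6 outcome measures.
# With 6 tests, each p-value must clear alpha/6 to keep the
# family-wise error rate at 0.05.
alpha, n_tests = 0.05, 6
threshold = alpha / n_tests
p_anxiety = 0.04  # the study's reported p-value for anxiety

print(f"adjusted threshold = {threshold:.4f}")  # 0.0083
print(f"anxiety p = {p_anxiety}, significant after correction? "
      f"{p_anxiety < threshold}")
```

Bonferroni is deliberately strict; even a less conservative procedure (e.g., Holm’s step-down method) would still require the smallest p-value to clear 0.05/6, so the anxiety result would fail either way.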