Editor’s note: Because of Dr. Gorski’s appearance at CSICon over the weekend, he will be taking this Monday off. Fortunately, Dr. Coyne will more than ably substitute. Enjoy!

NIH is funding free training in the delivery of the Cancer to Health (C2H) intervention package, billed as “the first evidence-based behavioral intervention designed to patients newly diagnosed with cancer that is available for specialty training.” The announcement for the training claims that C2H “yielded robust and enduring gains, including reductions in patients’ emotional distress, improvements in social support, treatment adherence (chemotherapy), health behaviors (diet, smoking), and symptoms and functional status, and reduced risk for cancer recurrence.” Is this really an “empirically supported treatment” and does it reduce risk of cancer recurrence?

Apparently the NIH peer review committee thought there was sufficient evidence to fund this R25 training grant. Let’s look at the level of evidence for this intervention, an exercise that will highlight some of the pseudoscience and heavy-handed professional politics in promoting psychoneuroimmunological (PNI) interventions.

The report of the single study (full article available here) evaluating the efficacy of this intervention for physical health outcomes appeared in the American Cancer Society journal Cancer in 2008. An earlier report (full article available here) claimed to demonstrate the effects of the intervention on the “secondary outcomes” of mood, immune function, health behaviors, and adherence to cancer treatment and care.

The abstract of the 2008 Cancer article described the group intervention as a set of strategies to “reduce stress, improve mood, alter health behaviors, and maintain adherence to cancer treatment and care.” The abstract reported not only a reduced risk of cancer recurrence but proclaimed “psychological interventions as delivered and studied here can improve survival.” If this intervention indeed improved survival, it is curious that the claim was not echoed in the advertisements for this training program.

When the article first came out, I did a simple chi-square calculation on the raw recurrence and death events in a pair of 2×2 cross-tabulations of outcomes for the intervention versus control group. No matter how I played with the data in Figure 2, group differences came nowhere near significance. Here is the online calculator and below are the data in Table 2 so that you can experiment for yourself (click to enlarge):
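For readers who prefer code to an online calculator, the same check takes a few lines. The counts below are reconstructed from the recurrence percentages reported in the trial (25.4% vs. 29.2%), assuming arm sizes of 114 and 113; treat them as illustrative rather than as the paper’s exact table.

```python
# Chi-square test on a 2x2 table of recurrence events, with counts
# reconstructed from the reported percentages (arm sizes of 114 and
# 113 are assumed here, not taken from the paper's table).
from scipy.stats import chi2_contingency

#                recurrence  no recurrence
intervention = [29, 85]   # 29/114 = 25.4%
control      = [33, 80]   # 33/113 = 29.2%

chi2, p, dof, expected = chi2_contingency([intervention, control])
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p lands far above .05
```

Note that scipy applies Yates’ continuity correction by default for 2×2 tables; with or without the correction, the p-value is nowhere near significance.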

My colleagues and I decided to take a close look at the reports on this trial and write a commentary to be submitted to Cancer. We took the position that claims about reducing risk of recurrence and extending the survival of breast cancer patients are medical claims that should be held to the same standards as claims about medications and medical devices improving health outcomes. These standards include consistency between the abstract and the findings reported in the results section of an article; pre-specification of one or two primary outcomes and of the follow-up period; pre-specification of the analytic plan; and presentation of results in a way that allows readers to evaluate the appropriateness of the choice and interpretation of statistical tests. The latter would include transparent presentation of unadjusted primary outcomes in analyses of time-by-treatment interactions and avoidance of substituting secondary and subgroup analyses for primary ones.

To help ensure these standards are met, most biomedical journals have embraced CONSORT as the standard for reporting results of clinical trials and, more recently, of abstracts. Many journals also require publicly accessible preregistration of trials as a condition of publishing their results, i.e., investigators must declare their intended sample sizes, outcomes, and analyses before they enroll the first patient. These standards are enforced less consistently with psychosocial trials, and preregistration was not in place at the time this clinical trial was implemented in the mid-90s. However, by the time these papers were published in 2004 and 2008, it had already been established that not meeting the CONSORT reporting standards involved a high risk of bias and unreliability of results. And investigators should not need the coaxing of CONSORT standards for abstracts to presume that abstracts should accurately reflect the results reported in the rest of the article.

When we submitted the commentary to Cancer, it was initially rejected, with the editor citing a standing policy of not accepting critical commentaries if authors refused to respond. We asked the editor to re-evaluate the policy and reconsider the rejection of our commentary. We argued that the policy was inconsistent with the growing acceptance of the necessity of post publication peer review. Essentially the policy allowed authors to suppress criticism of their work, regardless of the validity of criticism. Furthermore, our commentary presented not only a critique of the article, it called attention to a failure in editorial review that was worthy of note in itself. We therefore requested that we be allowed to expand our commentary substantially beyond the strict word limitations of correspondence about a particular study. After a meeting of the editorial board, the editor graciously accepted our requests.

In the commentary, we pointed out that the trial did not report significance tests for unadjusted outcomes and gave no rationale for the particular follow-up period of 11 years (range, 7 to 13) in which progression or deaths were recorded. Committing to a particular observation period ahead of time prevents investigators from shrinking or extending it post hoc, based on peeking at the data, to get more favorable results. Nonetheless, we could find no significant differences in the proportion of women experiencing recurrence or dying, despite the investigators’ claims to the contrary. Furthermore, the difference in median time to recurrence (six months), or to death, was small, given the length of the observation period.

How were the investigators able to claim significant effects? By relying on dubious multivariate analyses with too high a ratio of covariates to events (recurrences or deaths). I’ll leave most of the technical statistical arguments to the commentary, but basically, the investigators’ approach carried a high risk of generating spurious effects. It is always reassuring when results for simple unadjusted primary outcomes in a randomized trial hold up after adjustment for possible confounds, although the rationale for controlling for any initial differences between groups is unclear, because randomization is itself supposed to take care of them. When results are not obtained in simple unadjusted analyses, but then show up in multivariate analyses, the suspicion is that they are spurious, because results of multivariate analyses often depend on arbitrary decisions about which covariates to include and how to score them, decisions that can be made and revised based on peeking at the data. We should be particularly suspicious when, as in this trial, too many covariates are entered as controls.
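A small simulation makes the peeking concern concrete. This is my own illustrative sketch, not the trial’s analysis; the numbers (60 patients, 8 candidate covariates) are made up. Under a truly null treatment effect, an analyst who fits several candidate adjustments and reports whichever model looks best can only reject the null more often than one who committed to the unadjusted analysis in advance:

```python
# Monte Carlo sketch: with a truly null treatment effect, choosing the
# covariate adjustment that gives the smallest p-value ("model shopping")
# rejects at least as often as a pre-specified unadjusted analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def treatment_p(y, treat, covars=None):
    """Two-sided p-value for the treatment coefficient in an OLS fit
    of y on [intercept, treatment, optional covariates]."""
    n = len(y)
    cols = [np.ones(n), treat]
    if covars is not None:
        cols.append(covars)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = n - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return 2 * stats.t.sf(abs(beta[1] / se), dof)

n, n_covars, sims = 60, 8, 1000
rej_fixed = rej_selected = 0
for _ in range(sims):
    treat = rng.permutation(np.repeat([0.0, 1.0], n // 2))
    y = rng.standard_normal(n)              # outcome unrelated to treatment
    Z = rng.standard_normal((n, n_covars))  # candidate covariates
    ps = [treatment_p(y, treat)]            # pre-specified unadjusted model
    ps += [treatment_p(y, treat, Z[:, [j]]) for j in range(n_covars)]
    rej_fixed += ps[0] < 0.05
    rej_selected += min(ps) < 0.05          # "best" model after peeking

print(f"pre-specified: {rej_fixed / sims:.3f}, "
      f"after model shopping: {rej_selected / sims:.3f}")
```

The pre-specified analysis rejects at roughly the nominal 5% rate; the min-p rule, by construction, can only do worse, and the gap grows with the number of covariate decisions left open.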

We went on to critically examine the earlier study of psychosocial measures, adherence, and immune function.

The abstract of this article reported testing the hypothesis “psychological intervention can reduce emotional distress, improve health behaviors and dose-intensity, and enhance immune responses.” The results presented in the abstract were uniformly positive in terms of effects on anxiety and improved dietary habits, smoking, and adherence, with no negative results mentioned.

When we examined the actual methods section, we found at least nine measures of mood, eight measures of health behavior, four measures of adherence, and at least 15 measures of immune function were assessed. There was no independent way of determining which of these measures represented the primary outcome for each domain. With so many outcomes examined, there was high risk of obtaining apparent effects by chance.
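The multiplicity problem is easy to quantify. Assuming, for simplicity, independent tests at the conventional .05 threshold (the real measures are correlated, which changes the exact figure but not the basic point), the chance of at least one spurious “finding” across roughly 36 outcomes is:

```python
# Family-wise false-positive probability with ~36 outcome measures
# (9 mood + 8 health behavior + 4 adherence + 15 immune) and no true
# effects, assuming independent tests at alpha = .05 for simplicity.
n_outcomes = 9 + 8 + 4 + 15
fwer = 1 - (1 - 0.05) ** n_outcomes
print(f"P(at least one spurious 'finding') = {fwer:.2f}")  # about 0.84
```

In other words, with this many outcomes and no correction, an entirely inert intervention would be expected to produce something to headline in the abstract.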

Turning to the actual results, only one of the nine measures of mood was significant in time-by-treatment interactions. The intervention seemed to have a significant effect on dietary behavior (although it is unclear why the seemingly very different dietary behaviors were not analyzed separately) and smoking, but no effect on exercise. As is often the case with early breast cancer patients, rates of adherence to chemotherapy were too high to allow any differences between intervention and control group to emerge. In terms of immune function, results were not significant for CD3, CD4, or CD8 cell counts, or for six assays of natural killer cell lysis. If we compare this overall pattern of results to what was stated in the abstract, we see a gross confirmatory bias in the suppression of negative results and highlighting of positive ones.

Subsequent papers from this project amplified the confirmatory bias of these two papers by declaring a reduced risk of recurrence and death from breast cancer for intervention participants and gains for all secondary outcomes. These papers also cast doubt on whether the 2004 paper disclosed all of the outcome measures that were assessed. One article stated that for the subgroup of patients with elevated Center for Epidemiologic Studies-Depression (CES-D) scores, the intervention reduced depressive symptoms. This outcome is not even mentioned in earlier reports, but these subgroup analyses seem to imply that a reduction in depressive symptoms did not occur for the full sample. It is a reasonable inference that this null finding was suppressed in earlier reports. CES-D scores would seem to be the preferred primary measure of mood outcome for such studies. The CES-D has validated clinical cut points, and it is commonly believed that depression is the mood variable most strongly related to immune function. Another article referred to the Beck Depression Inventory (BDI), also an excellent candidate for a primary outcome in a study attempting to affect recurrence and survival via links between psychological variables and the immune system.

Our close reading of the results reported in these two articles suggests that the intervention is inert with respect to mood and immune function, and has no effect on progression and survival. The intervention is hardly ready for dissemination into the community. The designation of this intervention in advertisements for the free training as “the first evidence-based behavioral intervention designed to patients newly diagnosed with cancer” is premature and exaggerated. What could be meant by “evidence based”? Claims of “robust and enduring gains” in all categories of outcomes are simply wrong.

My colleagues and I gave our now familiar argument that there was lack of evidence that any psychosocial intervention could reduce risk of recurrence and improve survival. There was also a lack of evidence for possible mechanisms by which such effects could conceivably be achieved.

Cancer published our commentary without a response from the authors, because they continued to refuse to provide one. Our commentary was instead accompanied by a response from Peter Kaufmann, MD. We wondered why the choice came down to Dr. Kaufmann and why he would accept the offer to reply to us. He had not written much about cancer, but he is Deputy Chief of the Clinical Applications and Prevention Branch of the National Heart, Lung, and Blood Institute (NHLBI) and, at the time his commentary was written, he was President of the Society of Behavioral Medicine.

The Cancer to Health (C2H) intervention package is based on the assumption that psychological variables have clinically significant effects on physical health via the immune system. Despite the lack of support for this idea with respect to cancer, the idea remains highly attractive and resistant to rejection because it lends prestige to psychosomatic and behavioral medicine. At the annual meeting of SBM at which Dr. Kaufmann became president, the keynote address was delivered by David Spiegel and basically involved debating in absentia skeptics and critics of the notion that psychosocial intervention could extend survival of cancer patients, including me. I complained to Dr. Kaufmann that if Spiegel wanted to debate me, I should have been invited to respond. Kaufmann indicated that I would get an invitation for keynote in the future to remedy this imbalance, but the occasion never materialized.

Subtitling his commentary “To Light a Candle,” Kaufmann conceded that my colleagues and I had raised valid criticisms about the design and interpretation of the C2H intervention trial. However, he took issue with our recommendation that clinical trials of this kind be suspended until putative mechanisms could be established by which psychological variables could influence survival. Quoting our statement that an adequately powered trial would require “huge investments of time, money, and professional and patient resources,” he nonetheless called for dropping a “preoccupation with mechanisms and secondary aims,” and instead putting the resources to increasing the sample size and quality of an intervention trial.

I remain puzzled by Kaufmann’s argument. In the absence of a specified mechanism by which psychological variables could have such an effect, was Kaufmann nonetheless suggesting that we needed a large trial to overcome the lack of power of the moderate-sized C2H trial? I cannot imagine an NIH administrator making a similar argument for a large-scale study of an herbal product, coffee enemas, or another intervention with a similarly undocumented mechanism of influence.

Barbara Andersen, the principal investigator on both the C2H trial and the grant for training professionals in delivering the intervention, has never responded in print to our criticisms and charges that the trial does not affect progression or survival. However, she has complained to administrators at the institutions of a number of her critics, asking that they put a stop to behavior having negative ramifications for the field of behavioral research in cancer. She also campaigned, unsuccessfully, to have another critique we published retracted.

It is unlikely that NIH showed favoritism in funding the training grant, relying instead on scores obtained in peer review. Reviewers must have been swayed by the consistent confirmatory bias in presentation of the results of C2H. However, there is a bias in NIH supported forums given to claims about psychosocial interventions affecting physical health outcomes. Andersen and those making similar claims regularly get invited to annual NIH sponsored symposia at professional meetings and reiterate the claims again and again. Apparently, there’s no room for critics on such panels.

The two papers presenting the outcomes of C2H have inaccurate abstracts and data-analytic strategies that hide that they are basically null trials. In this respect they are not alone. Elsewhere I have documented that other psychosocial trials [1,2] conducted by PNI investigators would be revealed to be null trials if time-by-treatment interactions were transparently reported for primary outcomes. Here is what to look for:

A positive spin to the abstract, highlighting the best of the results obtained with these strategies.

Negative findings presented as if positive in subsequent publications.

PNI cancer researchers take a Texas sharpshooter approach to identifying positive effects for immunological variables. The apocryphal Texas sharpshooter drove drunk around Texas with his rifle and a can of red paint and shot up the sides of buildings. Afterwards, he would draw a bull’s-eye with some of his hits in the center, creating the impression of an expert marksman who always hit his mark. PNI researchers similarly collect numerous PNI measures, not on the basis of their known association with cancer, but based on their ease of assessment. Measures derived from saliva samples are particularly popular. Investigators then declare whatever measures prove significant as evidence that they have tapped into the PNI of cancer. Further, they claim to replicate existing studies, when existing studies obtained significant effects with different measures. Any positive result obtained with a battery of measures is declared a replication, even when it is not a precise replication.

Compared to cancer, behavioral interventions in HIV+/AIDS have the advantage of well-validated mechanisms by which behavioral interventions might conceivably influence the immune system and, in turn, readily measurable assessments of any clinically significant impact. This area has attracted considerable interest from PNI researchers who, like cancer PNI researchers, praise their own and each other’s success in modifying clinically relevant immunological parameters. But a meta-analysis of 35 randomized controlled trials examining the efficacy of 46 separate stress management interventions for HIV+ adults (N = 3,077) tells a different story:

To our surprise, we did not find evidence that stress reduction interventions improve immune functioning or hormonal mechanisms that could influence immunity. These findings contrast with the PNI perspective that guided our work and most of the interventions included in our review (Antoni, 2003; Robinson et al, 2000). Thus, even though chronic stressors are known to suppress both cellular and humoral markers (see Segerstrom & Miller, 2004) the short-term use of stress-management strategies does not seem to reverse these processes in patients with HIV.

PNI cancer researchers remain a self-congratulatory group with a strong confirmatory bias in their mutual citations of the field’s claimed successes. Judging by citation patterns in the incestuous journal Brain, Behavior, and Immunity, one can readily get the impression that there are never any negative studies in the PNI cancer literature.

The articles reporting results for the C2H trial continue to be highly cited, with little apparent effect of our criticism. With a lack of other positive trials that can be cited, particular importance in the PNI literature has been attached to the claim that C2H extends survival of cancer patients. There is apparently little concern about conveying unrealistic expectations to patients concerning effects of psychosocial intervention on their immune system, and these claims fit with patients’ impressions and motivations for going to peer support groups and group therapy.

Cancer patients sometimes face difficult choices about medical interventions to manage their disease. It is unfortunate if they are provided with the misinformation that all they need to do is get stress management interventions to slow progression and extend their survival. Belief that these interventions are effective can discourage them from committing themselves to more effective, but painful, fatiguing, and disfiguring medical interventions.

19 thoughts on “NIH funds training in behavioral intervention to slow progression of cancer by improving the immune system”

I fully accept the criticisms laid out in this article, but I think one important line of criticism is missing from the critical commentary. The relevant section where I think it should have been placed is this:

“Andersen et al. do not report standard, unadjusted outcomes, such as a Kaplan-Meier estimate of the survival function. Their data reveal that the proportion of women experiencing a cancer recurrence did not differ significantly between the intervention condition (25.4%) and the control condition (29.2%; odds ratio, 0.83; confidence interval [CI], 0.46-1.48; P=.525). Moreover, there was no difference between the proportion of women who died in the intervention group (21.1%) versus the control group (26.5%; odds ratio, 0.74; CI, 0.40-1.36; P=.332); similar results are obtained if only those deaths caused by breast cancer are examined. Thus, the unadjusted analyses suggest that this is a null trial with respect to disease recurrence and survival.”

Just like it is important to separate “statistical significance” (i.e., a low probability of obtaining the data, or more extreme data, given that the null hypothesis is true) from “practical significance” (the difference is large enough to be of clinical importance), it is worth separating “statistically non-significant” (failing a significance test) from “clinically equivalent” (sample effect sizes and the range of plausible population effect sizes are the same). Confidence intervals should play a major role in any interpretation of research results, above and beyond just using them as another way to do a significance test.

If we look at the two confidence intervals here, we not only note that they include the null value of OR = 1 (i.e., the results are not statistically significant), but also that the upper arm of each confidence interval stretches quite high (to OR 1.48 and 1.36, respectively). This means that the range of plausible values for the population parameter includes values where the treatment actually does major harm to patients compared with control, and a fairly large part of the error bar is located in the OR > 1 region.
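These numbers can be reproduced directly from counts reconstructed out of the reported percentages (arm sizes of 114 intervention and 113 control are assumed here); a standard Wald interval on the log odds ratio gives the same range:

```python
# Odds ratio for recurrence and its 95% Wald confidence interval,
# from counts reconstructed out of the reported percentages
# (assumed arm sizes: 114 intervention, 113 control).
import math

a, b = 29, 85   # intervention: recurrence, no recurrence (25.4%)
c, d = 33, 80   # control:      recurrence, no recurrence (29.2%)

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 0.83, CI 0.46-1.48
```

That the upper bound comfortably exceeds 1 is exactly the point: the data are also compatible with the intervention doing harm.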

So I would argue that saying that there is no difference between the two groups is not a strong enough criticism. The treatment itself could very well be quite actively harmful compared with control (not just harmful as a side effect of leading to blaming patients if they relapse).

“My colleagues and I gave our now familiar argument that there was lack of evidence that any psychosocial intervention could reduce risk of recurrence and improve survival. There was also a lack of evidence for possible mechanisms by which such effects could conceivably be achieved.”

It seems to me that a plausible mechanism would be if a psychosocial intervention increased adherence to evidence-based therapies and this increased adherence reduced risk of recurrence and improved survival. Are you saying that this is not a plausible mechanism, or that adherence is already so high that increased adherence would either have no effect or an effect size so small that it would take a huge trial to demonstrate it?

I agree with you about the issue of clinical significance. However, in a highly speculative area of research that is starved for anything that can be claimed to be statistically significant, there is a tendency to settle for appearing to get that. I’ve attempted to compare differences in immune parameters that investigators claim to be significant to what occurs in normal populations. Consistently, both the intervention and control groups in these PNI studies fall well within the normal range of values.

I think it’s quite easy to generate plausible mechanisms by which psychosocial interventions might influence adherence, and to put them to unambiguous tests. There have been some efforts to educate postsurgical cancer patients and their caregivers concerning wound healing and the management of infections. There is some observational data indicating that male cancer patients without partners interrupt radiation treatment when they start to suffer side effects, something that is not observed with patients who have partners. This would seem to be a reasonable area in which to investigate the efficacy of psychosocial intervention, particularly once the reasons for the treatment interruption are established. But unfortunately PNI cancer researchers are focused, one may even say obsessed, with trying to affect physical health outcomes by changing immune and hormonal parameters for which there is no well-worked-out mapping to these physical health outcomes.

Trick question: Someone tell me what they think the statistical test was for the part quoted by Emil Karlsson. Log-rank you say? Probably not, since a few paragraphs later “It is not possible to evaluate the claims of statistical significance made by Andersen et al. with respect to time to events without access to the data.” So we can’t even fit Kaplan-Meiers since we have no data? I admit that the failure to fit such models in the original paper is a show-stopper – I would have firmly declared it not possible to publish without that. I did not find their multivariate models or method of getting to them that strange, but in cases like this you really want to see the data yourself and make it pee in a cup, since you worry if the statistical methods are accurately described. I’m saying I am a bit concerned about the analysis, but could not find a giant offense. The design sucketh, that is certain – they are taking on way too diverse a set of patients and consequently have zillions of covariates to worry about. I also don’t like the control treatment – can’t ya take them fishing or give them some reading materials saying to watch diet, exercise and stay on your treatments. Without that the conclusion might be that doing anything to try and help may in fact help more than doing nothing, when I hoped the question is what that “anything” should be.

Someone tell me what “time by treatment interaction” means. What time? Time to event? If so, that’s usually on the left side of the model. Does it mean what year patient presented instead?

Being hopeless (and untrained at any rate) at statistics, I read this blog to get experts to interpret them as used in studies, for me.

The gist of this seems to be yet another effort to embellish cancer treatment with unproven rituals that, while they may make people “feel better”, have no scientific basis.

It is widespread in our culture to see cancer as an “enemy”, to be “battled” and “defeated”, in a “war”. Death amounts to “losing” the battle. I notice this even in commentary about Lance Armstrong’s fall from grace. Many commenters bemoan the sad situation of someone who “beat cancer” to end up being disgraced–as though it is cosmically unfair. These ideas are widespread and it is not that surprising, given the general encroachment of SCAM into medicine that these ideas would show up among doctors and other medical professionals as well.

Educating the media should remain a high priority for skeptics. Critics must remain vigilant and it is fortunate that you persevered and prevailed with yours. Getting this into the mainstream, though, seems even more difficult.

Thanks for bringing this to our attention. Brian Engler and I have just submitted a paper detailing the fallout at the community level of the R25 grants awarded by NIH’s National Center for Complementary and Alternative Medicine (NCCAM). These grants installed the teaching of alternative medicine in respected medical schools.
It’s a shock to realize that this thinking has affected funding awarded by NCI. I checked: there were two NCI awards for this project, in 2011 and 2012, both for over $317,000. It’s surprising that our National Cancer Institute has diverted some $700,000 from its research agenda to this unproven alternative medicine concept.
Eugenie Mielczarek

It is so gratifying to post my blogs at Science-Based Medicine and get responses. Recall that I’m a refugee from the Psychology Today blog, from which I bolted after they changed the title of one of my blog posts so that it didn’t annoy advertisers from Pharma. Sorry, Emil, I misattributed your smart comment to Marilynmann, who had a smart comment of her own.

I shoulda complimented Coyne and the co-authors for their efforts in publishing critiques. It takes guts.

Eugenie: I think we get to blame Congress for setting up NCCAM and its money. That the money so earmarked then gets spent on CAM is rather expected, though one hopes they would put it into projects that actually do hold promise (as the munchkin says about their descendants, “if any”).

James
Thanks for your interest in our study of NCCAM R25 grants; it’s grinding its way through a review process. Our previous study of NCCAM research grants was published in The Skeptical Inquirer. Unfortunately, print journalism is only interested in heartwarming reports of mind-body interactions, which persuade persons to forgo evidence-based medicine. The media is unable to recognize bad studies.
Eugenie Mielczarek

I don’t understand why it is OK for them to throw in adherence counseling (which I thought was generally already shown to work to some degree) only into the test group along with new or experimental mind-body stuff (which might not work at all). Isn’t that basically cheating? Differing compliance rates are normally something you would consider to be a confounding factor and want to AVOID, and yet in this experiment they are actively encouraging only the test group to comply and counting that as part of their experimental treatment.

If adherence counseling is already known to improve adherence to standard care by 5% (purely guessing), how is the intentional lack of adherence counseling in the control group any better scientifically than giving equal adherence counseling, but canceling the standard care prescriptions of a portion of the control group?

The paper says they measured drug levels to assess adherence rates (and then I suppose control for that mathematically), but why even include that at all? Adherence seems to have absolutely nothing to do with the other unspecified mechanisms they are hypothesizing about stress and immune system stuff. There seems to be no legitimate reason to group them together when you would logically have to try to split them apart again before drawing any conclusions.

“it was initially rejected, with the editor citing a standing policy of not accepting critical commentaries if authors refused to respond.”

Do you think it would be worthwhile for those of us without academic positions who spot problems in papers to try to push against these sorts of policies in the way that you did? Or is this something that is going to have to come from those within the academic system?

“I cannot imagine a NIH administrator making a similar argument for a large scale study of an herbal product or coffee enemas or other intervention with a similar undocumented mechanism of influence.”

Some treatments might be instinctively dismissed, but for psychosocial interventions, even when the proposed mechanism has been debunked, it seems that spun results from RCTs can still be used to pragmatically justify the promotion of treatments as ‘evidence based’.

@Janet: “The gist of this seems to be yet another effort to embellish cancer treatment with unproven rituals that, while they may make people “feel better”, have no scientific basis.”

I think that there’s also a real danger that RCTs can show certain treatments leading to improvements in certain questionnaire scores without necessarily making patients really feel any better. eg: People can just try to be positive and polite because they know that a therapist has tried to help them; or some psychosocial interventions seem to target the way in which patients use language, or think about problems, in a way that would be expected to alter their questionnaire answers regardless of whether it had any impact upon how they ‘really’ felt.

@geo
RE Do you think it would be worthwhile for those of us without academic positions who spot problems in papers to try to push against these sorts of policies in the way that you did? Or is this something that is going to have to come from those within the academic system?

I think you use up a certain amount of social capital in pushing back, and risk alienating editors…unless you succeed. I have succeeded a number of times, but usually because I convinced editors that honoring my request to change policies or to publish my commentary was a win-win outcome, not only for me, but also for the journal or science. But some editors do not care about science, or care about it less than about professional politics.

@ConspicuousCarl
I agree that throwing adherence counseling into an already complex intervention makes it difficult to interpret effects, especially when the adherence counseling increases exposure to medical treatments. Elsewhere I talk about this as introducing co-treatment confounds. But an intervention like C2H is so confused in its goals, and the investigators are so desperate to have something positive to claim, that they will try lots of diverse strategies at once, even if they cannot explain the results.

I am learning how blogging, Twitter, and other alternative media can be effective tools for overcoming bad practices in the conventional journals, including their rejection of any post-publication peer review.

@James Coyne. Good luck with trying to spread the word and improve practices. It is easy for people to slip into an unjustified faith in things that purport to be evidence based or science based, and this can have significant political and moral implications. Until relatively recently I tended to assume that claims made in a paper were supported by the evidence cited, without taking the time to check. Since I’ve started looking in detail at the specifics of a lot of papers, I’ve come to see how often this is not the case.

Thank you for your well-referenced article on this topic. I think it’s very important, and the issues you raise reflect what seems to be a growing conflict, under the surface of (bio)medical research in the US, between political and career pressures on one side and scientific honesty on the other. The pressure to get grants funded that are responsive to NIH’s priorities, to get grants funded that peer reviewers approve of, to publish as much and as quickly as possible, and the publication bias toward positive results are all factors that contribute to these non-scientific interventions continuing to be promoted and researched. There are three areas I’d like to comment on: 1) perfectly legitimate areas of PNI research, 2) growing pains and historical/political/career problems in neurovirology, and 3) affirmation of the areas of great concern that you have outlined.

I attended the Psychoneuroimmunology Research Society (PNIRS) annual meeting this year (which was local to me), because I was invited to give a lecture as part of a short educational course on molecular biology. (The lecture was on the basics of microRNA biology, by the way.) Much of the research I saw being presented was derived from Sapolsky’s work on stress, cortisol, glucocorticoids, and the immune system, along with the “glucocorticoid cascade hypothesis” of depression/mood disorder. Within this hypothesis are perfectly plausible biological mechanisms by which hypothalamic/pituitary/adrenal (HPA) axis dysfunction has physiological consequences for various organ systems, including the immune system. Cortisol, for example, is an immunosuppressive molecule. Sleep disturbance and metabolic malfunction (as a result of stress hormone signaling) are plausible mechanisms for immune dysfunction.

(For what it’s worth, my dissertation research was on the kinetics of glucocorticoid receptor signaling and chaperone proteins in neurons, which are affected by pro-inflammatory paracrine signaling.)

At this PNIRS meeting, researchers from my home institution presented posters on HIV infection, cognitive measurements, and correlations with pro-inflammatory cytokines measured in the cerebrospinal fluid. There is a long history of studying the effects of HIV on the brain, dating to the beginning of the epidemic, because there was, in fact, HIV “encephalitis” characterized in the era before highly active antiretroviral therapy (HAART) became the norm in the US. There was also a range of opportunistic infections that affected the brain. In the early days, a prolific research community was built to characterize the neuropathology of HIV infection and to understand the neurocognitive or “behavioral” outcomes in these patients. This was very useful in understanding the biology of HIV and the clinical course of end-organ disease and AIDS. A significant proportion of this research was funded by the NIMH. The result is a large contingent of academic scientists, now full professors or tenured, with research infrastructure built around this field, in departments mainly focused on brain and behavior, who depend on funding from NIMH and who are attempting to continue studying HIV and the central nervous system in this capacity. The problem, as I see it, is that now that HAART is the norm for HIV patients in the US, and we can suppress viral loads to undetectable levels within months of initiating therapy with a rebound of CD4 cells, these neuropathology and behavioral outcomes for HIV patients are becoming irrelevant. But the focus of this “old guard” remains, and they continue to train and nurture young scientists in this field. Because HAART is rendering these neuropathology outcomes irrelevant, they are switching to the other NCCAM-type studies referenced in your post, the so-called psychosocial interventions and HIV clinical course outcomes. (Thankfully, none of them came from my home institute.)

There is currently a resurgence of interest in HIV in the central nervous system, but in the context of identifying potential cellular reservoirs of latent provirus, with the intention of eventually developing strategies for eradication and cure. However, because of the source of funding (NIMH) and the past research focus of the existing power brokers in the field (and NIH program officers are complicit in this), the clinical outcomes and interventions are all in these “behavioral” domains that are rarely clinically relevant and that we have to stretch very hard even to measure. I do not see an easy way out of this entrenched mess. When I propose certain ideas to one group (say, NIMH or NIDA), they are not interesting because I need to have a mental health outcome; when the same ideas are proposed to another group (say, NIAID), they involve the brain and therefore should go to the NIMH.

All that said, I’d like to shift gears to a final comment on some of the things I saw at the PNIRS meeting. I definitely saw a bias toward reporting non-significant results as positive findings. I do not understand the claimed relationship between PNI and cancer. Certainly there is a link between the central nervous system, stress, and immunology, but the way some interventions are being proposed (mental health interventions for an immunological disease) seems to ignore plausible biological mechanisms. It was also distressing to see some interventions (say, Tai Chi or acupuncture) reported as having positive results on nearly anything, while ignoring the placebo effect (or worse, designing the control group so that it did not get a placebo). Some in the room (not all, I should qualify) seemed truly to believe that fixing meridians, vital life forces, energies, and such really does positively affect the clinical course of almost any disease. I think they are stretching to the immune system as a biological mediator solely for practical purposes: the assays are easy and inexpensive, and blood serum and cells are easily obtainable specimens from patients enrolled in a study. That’s a pragmatic view. I think PNI research is being infiltrated by CAM simply because CAM researchers need to find or study a “mechanism,” and immunology measurements are generally safe, cheap, and easy to do.

I may be overly generous to the founders of the PNI field (I am relatively young and a new investigator), but having attended their meeting, I observed that there seem to be various camps who talk past each other. In the future, I hope to have the bravery to stand up to bad science, at the risk of my career or future publication prospects.

I almost missed this article. I’m glad I didn’t. It’s a bit over my head, but I think it was worth the effort of getting what I could from it, and the comments helped.

I have a layman’s question that is tangential to this article. I often hear about the effects of emotional “stress” on the immune system, in terms of inflammation, illness, autoimmune disease, heart disease, etc., but I don’t have a good sense of how evidence based any of these claims are. It does seem well established that emotional stress can trigger migraines, flares in people with autoimmune disease, and some skin reactions, but beyond that, what’s up?

Conversely, is there evidence that any current stress management techniques lower the occurrence of symptoms in illnesses known to be triggered by stress?

Sorry, I know this is broad, but I just thought I’d throw it out there, maybe for a future post or two.

Very good question, @mousethatroared. I think the role of stress and the immune system is much better charted in cardiovascular disease than in cancer. For instance, a generalized immune response has negative implications for artery disease; even untreated gum disease may have implications for cardiac problems.

The problem with claims about psychological interventions affecting the immune system in ways beneficial to physical health is that the changes in immune functioning that occur are modest, if they occur at all.

etatro makes some excellent points above, though without being as skeptical as I am.

Efforts to affect cardiovascular outcomes by treating depression have been disappointing. Depression in cardiac patients can be treated with the same effectiveness as depression in people without cardiac problems, but there is no apparent effect on the likelihood of another heart attack or death from cardiac disease.