Imagine my surprise to open the Journal of the American Psychoanalytic Association this spring and to find an article filled with numbers and comparatively little in the way of text. Had I gotten the wrong journal by mistake? No, I think APsA is embracing research. This particular article by our friends in Germany, Gunther Klug, Johannes Zimmermann, and Dorothea Huber (2016), is the third report on a research study comparing patients who were randomly assigned (based on a diagnosis of moderate to severe depression without psychosis) to one of three treatments with therapists experienced in the approach – 35 in psychoanalysis (PA: defined as meeting two to three times a week and using the couch), 31 in psychodynamic psychotherapy (PD), and 34 in cognitive behavioral therapy (CBT). While I don’t intend to mirror the JAPA article by including lots of numbers, I actually found it helpful to know that the length of treatment varied between the groups: PA lasted on average 39 months (range 3-91) with an average of 234 sessions (range 17-370), PD averaged 34 months (range 3-108) with 88 sessions (range 12-313), and CBT lasted on average 26 months (range 2-78) with 45 sessions (range 7-100).

I think psychoanalysts are “suddenly” interested in outcome research because we feel pressed to demonstrate that what we do is effective. We believe – and I don’t know that it’s true – that if we have evidence that our treatments work, there will be third-party payer support for those treatments. So papers like Shedler’s “The Efficacy of Psychodynamic Psychotherapy” (2010) have been welcomed with open arms. We want to be able to demonstrate, objectively, that what we do works. What is less clear is that we want research to guide the way that we do treatment. We have always relied on clinical lore, theory and intuition to guide our work – along with our native and learned communication skills. Clinical lore and theory are built on our collective wisdom, hard won, based on closely attending to the process of treatment, hypothesizing (and asking) what the active treatment elements have been, and trying to pass along that wisdom while simultaneously expanding it. To have a researcher, based on numbers, tell us how to engage in something as intimate, human and non-mathematical as a clinical interaction seems anathema to some of us.

Well, this paper (and the other two papers in this series) straddles the fence – it both helps us make the case that what we do is valuable and takes a peek under the hood to see what might be causing the good outcomes that we have. All three treatments resulted in a dramatic reduction in symptoms during the first six months, with steady but less dramatic reductions continuing throughout the duration of treatment. Partly, I think, the rate of change is less dramatic after the first six months because, for instance, on the Beck Depression Inventory (BDI), symptoms have on average been cut in half in that time, and there simply aren’t enough symptoms left to continue abating at the initial rate. In any case, it is intriguing that, post-termination, the symptoms of the patients treated with PA continued to decrease, something not seen in the PD or CBT groups. The differences were greatest with the CBT group – post-termination, PA patients had, on average, a 6-point lower endorsement of BDI symptoms.

OK, good. Not only are we as good as the competition, when we look down the road, we are doing a little better (the PA results did not differ significantly from the PD results, which did not differ from the CBT results – they fell in that order – but only the PA results differed significantly from the CBT results). But this study wanted more than just to demonstrate the differences in response to the different treatments – the authors wanted to demonstrate why those differences occurred – a loftier goal. We might opine that the post-treatment results occur because of the internalization of the analytic process as a result of identification with the analytic functioning of the analyst – and this was indeed the experimenters’ hypothesis (in fact they cite Freud (1937), Horney (1942) and Hoffer (1950) to support the idea) – but how would we demonstrate that to have been the case? They also point out that the patient could have learned to self-soothe, or could have internalized positive experiences with the analyst, rather than the analyzing function itself. And, frankly, my guess is that it is a bit of all of these and more – in different proportions for different people – which makes it hard to parse out one particular element in an experimental paradigm.

The authors tested their hypothesis indirectly – they didn’t measure internalization of the analyzing function, but rather the level of alliance that the patient felt with the treater (and the treater with the patient), as a kind of stand-in for internalization, and then measured whether alliance had a mediating effect. That is, in those patients with greater alliance, was there also greater post-treatment improvement? They also included a measure of self-love vs. self-hate. The results were tantalizing but not significant – somewhat surprising, since this result had been reported in three prior studies, but perhaps to be expected with a sample this small.
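For readers curious about the mechanics, the logic of a mediation test of this kind can be sketched in a few lines of code. The data and path coefficients below are invented for illustration – they are not the study’s data – with X standing for treatment group, M for an alliance rating, and Y for improvement; the question is how much of the treatment effect on Y is carried through M.

```python
import random

random.seed(1)
n = 300

# Invented illustrative data, NOT the study's: X codes treatment
# (1 = psychoanalysis, 0 = comparison), M is an alliance rating,
# Y is post-treatment improvement. The path strengths (0.8, 0.5, 0.2)
# are assumptions chosen purely for the demonstration.
X = [random.randint(0, 1) for _ in range(n)]
M = [0.8 * x + random.gauss(0, 1) for x in X]
Y = [0.5 * m + 0.2 * x + random.gauss(0, 1) for x, m in zip(X, M)]

def s(u, v):
    """Sum of centered cross-products (an unscaled covariance)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))

# a-path: alliance regressed on treatment; c: total effect of treatment
a = s(X, M) / s(X, X)
c_total = s(X, Y) / s(X, X)

# Regress Y on X and M together (solving the 2x2 normal equations)
# to obtain the direct effect c' and the b-path.
det = s(X, X) * s(M, M) - s(X, M) ** 2
c_direct = (s(X, Y) * s(M, M) - s(M, Y) * s(X, M)) / det
b = (s(M, Y) * s(X, X) - s(X, Y) * s(X, M)) / det

indirect = a * b  # portion of the treatment effect carried via alliance
```

In this classic decomposition the total effect splits exactly into direct plus indirect (c = c′ + a·b); the mediation question is whether the indirect piece is reliably different from zero – which is exactly where a small sample bites.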

Why might it be that these results are not significant? Our interventions are manifold – the interactions with our patients include much useful and not so useful stuff. We work to craft them, to help this particular patient with this particular problem and this particular mindset learn to be more open to themselves or to think differently – perhaps more broadly – about this particular issue. And we come at that from various angles over the course of a treatment. When we ask a patient or a therapist to give a global assessment of the level of alliance, that includes some sense of whether they feel that they have been on the same page as the interlocutor, but it doesn’t necessarily mean that they have “taken in” the therapist’s way of approaching problems and are ready to apply it on their own – though it might. On the other hand, I’m not sure that we generally ask our patients whether they have done this – though we can sometimes see them doing it.

How is it that we might have been able to divine something clinically – from samples smaller than 25 patients – that doesn’t show up in at least this test? Many times I think it is the experience of our own analyses that informs us – perhaps more than we know – about what will be useful to our patients. Despite that, at least in my institute, few of us talk directly about them in unguarded ways. I think that we may attribute something to our patients that is coming at least in part from our own experience, and we need to bear in mind that our own analyses are of a particular kind – they are training analyses – in which we are consciously intending to internalize the analyzing function. We want to emulate our analysts. This may privilege a component of the treatment process in our own minds – one that is likely present to some degree in all analyses. For the impact of internalization – or of the related strands (certainly a positive experience with the therapist would also have supported the mediation hypothesis in this design) – to reach statistical significance with a sample of this size, and with so indirect a measure, the effect would have to be quite large. The salience may not be as high in a group that is in treatment primarily to obtain symptom relief, not to learn to do analysis, even though internalization of the analytic function may be occurring.
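To make the sample-size point concrete, here is a back-of-the-envelope calculation of my own (not the authors’): using the Fisher z approximation, the smallest correlation reliably detectable at the conventional .05 level (two-tailed) with 80% power shrinks only slowly as the group grows.

```python
import math

def min_detectable_r(n):
    """Smallest correlation detectable at two-tailed alpha = .05 with
    80% power, via the Fisher z approximation. The constants 1.96 and
    0.8416 are the standard normal critical values for those settings.
    """
    z = (1.96 + 0.8416) / math.sqrt(n - 3)
    return math.tanh(z)

# With roughly 25-35 patients per group, only quite large effects
# (correlations around .5) could have reached significance.
for n in (25, 35, 100):
    print(n, round(min_detectable_r(n), 2))
```

This is only an approximation, but it makes the power problem tangible: effects that clinicians might detect impressionistically across many hours of contact can easily sit below what a group of 25-35 can certify statistically.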

We have a result – the “most analytic” group continued to improve symptomatically post treatment. We have a likely causal agent – the internalization of the analyzing function. And we have a sophisticated design to test the relationship between the two. Insofar as internalization is an important element of the improvement in functioning of our patients, it is one strand in a complex tapestry, and it is likely that many things contribute even to the post-treatment improvement. It may be that we have elevated the influence of one likely factor (one that does appear, from other studies and from the “promise” of this one, to have some validity), and it may be that there are other sustaining factors that are not as salient to those of us who have had a training analysis. Another point of view is that it is in some sense surprising that we are able to divine at all – through clinical intuition guided by and guiding theory, or through research techniques – the manifold, inter-related slender threads that lead to good treatment outcomes, as well as those elements that are less helpful – indeed toxic – in the treatment process.

If you are interested in responding to this post, please do so below. If you have read a research article recently that you think would be of interest to an audience of clinicians and would like to write a 1000-1500 word summary of it, please send it as an email attachment to Karl Stukenberg at stukenb@xavier.edu. Include the words “Clinicians Reading Research” in the subject line.