In a 6-month follow-up study of 119 hospitalized adolescents, Yen and colleagues found that many traditional risk factors, including psychiatric diagnoses and past attempts, failed to prospectively predict suicidal behavior. Other factors, which the authors called "cross-cutting" (because they cut across many disorders), were more potent.

These findings have direct clinical implications and indirect prevention implications. From a clinical perspective, clinicians must be cautious in applying population-generated risk factors to clinical risk formulation. Clinical training in risk formulation should emphasize dynamic factors over diagnoses and history, and should involve thoughtful synthesis of a wide range of factors and individual circumstances. From a broader prevention perspective, the study provides additional building blocks in the argument for focusing on cross-cutting constructs such as emotion self-regulation in suicide prevention (see our recent population-based study identifying emotion self-regulation as a critical construct for youth suicide prevention). This emphasis on "cross-cutting" constructs has interesting intersections with NIMH's effort, represented by the Research Domain Criteria (RDoC), to shift research away from DSM diagnostic categories toward dimensional assessment of more fundamental and biologically verifiable constructs. These findings are also congruent with (though they do not directly support) strategies that reach further "upstream" in adolescent development to build core "cross-cutting" protective factors.

The small island of Nantucket, MA has seen three teen suicides in a short period of time, according to the New York Times. Very sad. Statistically, three suicides in a high school of 400 represents a meaningful cluster, and a possible contagion effect. Whether or not contagion is at work in Nantucket (it is impossible to know for sure, and the article suggests some disagreement in this case), the key thing for clinicians to know is that vulnerability to contagion has been documented in adolescents. Clinicians working with at-risk adolescents should consider reassessing their clients' risk for suicide when news of a public or peer suicide death becomes public.

For clinicians assessing and managing suicide risk, the fact that phones installed on a bridge have been used by individuals who went on to live is testimony to just how much ambivalence remains, even in people who have gone very far toward resolved plans and preparatory behavior.

Understanding that ambivalence is key to clinical work with suicidal individuals. When I train clinicians about assessment and response to suicide risk, I often get questions about whether it is useful or even right to assess suicide risk. I'm also asked, "What about people who have good reasons for killing themselves or who rationally decide they want to end their lives?" My answer goes something like this:

Thankfully, for health care professionals there is no practical dilemma here. If you find out about a person's suicidal thinking, then there is some degree of ambivalence. Everyone knows that psychotherapy and primary care are about health...that is, life. We're not about suicide and death. So if someone is coming to us, at least some small part of them is aligned in that direction. And it's our job to understand that ambivalence and work toward health and life until such time as the ambivalence is resolved in one direction or the other.

That line of thinking can apply to any person, really--not just healthcare professionals. Except in some rare circumstance that you'd have to work hard to construct, the fact that someone is still alive and letting someone know by words or action about suicidality reflects ambivalence.

The fact that people read signs and use phones on bridges also discourages a fatalistic stance on the part of clinicians. We can't simplify the matter by saying "If someone really wants to kill themselves they will, so what's the point of screening or assessing?" That question misses the point. We assess because people don't want to kill themselves. Some just don't see options for life and, under the wrong circumstances (like under the influence of substances or after a particularly deep emotional wound), they overcome their ambivalence just long enough to do the unthinkable. We need to have deep compassion for how much pain that must involve, and nurture the life-embracing side of the ambivalence until the person can see options again.

Dr. Lang and her colleagues learned a great deal from their pilot. As someone developing clinician training in risk assessment, I was especially interested in what they discovered about the range of clinician reactions to the idea of screening for suicide risk:

Many clinicians shared the popular myth that asking about suicide might make it more likely.

There was more resistance to the screening than the implementation team anticipated.

Reactions to the program, both positive and negative, were strong.

There were many other lessons, and I look forward to reading the process papers that will come out of the experience.

Reflections: Many of the experiences Dr. Lang shared point to how difficult and loaded the topic of suicide is for clinicians--even the most experienced ones. As a trainer, I see this as highlighting the need to find predictable and replicable ways to create safe learning environments, where clinicians feel understood and where their current practice patterns are honored. This can be hard to do when you are suggesting a change in practice. Dr. Lang and her colleagues made huge efforts to support clinicians, yet still encountered challenges.

Making clinicians feel safe enough in a training that they'll consider changing practice patterns involves the tone and stance, as well as the content of a training. In reviewing training curricula, I've discovered that tone, stance, and conceptual starting points are often not explicitly developed. Contrast this with the way people develop treatment interventions and manuals. For example, in the first chapters of Marsha Linehan's highly successful intervention manual, Linehan lays out an entire dialectical worldview that undergirds her intervention program. That kind of elaboration is rare in developing educational interventions. A recent conversation I had with DeQuincy Lezine, Ph.D. underscored this point for me--he advocated for using "logic models" to examine the assumptions and mechanisms behind any community or training program.

Here are a few ideas about tone, stance, and starting points that I'd like to develop further:

Drawing on Marsha Linehan's work again, clinician training in suicide assessment requires a balance in the "dialectic" between unconditional acceptance and the push for change. Why is this balance so important (and difficult) when it comes to suicide? Perhaps Linehan's concept of "invalidating environments" applies here more than we'd like, as well. Many of the administrative and legal systems in which we work are invalidating and blaming! Furthermore, one's work vis-à-vis suicide is so personal and fundamental that the suggestion of a need for improvement can be hard to take in.

Another way of considering the stance and tone needed for effective clinical training in this area is from a stages of change (transtheoretical) perspective: training needs to have a motivational interviewing stance. The trainer must have an awareness of the ambivalence toward change, and present change tentatively and in a way that draws upon the internal motivation clinicians have to improve their practice in this regard. In my trainings, I've found that one way to do that is to talk about the unspoken dissatisfaction I carried for years about the experience of working with suicidal patients--I share with participants that I always found the experience unrewarding and that I had a vague pre-verbal sense that the way I approached suicide was probably not that helpful to the individuals I worked with. In addition to being genuine, that kind of stance may stoke clinician motivation in a way that the public health arguments do not.

In addition to these considerations regarding the pedagogical stance, there are also content emphases that might reduce clinician resistance. As I have noted in almost every post on teaching and training, I feel training in this area should begin with what and how clinicians think, and that many efforts in clinician training have the wrong starting point--i.e., they begin with the question "what do experts say clinicians should know about suicide or suicide risk assessment?" rather than "what do clinicians want to know?" In my experience, clinicians are most hungry for how to document their work and decisions so that they can feel less anxious and can focus on doing what is best for the patient. If that's the case (and this remains an empirical question), then documentation should be a starting point...through which other content (including what experts would say clinicians should know) can be delivered.

Thanks again to Dr. Lang for an informative, stimulating, and enjoyable conversation. She is doing good and interesting work with the State. I look forward to reading the papers that come out of her most recent project, and about the next stages of its development.

I've gotten a few questions from colleagues and trainees lately about using the SADPERSONS screen. Most recently, a colleague pointed me to an article in Psychiatric Times titled, "APA: Simple Screen Improves Suicide Risk Assessment." The topic seems worthy of a post to think through both the appeal and risks of the SADPERSONS scale.

For those who are not aware of SAD PERSONS, it is a 10-item scale that purports to screen for suicide risk. An individual is given one point for each item for which he or she screens positive: Sex (male), Age (under 19 or over 45), Depression, Previous attempt, Ethanol abuse, Rational thinking loss, Social supports lacking, Organized plan, No spouse, and Sickness.

The word "simple" in the headline of the Psychiatric Times article linked above captures what makes the tool sound appealing, especially for the thousands of health care systems that need a quick way to respond to JCAHO patient safety goals 15 and 15A: "The organization identifies safety risk inherent in its client populations" and "The organization identifies clients at risk for suicide" (see this .pdf for explication of these goals).

From one perspective, there is nothing wrong with using an acronym like this. It can remind clinicians (assuming they can remember what all the letters stand for!) of some of the risk factors and warning signs of suicide. Who can argue with that? However, from a training and clinical perspective, there are a few problems with this approach, especially when the screen is put forward as a scored scale. Let me summarize a few of these. Note that my thinking here is strongly influenced by points articulated by my senior (and very brilliant) colleagues in email exchanges we have had about this. I don't claim originality here, just summary:

The "scale" assigns risk level on the basis of a point system: a score of 1 or 2 points indicates low risk, 3-5 points indicates moderate risk, and 7-10 indicates high risk. This approach rests on the assumption that these factors are equally weighted. A separated, 46-year-old male with diabetes and no depression would have a higher risk level (score=4, moderate) than a 40-year-old married woman with chronic depression and current hopelessness who was just released from a psychiatric hospital after a near-hanging (score=2, low risk).
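The equal-weighting problem is easy to make concrete in code. Below is an illustrative sketch (not a clinical tool): the item labels are my own paraphrases of the mnemonic, and the band cutoffs are those quoted above; scores of 0 and 6, which the quoted bands do not address, are folded into the nearest band here.

```python
# Toy implementation of the SAD PERSONS point system as described in the
# post: one point per positive item, with every factor weighted equally.
ITEMS = [
    "sex_male",                 # S
    "age_under_19_or_over_45",  # A
    "depression",               # D
    "previous_attempt",         # P
    "ethanol_abuse",            # E
    "rational_thinking_loss",   # R
    "social_supports_lacking",  # S
    "organized_plan",           # O
    "no_spouse",                # N
    "sickness",                 # S
]

def sad_persons_score(positives):
    """One point per positive item -- no item counts more than any other."""
    return sum(1 for item in ITEMS if item in positives)

def risk_band(score):
    # Cutoffs as quoted: 1-2 low, 3-5 moderate, 7-10 high.
    # (0 and 6 are unaddressed in the quoted bands; nearest band used.)
    if score <= 2:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

# Separated 46-year-old man with diabetes, no depression:
man = {"sex_male", "age_under_19_or_over_45", "no_spouse", "sickness"}
# 40-year-old married woman, chronic depression, recent near-lethal attempt
# (note: her current hopelessness earns no points -- it isn't an item):
woman = {"depression", "previous_attempt"}

print(sad_persons_score(man), risk_band(sad_persons_score(man)))      # 4 moderate
print(sad_persons_score(woman), risk_band(sad_persons_score(woman)))  # 2 low
```

The code makes the paradox in the paragraph above mechanical: the recently near-lethal patient scores lower than the medically ill divorced man precisely because every item carries the same single point.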

Having a risk "score" creates conditions for clinicians to rely on a number instead of developing an informed clinical formulation of risk.

The suggestion that risk for suicide can be boiled down to a single number--even for screening purposes--presents a misleading picture of the complexity of the phenomenon and how to think about it as a clinician.

The evidence the linked article gathered does not correspond with the alluring headline, "Simple Screen Improves Suicide Risk Assessment." The evidence reported by those who conducted the study was that, after using the computerized screen, the nurses tested showed more knowledge about risk factors for suicide. Of course, knowledge about risk factors is a long way from demonstrating improved assessment. Obviously, the physicians who reported their study at the APA meeting did not write the headline. The semantic overreach of the headline speaks to the understandable desire to find easy ways of doing hard things.

Finally, from a training perspective, I find acronyms longer than 3 letters almost impossible to remember! SAD PERSONS is particularly clumsy and, IMHO, a bit forced. "O" stands for "Organized plan or serious attempt," whereas I would probably make plan a "P" if I were trying to remember it--but of course that's already taken by "P" for "Previous." That often ends up being the problem with trying to make these things fit into an acronym. In a way, this gets back to the theme I've been harping on lately in my posts about teaching and training: the need for a basic-science base about how clinicians learn, remember, and use the principles or practices we teach. I'd imagine an expert in human memory could graph the inverse relationship between recall rate and number of letters in an acronym--add to that the need to recall letters that signify words or concepts with high emotional impact.

In summary, while SAD PERSONS may be helpful to some people as a tool for remembering risk factors, it has serious limitations as a risk assessment "scale" and probably as a mnemonic.