Are Hospital Readmissions Really a Bad Thing?

Consider four psychiatric patients, all discharged from an inpatient unit on the same day following stabilization of an acute psychotic episode. A week later, the following events take place:

Arnold’s symptoms return and in despair he commits suicide.

Barbara’s symptoms return and she goes on a cocaine binge, fueling her aggressive tendencies to the point where she punches a cop, landing herself in jail.

Carlos’s symptoms return and he becomes convinced that his apartment is full of listening devices. He moves to living under a bridge far from town.

Derrick’s symptoms return, and, having learned about his illness in the hospital, he recognizes the problem and returns to his site of care. He is admitted for 24 hours, re-stabilizes, and is then maintained as an outpatient in the community.

So, why would some powerful players in our health care system consider Derrick to have had the worst outcome? Because he, and not the others, was re-admitted to care within 30 days of discharge.

This situation is not unique to psychiatry. Last week, I went to a meeting of cardiologists who are grappling with the same reality. Medicare ratings, consumer groups, and an increasing number of insurers are pressuring cardiologists to have shorter lengths of stay and fewer rapid (i.e., within 30 days) readmissions. The desired outcome has become a measure of health care utilization rather than health. As this tail increasingly wags the dog, hospitals face some perverse incentives. If you aggressively monitor your patients after discharge, you are more likely to catch a symptom that warrants re-admission (presuming you have this funny idea in your head that the health care system should try to save people’s lives). Likewise, if the hospital is in a location that provides easy access and its admission procedure poses minimal bureaucratic barriers — normally things we would cherish — re-admission is more likely, and the hospital’s rating and level of reimbursement may go down.

If you follow the logic of the anti-readmission crowd to its conclusion, you arrive at the position that the best hospitals are those that close and those that kill every patient on the surgical table, since both types of facility have a re-admission rate of zero.

Well, for heart conditions, I would say 1-year survival is a better proxy than 30-day readmission. For mental health issues (what Thomas Szasz would rightly call “problems in living”), the proxy is less clear, but readmission is such a bad proxy for actual assistance that it calls to mind the drunk looking for his keys by the lamppost. A badly chosen measurement is worse than no measurement at all.

This echoes the debate over high-stakes testing in schools. Of course measurement is essential to management. And of course a bad measurement system with perverse incentives can be worse than no measurement system at all. If we can come up with something to measure that’s better than nothing, fine, let’s do so. But the people who point out that a given system is worse than nothing do not have the burden of inventing a better system; the burden is on the bean-counters to find some beans worth counting.

David’s implicit argument is that we need to measure something (lest the Measurement God be offended), so the incumbent measurement system must stand unless and until someone invents something better. This embodies the fallacy in Sir Humphrey’s Politician’s Syllogism:

– We must do something.
– This is something.
– Therefore, we must do this.

My colleague Ken Weingardt and I found no relationship between inpatient readmission and health outcomes in a sample of over 3,000 psychiatric patients, across a range of readmission measures and a range of outcome measures; readmission was an independent phenomenon, not a proxy.

Agree firmly on the parallel to K-12 education. To clarify, my position is not “to let the measurement system stand unless and until someone invents something better.”

Instead it has two elements:

[1] If your position is that the measurement system is “worse than no measurement system at all,” let’s see some actual evidence of that. I agree that it “can” be worse, but an abstract example with four hypothetical patients only convinces me that it can be worse, not that it actually is worse. If this sounds like measuring the measurement system, that is exactly what I am advocating.

[2] Let’s focus scarce resources on making the measurement system better, since “measurement is essential to management,” rather than arguing about whether or not we should have a measurement system. Fundamentally, I see us making the perfect the enemy of the good, which tends to lead not just to mistakes but mistakes that are difficult to correct.

The idea that people would be happy with the long term consequences of abandoning measurement is a delusion that has done more damage to public education than Prop 13.

That was an excellent paper: it is clear, concise, and contains a number of insights.

It did leave me with the sense that there is a much broader issue than readmission or measurement: we don’t really know what drives outcomes.

I realize that I know nil about the field, so I apologize in advance if I’m overextending with that last statement, but my hunch is that if we knew what drove outcomes we would all agree to measure it. And arguing about measurement seems like a mis-allocation of resources relative to understanding outcome drivers.

David: Thanks for this thoughtful reply. I don’t find your statement over-extends much at all, given what we (don’t) know. It is extraordinarily hard to predict post-treatment outcomes based on treatment process/health care utilization measures. A good example is Werner and Bradlow’s JAMA paper on acute MI, heart failure, and pneumonia, which you can see here

What it shows is that even a hospital that made a heroic improvement in process performance measure scores (say, went from the bottom 25th percentile up a full quartile, or even two quartiles, in a year) would have only a very modest improvement in mortality rates.
The parallel in care for addiction is that we have spent tens of millions on improving process measure performance (and I am chagrined to say I oversaw some of that work), and we still lack compelling evidence that these improvements predicted better outcomes after treatment.

If we cannot explain the variance in outcomes through available variables, it seems like there are either [a] subtle variables that are not captured but have surprisingly high explanatory power, or [b] that certain studied variables work together in combinations that defy straightforward statistics like single-variable regressions.
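Possibility [b] is easy to illustrate with a small simulation (purely a toy sketch with invented variables, not based on any real clinical dataset): an outcome driven entirely by the interaction of two variables can show essentially zero correlation with each variable taken on its own, so a single-variable regression would find nothing.

```python
import numpy as np

# Toy illustration: an outcome that depends only on the *combination*
# of two factors looks unrelated to either factor examined alone.
rng = np.random.default_rng(0)
n = 10_000
x1 = rng.choice([-1.0, 1.0], size=n)  # e.g., a care-process factor (hypothetical)
x2 = rng.choice([-1.0, 1.0], size=n)  # e.g., a social-environment factor (hypothetical)
outcome = x1 * x2                     # outcome determined purely by the interaction

r1 = np.corrcoef(x1, outcome)[0, 1]        # single-variable view: near zero
r2 = np.corrcoef(x2, outcome)[0, 1]        # single-variable view: near zero
r12 = np.corrcoef(x1 * x2, outcome)[0, 1]  # interaction view: perfect

print(r1, r2, r12)
```

In this contrived setup, each variable alone explains essentially none of the variance while the interaction term explains all of it, which is the sense in which combinations can defy straightforward single-variable statistics.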

Perhaps one possible vector forward is bifurcation of the research into two components: [1] data gathering and [2] data analysis. It seems there is massive silo-ification of outcomes research data, with the consequence that researchers can only analyze the data that they gather, and once they gather some data they cannot easily get the benefit of other researchers analyzing their datasets in potentially novel ways.

Does a data repository like this exist?

And as a thought experiment, does anyone see significant obstacles to someone, or a group of people, of sufficient motivation, perseverance, and technical ability building it, other than the cold-start problem of getting people to contribute to it, and use it, in its infancy?

I bridled at Keith’s post, but was reassured by Keith’s (and others’) comments.

The post was written in the personalized and particularized rhetoric of journalism or politics. But the readers of this blog are mostly desiccated policy types, and we’ve grown accustomed to bad policies being defended by misleading anecdotes. Some of the earlier comments raised this possibility, and the later comments effectively buried it.

So–thanks to the comments–I’ve learned something useful about health policy. Lesson: if your audience consists of nerds, don’t personalize. They’ll think you’re picking their pockets, even if you have something useful to say.

David: There have long been calls to put data online, and there has been some limited progress, for example with NIH-funded studies. Part of the challenge within health care systems is that for private health care organizations the data are proprietary; they don’t want to share, and I don’t know of any courts having made them do so.

I think part of the challenge in terms of incentivizing quality in health care is that for every minute that passes after the patient leaves the hospital, the relationship between quality of care and outcome weakens. If someone is, for example, going back to a stressful family, or is socially isolated, or is homeless, those circumstances tend to grab the outcome variance post-hospital whether the care was good or bad. Health care professionals thus often fear — not entirely without reason — that they are being held responsible for things they do not control.

Some readmissions are because of inadequate care the first time around and some aren’t. No one disagrees with this.

There is going to be some baseline level of readmissions that just plain happens, and no one claims the baseline is zero.

What is claimed is that differences in readmission rate reflect in large part relative adequacy of care, not that readmissions should be zero, or even that there are no other factors affecting differences in readmission rates.

I am with you on the data being proprietary, and on the idea that there are social factors exogenous to the quality of care that drive outcomes.

Perhaps if social factors are the primary empirical drivers of outcomes in many medical situations then we, as a society, are substantially over-allocating resources to providing high quality care. This would definitely be something both health care professionals and private health care organizations would have non-trivial economic incentives to obfuscate.

Dave: Although there have been some notable successes in quality improvement (e.g., surgical safety, infection prevention), in my view disappointment has been the more common experience, particularly if the standard set is improvement in health outcomes after treatment. I have occasionally seen, I am sorry to say, some people not being straightforward about this situation (including trying to suppress results that show failures of health care quality improvement efforts), not because of venality but because they have invested so much effort that they can’t accept in their own minds that it may not work.

Another possibility is that the correlation between health care quality and outcome is low because the relationship is non-linear: really bad care (e.g., a hospital swimming in MRSA due to grossly inadequate infection control) produces bad outcomes, but once you reach a threshold of adequacy, further improvements in quality don’t matter much for outcome prediction. If this were so (and I am only speculating), the health policy implication would be to take a “bad apples” approach rather than a “good to great” approach to quality improvement.
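The threshold speculation above can be sketched numerically (a toy simulation with made-up numbers, nothing more): if outcome risk rises only below an adequacy threshold and everything else is noise, the overall quality–outcome correlation comes out weak, and among the adequate hospitals it vanishes entirely.

```python
import numpy as np

# Toy simulation of the "bad apples" hypothesis; all quantities are invented.
rng = np.random.default_rng(1)
n = 10_000
quality = rng.uniform(0.0, 1.0, n)  # hypothetical hospital quality score
threshold = 0.2                     # below this, care is genuinely inadequate

# Risk of a bad outcome rises only below the threshold; the noise term
# stands in for everything outside the hospital's control
# (social factors, case mix, plain luck).
risk = np.maximum(0.0, threshold - quality) + rng.normal(0.0, 0.2, n)

overall = np.corrcoef(quality, risk)[0, 1]  # weakly negative overall
adequate = quality >= threshold
among_adequate = np.corrcoef(quality[adequate], risk[adequate])[0, 1]  # near zero

print(overall, among_adequate)
```

Under these assumptions, pooling all hospitals yields only a modest quality–outcome correlation even though quality below the threshold matters enormously, which is why a “bad apples” screen would detect what a linear “good to great” analysis would miss.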

The request (demand) for both shorter stays and fewer readmissions poses a problem, especially for surgical patients. People are increasingly having the experience of being sent home before they are stable (sorry, in a hurry, no figures to cite) and being readmitted, when a day or two more in hospital would literally have prevented readmission.

“There is going to be some baseline level of readmissions that just plain happens, and no one claims the baseline is zero.

What is claimed is that differences in readmission rate reflect in large part relative adequacy of care, not that readmissions should be zero, or even that there are no other factors affecting differences in readmission rates.”

I won’t claim to be an expert in psychiatry, so I can’t say whether readmission rate is a valid metric here. I’m not aware that the government is considering any penalties for high readmission rates among psychiatric patients. To my knowledge, the focus (so far) is on heart failure, heart attack, pneumonia, and COPD – all conditions with numerous studies showing that better care coordination, discharge process changes, and post-discharge follow-up can significantly reduce readmission rates.

In my mind the question is: could similar changes that have proved effective with other conditions also work for psychiatric patients? The readmission rate can never be zero, but if by making adjustments some UNNECESSARY readmissions are prevented, this would be good for both the patient and the system, no?