The blog of Ashish Jha — physician, health policy researcher, and advocate for the notion that an ounce of data is worth a thousand pounds of opinion.

Profits, Quality, and U.S. Hospitals

The recent articles in the New York Times about the Hospital Corporation of America (HCA) have once again raised important questions about the role of for-profit hospitals in the U.S. healthcare system. For-profits make up about 20% of all hospitals and many of them are part of large chains (such as HCA). Critics of for-profit hospitals have argued that these institutions sacrifice good patient care in their search for better financial returns. Supporters argue that there is little evidence that their behavior differs substantially from non-profit institutions or that their care is meaningfully worse.

To me, this is essentially an empirical question. Yet, as I read through the articles, I was struck by the dearth of data provided on the quality of care at these hospitals. Based on the comments that followed the stories, it was clear that many readers came away thinking that these hospitals had sacrificed quality in order to maximize profits. Here, I thought an ounce of evidence might be helpful.

Measuring quality:

There is no perfect way to measure the quality of a hospital. However, the science of quality measurement has made huge progress over the past decade. There is increasing consensus around a set of metrics, many of which are now publicly reported by the government and are even part of pay-for-performance schemes. While one can criticize every one of these metrics as imperfect, taken together, they paint a relatively good, broad picture of the quality of care in an institution. We focused on five metrics with widespread acceptance:

Patient experience (patients' ratings of their hospital care, drawn from standardized surveys)

Process quality (how often the hospital delivered care that follows clinical guidelines)

Mortality rates (proportion of people who die within 30 days of hospitalization, taking into account the “sickness” of the patient)

Readmission rates (proportion of people who are readmitted within 30 days of discharge, taking into account the “sickness” of the patient)

Hospital Safety Score (a measure of how effective a hospital likely is at preventing medically-induced harm to patients)

An important caveat: The NY Times article highlighted terrible, unethical practices by some physicians at HCA hospitals who appeared to place cardiac stents when there was no clinical indication. We don’t have the data to examine whether this practice occurs more often at HCA hospitals than at other institutions. Therefore, I’ve decided to focus more broadly on hospital quality. Most of the metrics above have been approved by the National Quality Forum, are widely regarded by “experts” as being good, and are used by Medicare to judge and pay for quality.

How we analyzed the data:

We examined all U.S. hospitals in four groups: privately-owned non-profit hospitals, government-owned public hospitals, for-profit hospitals that were not part of the HCA chain, and HCA hospitals. In our analysis we “adjusted” for characteristics that are beyond the hospital’s control, such as size, teaching status, urban versus rural location, and region of the country. Adjusting is important: imagine that all the for-profit hospitals were large, and that large hospitals generally had better quality. Without adjustment, we’d conclude that for-profit hospitals were better and therefore that we should encourage more of them. With adjustment, we hold size differences constant and examine the actual relationship between quality and the profit status of the hospital.
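For readers who like to see the mechanics, the size example above can be sketched as a regression. This is a toy illustration with synthetic data and made-up effect sizes, not our actual model: here ownership truly has no effect on quality, yet an unadjusted comparison makes for-profits look better simply because they are larger.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic hospitals: ownership (0 = non-profit, 1 = for-profit) and size.
# By construction, for-profits tend to be larger, larger hospitals score
# better, and ownership itself has no effect on quality.
for_profit = rng.integers(0, 2, n)
size = rng.normal(200, 50, n) + 80 * for_profit          # beds
quality = 50 + 0.05 * size + rng.normal(0, 5, n)         # quality score

# Unadjusted comparison: for-profits look better purely because of size.
unadjusted_gap = quality[for_profit == 1].mean() - quality[for_profit == 0].mean()

# Adjusted comparison: regress quality on ownership while holding size constant.
X = np.column_stack([np.ones(n), for_profit, size])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
adjusted_gap = beta[1]   # ownership effect with size held constant

print(f"unadjusted gap: {unadjusted_gap:.2f}")   # clearly positive
print(f"adjusted gap:   {adjusted_gap:.2f}")     # near zero
```

Once size enters the regression, the apparent for-profit advantage vanishes, which is exactly what "holding size differences constant" means.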

What we found:

In the table below, we use “non-profit” hospitals as the reference group because it’s the largest group of hospitals. All the scores that are statistically different (p < 0.05) are highlighted in red if they are significantly worse or in green if they are significantly better.
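To make the p < 0.05 threshold concrete, here is a simplified sketch of how one might flag a difference between two groups as significant. The scores are hypothetical and this uses a basic large-sample z-test, not the exact methods behind our table:

```python
import math
from statistics import mean, stdev

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (large-sample z-test)."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical quality scores for two hospital groups.
nonprofit = [72, 75, 71, 74, 73, 76, 72, 75, 74, 73]
public    = [68, 70, 67, 71, 69, 66, 70, 68, 69, 67]

p = two_sample_p(nonprofit, public)
print(f"p = {p:.4f}")   # below 0.05, so this gap would be highlighted
```

A score only gets colored in the table when the gap is too large to plausibly be chance, which is what the 0.05 cutoff operationalizes.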

Interpretation

The best part of looking at data is that you get to draw your own conclusions. Here are mine. Public hospitals are struggling on nearly every metric. For-profit hospitals outside of the HCA chain are a mixed bag – they do worse on patient experience (as we’ve found before), better on process measures, and somewhat worse on mortality and readmission rates. They are about average on the Leapfrog safety score.

However, HCA hospitals look pretty good. They tend to have good patient experience scores, really excellent process quality (adherence to guidelines) and are average or above average on mortality and readmissions (pneumonia mortality does appear to be high, though not statistically significant). They do very well on the Leapfrog safety score* (nearly half got an “A”).

My takeaway is that although which hospital you go to has a profound impact on whether you live or die, whether the hospital is “for-profit” or “not for profit” has very little to do with it. What really matters is leadership, focus on quality, and a dedication to improvement. That appears to happen equally well (or badly, depending on your perspective) in both for-profit and non-profit hospitals.

So, when it comes to quality, it’s time to stop thinking about it as an issue of “for-profit versus non-profit” hospitals. Instead, it’s time to start talking about the large number of relatively poor-performing hospitals where patients are being hurt or killed unnecessarily. Those hospitals come in all sizes, shapes, and yes, ownership structures, and we have to figure out how to make them better.

Finally, these analyses were run by Sidney T. Le, a terrific young analyst in our group. You should follow him on Twitter (@sidtle), although his love of Stanford sports can be a challenge. Consider yourself warned.

*The Leapfrog safety score was developed by a group of experts (full disclosure: I was on that panel – but don’t worry, there were many people much smarter than me on the panel).

11 thoughts on “Profits, Quality, and U.S. Hospitals”

Thank you for your thorough analysis. As you point out in your caveat, the point is not necessarily that for-profit hospitals have worse quality than non-profit hospitals (at least not from my reading), but rather that they may be engaged in practices that are not optimal for a patient’s health. If this is the problem, none of the metrics you mentioned will necessarily check this: patients may give the hospital a good score due to asymmetric information in receiving the diagnosis, and while there might be risk downstream for mortality, it’s not clear it would be captured in the short-term. Thus, while I agree with your conclusion regarding the “quality” of for-profit hospitals, I would like to hear how these metrics could be improved to penalize hospitals that engage in practices such as this.

Erik — this is a terrific point. The measures of quality are incomplete and obviously, hospitals can engage in behavior that is harmful to patients but would not be picked up by these measures.

What’s not clear to me is whether those kinds of practices, which are unfortunately more common than they should be (how many patients get unnecessary cardiac stents?), are more common at HCA or other for-profit hospitals. I have seen no data to suggest that they are.

The New York Times article also accuses HCA hospitals of aggressively screening incoming patients at the ER. It seems to me that this could lead to selection bias if non-HCA hospitals have less control over their patient pool. The more I think about it, the less certain I am about which direction that would bias your results, but in any case I’m curious to know how you accounted for that in your analysis.

“Risk-adjusted” means that the measure calculations take into account how sick patients were when they went in for their initial hospital stay. As a result, hospitals that usually take care of sicker patients won’t have a worse rate just because their patients were sicker when they arrived at the hospital. Risk adjustment helps make comparisons fair and meaningful.
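One common way to do this is indirect standardization: compare each hospital's observed deaths to the deaths expected given its patient mix, then scale by a national rate. The sketch below uses entirely made-up numbers and a deliberately simple patient-risk setup, just to show the arithmetic:

```python
# Indirect standardization: a common approach to risk-adjusting mortality.
# All numbers below are hypothetical, for illustration only.

# Each hospital's patients carry an expected death risk based on how sick
# they are (in practice, from a model fit on national data).
hospital_a = {"observed_deaths": 30, "expected_risks": [0.02] * 800}  # healthier mix
hospital_b = {"observed_deaths": 30, "expected_risks": [0.05] * 400}  # sicker mix

NATIONAL_RATE = 0.03  # hypothetical national average 30-day mortality

def risk_adjusted_rate(h):
    observed = h["observed_deaths"] / len(h["expected_risks"])
    expected = sum(h["expected_risks"]) / len(h["expected_risks"])
    # Observed-to-expected ratio, scaled to the national rate.
    return (observed / expected) * NATIONAL_RATE

# Hospital A's raw rate (30/800) is lower than B's (30/400), but A's
# patients were healthier, so A's risk-adjusted rate comes out worse.
print(f"A: {risk_adjusted_rate(hospital_a):.3f}")   # 0.056
print(f"B: {risk_adjusted_rate(hospital_b):.3f}")   # 0.045
```

The point of the example: a hospital with a lower raw death rate can still look worse after adjustment, because its patients were expected to do better than they did.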

Great article Dr. Jha! One question: is there any way of knowing how well risk-adjustment reflects the reality of different patient mixes? Does it account for socioeconomic status and insurance type in the adjustment?
Thanks!

Hugo — thanks for your comment. While we try to use standard techniques to adjust for differences in patient population, they don’t do a very good job of accounting for SES. We could re-run the analyses adjusting for SES — but I suspect it might make many of the for-profit hospitals (especially those in the South) look better, since many of these institutions care for minority and poor patients. It’s a great question and I appreciate your raising it.

It’s amazing when you start looking at the empirical evidence without an agenda. Perhaps a class in the scientific method should be a required course for journalism students, especially if they plan on working at the New York Times.

As Dr. Gawande implies in his recent New Yorker article (Cheesecake Factory), there may be some degree of economy (or quality) of scale at play here. The significant institutional resources that HCA can bring to bear on coordinating care for their patients may be one important factor leading to the improved outcomes.

There is no shortage of evidence that volume is associated with better care. Large hospital systems can bring a lot more resources to quality problems, and indeed, that may be at play. It certainly doesn’t seem like a for-profit vs. non-profit issue to me.

As many have pointed out, it’s difficult to achieve change in the US health care system, even with incontrovertible evidence–at least on the clinical side. The kind of evidence you present in this very helpful post may meet with resistance. Of course, more study and on-going evaluation are needed. How do you move this particular ounce of evidence into the conversation about best practices, outcomes, value and access to care?

Noel — thank you for your comment and for a very good question. No easy answers, but for contentious issues like for-profit vs. non-profit hospitals, we first need to look at the evidence with an open mind. My goal is to get enough people like you and others to evaluate the evidence and make decisions for yourself. Fundamentally, your question as I see it is — how do we get evidence into the broader conversation? By every means possible — blogs, tweets, word-of-mouth, peer reviewed papers, etc. Open to other suggestions as well.