Last week, Marshall Allen and Olga Pierce, two journalists at ProPublica, published a surgeon report card detailing complication rates of 17,000 individual surgeons from across the nation. A product of many years of work, it benefited from the input of a large number of experts (as well as folks like me). The report card has received a lot of attention … and a lot of criticism. Why the attention? Because people want information about how to pick a good surgeon. Why the criticism? Because the report card has plenty of limitations.

As soon as the report was out, so were the scalpels. Smart people on Twitter and blogs took the ProPublica team to task for all sorts of reasonable and even necessary concerns. For example, it only covered Medicare beneficiaries, which means that for many surgeries, it missed a large chunk of patients. Worse, it failed to examine many surgeries altogether. But there was more.

The report card used readmissions as a marker of complications, which has important limitations. The best data suggest that while a large proportion of surgical readmissions are due to a complication, readmissions are also affected by other factors, such as how sick the patient was prior to surgery (the ProPublica team tried to account for this), his or her race, ethnicity, social supports, and even the education and poverty level of their community. I have written extensively about the problems of using readmissions after medical conditions as a quality measure. Surgical readmissions are clearly better but hardly perfect. The ProPublica team even narrowed the causes of readmissions using expert input to improve the measure, but even so, it's hardly ideal. ProPublica produced an imperfect report.

How to choose a surgeon

So what to do if you need a surgeon? Should you use the ProPublica report card? You might consider doing what I did when I needed a surgeon after a shoulder injury two years ago: ask colleagues. After getting input about lots of clinicians, I homed in on two orthopedists who specialized in shoulders. I then called surgeons who had operated with these guys and got their opinions. Both were good, I was told, but one was better. Yelp? I passed. Looking them up on the Massachusetts Registry of Medicine? Seriously? Never crossed my mind.

But what if, just by chance, you are not a physician? What if you are one of the 99.7% of Americans who didn’t go to medical school? What do you do? If your insurance covers a broad network and your primary care physician is diligent and knows a large number of surgeons, you may get referred to someone right for you. Or, you could rely on word of mouth, which means relying on a sample size of one or two.

So what do patients actually do? They cross their fingers, pray, and hope that the system will take care of them. How good is that system at taking care of them? It turns out, not as good as it should be. We know that mortality rates vary three-fold across hospitals. Even within the same hospital, some surgeons are terrific, while others? Not so much. Which is why I needed to work hard to find the right orthopedist. Physicians can figure out how to navigate the system. But what about everyone else?

I was on service recently and took care of a guy, Bobby Johnson (name changed, but a real guy), who was admitted yet again for an ongoing complication from his lung surgery. Because he was in the hospital with a recurrent infection, he had missed key events, including his daughter's wedding. He wondered if he would have done better with a different hospital or a different surgeon. I didn't know how to advise him.

And that’s where ProPublica comes into play. The journalists spent years on their effort, getting input from methodologists, surgeons, and policy experts. In the end, they produced a report with a lot of strengths, but no shortage of weaknesses. But despite the weaknesses, I never heard them question whether the endeavor was worth it at all. I’m glad they never did.

Because the choice wasn’t between building the perfect report card and building the one they did. The choice was between building their imperfect report card and leaving folks like Bobby with nothing. In that light, the report card looks pretty good. Maybe not for experts, but for Bobby.

A step towards intended consequences

Colleagues and friends that I admire, including the brilliant Lisa Rosenbaum, have written about the unintended consequences of report cards. And they are right. All report cards have unintended consequences. This report card will have unintended consequences. It might even make, in the words of a recent blog, “some Morbidity Hunters become Cherry Pickers” (a smart, witty, but quite critical piece on the ProPublica report card). But asking whether this report card will have unintended consequences isn't the right question. The right question is: will it leave Bobby better off? I think it will. Instead of choosing based on a sample size of one (his buddy who also had lung surgery), he might choose based on a sample size of 40 or 60 or 80. Not perfect. Large confidence intervals? Sure. Lots of noise? Yup. Inadequate risk-adjustment? Absolutely. But better than nothing? Yes. A lot better.

All of this gets at a bigger point raised by Paul Levy: is this really the best we can do? The answer, of course, is no. We can do much better, but we have chosen not to. We have this tool—it’s called the National Surgical Quality Improvement Program (NSQIP). It uses clinical data to carefully track complications across a large range of surgeries and it’s been around for about twenty years. Close to 600 hospitals use it (and about 3,000 hospitals choose not to). And no hospital that I’m aware of makes its NSQIP data publicly available in a way that is accessible and usable to patients. A few put summary data on Hospital Compare, but it’s inadequate for choosing a good surgeon. Why are the NSQIP data not collected routinely and made widely available? Because it’s hard to get hospitals to agree to mandatory data collection and public reporting. Obviously those with the power of the purse—Medicare, for instance—could make it happen. They haven’t.

Disruptive innovation, a phrase coined by Clay Christensen, is usually a new product that, to experts, looks inadequate. Because it is. These innovations are not, initially, as good as what the experts use (in this case, their network of surgeons). They initially dismiss the disrupter as being of poor quality. But disruptive innovation takes hold because, for a large chunk of consumers (i.e. patients looking for surgeons), the innovation is both affordable and better than the alternative. And once it takes hold, it starts to get better. And as it does, its unintended consequences will become dwarfed by its intended consequences: making the system better. That’s what ProPublica has produced. And that’s worth celebrating.


A few months ago, the Centers for Medicare and Medicaid Services (CMS) put out its latest year of data on the Hospital Readmissions Reduction Program (HRRP). As a quick refresher – HRRP is the program within the Affordable Care Act (ACA) that penalizes hospitals for higher than expected readmission rates. We are now three years into the program and I thought a quick summary of where we are might be in order.

I was initially quite unenthusiastic about the HRRP (primarily feeling like we had bigger fish to fry), but over time, have come to appreciate that as a utilization measure, it has value. Anecdotally, HRRP has gotten some hospitals to think more creatively, focusing greater attention on the discharge process and ensuring that as patients transition out of the hospital, key elements of their care are managed effectively. These institutions are thinking more carefully about what happens to their patients after they leave the hospital. That is undoubtedly a good thing. Of course, there are countervailing anecdotes as well – about pressure to avoid admitting a patient who comes to the ER within 30 days of being discharged, or admitting them to “observation” status, which does not count as a readmission. All in all, a few years into the program, the evidence seems to be that the program is working – readmissions in the Medicare fee-for-service program are down about 1.1 percentage points nationally. To the extent that the drop comes from better care, we should be pleased.

HRRP penalties began three years ago by focusing on three medical conditions: acute myocardial infarction, congestive heart failure, and pneumonia. Hospitals that had high rates of patients coming back to the hospital after discharge for these three conditions were eligible for penalties. And the penalties in the first year (fiscal year 2013) went disproportionately to safety-net hospitals and academic institutions (note that throughout this blog, when I refer to years of penalties, I mean the fiscal years of payments to which penalties are applied. Fiscal year 2013, the first year of HRRP penalties, refers to the period beginning October 1, 2012 and ending September 30, 2013). Why? Because we know that when it comes to readmissions after medical discharges such as these, major contributors are the severity of the underlying illness and the socioeconomic status of the patient. The readmissions measure tries to adjust for severity, but the risk-adjustment for this measure is not very good. And let's not even talk about SES. The evidence that SES matters for readmissions is overwhelming – and CMS has somehow become convinced that if a wayward hospital discriminates by providing lousy care to poor people, SES adjustment would somehow give them a pass. It wouldn't. As I've written before, SES adjustment, if done right, won't give hospitals credit for providing particularly bad care to poor folks. Instead, it'll just ensure that we don't penalize a hospital simply because it cares for more poor patients.

Surgical readmissions appear to be different. A few papers now have shown, quite convincingly, that the primary driver of surgical readmissions is complications. Hospitals that do a better job with the surgery and the post-operative care have fewer complications and therefore, fewer readmissions. Clinically, this makes sense. Therefore, surgical readmissions are a pretty reasonable proxy for surgical quality.

All of this gets us to year 3 of the HRRP. In year 3, CMS expanded the conditions for which hospitals were being penalized to include COPD as well as surgical readmissions, specifically knee and hip replacements. This is an important shift, because the addition of surgical readmissions should be helpful to good hospitals that provide high quality surgical care. Therefore, I would suspect that teaching hospitals, for instance, would do better now that the program also includes surgical readmissions than when the program did not. But, we don’t know.

So, with the release of year 3 data on readmission penalties by individual hospital, we were interested in answering three questions: First, how many hospitals have managed to sustain penalties across all three years? Second, which hospitals have been consistently penalized (all three years), and which have not? And finally, do the penalties appear to be targeting a different group of hospitals in year 3 (when CMS included surgical readmissions) than they did in year 1 (when CMS just focused on medical conditions)?

Our Approach

We began with the CMS data released in October 2014, which lists, for each eligible hospital, the penalties it received in each of the three years of the penalty program. We linked these data to several databases that have detailed information about hospital characteristics, including size, teaching status, Disproportionate Share Hospital (DSH) Index (our proxy for safety-net status), ownership, region of the country, etc. We ran both bivariate and multivariable models. We show bivariate results because, from a policy point of view, they are the most salient (i.e. who got the penalties versus who didn't).

Our Findings

Here’s what we found:

About 80% of eligible U.S. hospitals received a penalty for fiscal year 2015 and 57% of U.S. hospitals eligible for the penalties were penalized each of the three years. The penalties were not evenly distributed. While 41% of small hospitals received penalties in each of the three years, more than 70% of large hospitals did. There were large variations in likelihood of getting penalized every year based on region: 72% of hospitals in the Northeast versus 27% in the West. Teaching hospitals and safety-net hospitals were far more likely to be penalized consistently, as were the hospitals with the lowest financial margins (Table 1).

Consistent with our hypothesis, while penalties went up across the board for all hospitals, we found a shift in the relative level of penalties between 2013 (when the HRRP only included medical readmissions) and 2015 (when the program included both medical and surgical readmissions). This really comes out in the data on major teaching hospitals: in 2013, the average penalty for teaching hospitals was 0.38% (compared to 0.25% for minor teaching hospitals and 0.29% for non-teaching hospitals). By 2015, that gap was gone: the average penalty for teaching hospitals was 0.44% versus 0.54% for non-teaching hospitals. Teaching hospitals got relatively lower readmission penalties in 2015, presumably because of the addition of the surgical readmission measures, which tend to favor high-quality hospitals. In the same way, the gap in penalty level between safety-net hospitals and other institutions narrowed between 2013 and 2015 (Figure).

Figure: Average Medicare payment penalty for excessive readmissions in 2013 and 2015

Note that “Safety-net” refers to hospitals in the highest quartile of disproportionate share index, and “Low DSH” refers to hospitals in the lowest quartile of disproportionate share index.

Interpretation

Your interpretation of these results may differ from mine, but here's my take. Most hospitals got penalties in 2015, and a majority have been penalized all three years. Who is getting penalized seems to be shifting – away from a program that primarily targets teaching and safety-net hospitals towards one where the penalties are more broadly distributed, although the gap between safety-net and other hospitals remains sizeable. It is possible that this reflects teaching hospitals and safety-net hospitals improving more rapidly than others, but I suspect that the surgical readmissions, which benefit high-quality (i.e. low-mortality) hospitals, are balancing out the medical readmissions, which, at least for some conditions such as heart failure, tend to favor lower-quality (higher-mortality) hospitals. Safety-net hospitals are still getting bigger penalties, presumably because they care for more poor patients (who are more likely to come back to the hospital), but the gap has narrowed. This is good news. If we can move forward on actually adjusting the readmissions penalty for SES (I like the approach MedPAC has suggested) and continue to make headway on improving risk-adjustment for medical readmissions, we can then evaluate and penalize hospitals on how well they care for their patients. And that would be a very good thing indeed.


Now we’re giving star ratings to hospitals? Does anyone think this is a good idea? Actually, I do. Hospital rating schemes have cropped up all over the place, and sorting out what’s important and what isn’t is difficult and time consuming. The Centers for Medicare & Medicaid Services (CMS) runs the best known and most comprehensive hospital rating website, Hospital Compare. But, unlike most “rating” systems, Hospital Compare simply reports data on a large number of performance measures – from processes of care (did the patient get the antibiotics in time?) to outcomes (did the patient die?) to patient experience (was the patient treated with dignity and respect?). The measures they focus on are important, generally valid, and usually endorsed by the National Quality Forum. The one big problem with Hospital Compare? It isn’t particularly consumer friendly. With the large number of data points, it might take consumers hours to sort through all the information and figure out which hospitals are good and which ones are not on which set of measures.

To address this problem, CMS just released a new star rating system, initially focusing on patient experience measures. It takes a hospital’s scores on a series of validated patient experience measures and converts them into a single star rating (rating each hospital 1 star to 5 stars). I like it. Yes, it’s simplistic – but it is far more useful than the large number of individual measures that are hard to follow. There was no evidence that patients and consumers were using any of the data that were out there. I’m not sure that they will start using this one – but at least there’s a chance. And, with excellent coverage of this rating system from journalists like Jordan Rau of Kaiser Health News, the word is getting out to consumers.

Our analysis

To understand the rating system a little better, I asked our team’s chief analyst, Jie Zheng, to examine who did well and who did badly on the star ratings. We linked the hospital rating data to the American Hospital Association annual survey, which has data on the structural characteristics of hospitals. She then ran both bivariate and multivariable analyses looking at a set of hospital characteristics and whether they predict receiving 5 stars. Given that, for patients, the bivariate analyses are the most straightforward and useful, we only present those data here.

Our results

What did we find? We found that large, non-profit, teaching, safety-net hospitals located in the northeastern or western parts of the country were far less likely to be rated highly (i.e. receiving 5 stars) than small, for-profit, non-teaching, non-safety-net hospitals located in the South or Midwest. The differences were big. There were 213 small hospitals (those with fewer than 100 beds) that received a 5-star rating. Number of large hospitals with a 5 star rating? Zero. Similarly, there were 212 non-teaching hospitals that received a 5-star rating. The number of major teaching hospitals (those that are a part of the Council of Teaching Hospitals)? Just two – the branches of the Mayo Clinic located in Jacksonville and Phoenix. And safety net hospitals? Only 7 of the 800 hospitals (less than 1%) with the highest proportion of poor patients received a 5-star rating, while 106 of the 800 hospitals with the fewest poor patients did. That’s a 15-fold difference. Finally, another important predictor? Hospital margin – high margin hospitals were about 50% more likely to receive a 5-star rating than hospitals with the lowest financial margin.

Here are the data:

Interpretation

There are two important points worth considering in interpreting the results. First, these differences are sizeable. Huge, actually. In most studies, we are delighted to see 10% or 20% differences in structural characteristics between high and low performing hospitals. Because of the approach of the star ratings, especially with the use of cut-points, we are seeing differences as great as 1500% (on the safety-net status, for instance).

The second point is that this is only a problem if you think it’s a problem. The patient surveys, known as HCAHPS, are validated, useful measures of patient experience and important outcomes unto themselves. I like them. They also tend to correlate well with other measures of quality, such as process measures and patient outcomes. The star ratings nicely encapsulate which types of hospitals do well on patient experience, and which ones do less well. One could criticize the methodology for the cut-points that CMS used for determining how many stars to award for which scores. I don’t think this is a big issue. Any time you use cut-points, there will be organizations right on the bubble, and surely it is true that someone who just missed being a 5 star is similar to someone who just made it. But that’s the nature of cut-points – and it’s a small price to pay to make data more accessible to patients.
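The bubble effect of cut-points is easy to see in a short sketch. The thresholds below are invented for illustration only; CMS's actual cut-points differ and are derived from the score distribution, not fixed round numbers:

```python
# Hypothetical cut-points mapping a 0-100 patient-experience score to stars.
# These thresholds are invented for illustration; CMS's actual ones differ.
CUTPOINTS = [(90, 5), (80, 4), (70, 3), (60, 2)]

def stars(score):
    """Assign a star rating: the first threshold the score meets, else 1 star."""
    for threshold, rating in CUTPOINTS:
        if score >= threshold:
            return rating
    return 1

# Two nearly identical hospitals land on opposite sides of a cut-point:
just_missed = stars(89.9)  # 4 stars
just_made = stars(90.0)    # 5 stars
```

A 0.1-point difference in the underlying score becomes a full-star difference in the rating. That discontinuity is inherent to any cut-point scheme, which is exactly why hospitals "on the bubble" will look more different than they really are.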

Making sense of this and moving forward

CMS has signaled that they will be doing similar star ratings for other aspects of quality, such as hospital performance on patient safety. The validity of those ratings will be directly proportional to the validity of the underlying measures used. For patient experience, CMS is using the gold standard. And the goals of the star rating are simple: motivate hospitals to get better – and steer patients towards 5-star hospitals. After all, if you are sick, you want to go to a 5-star hospital. Some people will be disturbed by the fact that small, for-profit hospitals with high margins are getting the bulk of the 5 stars while large, major teaching hospitals with a lot of poor patients get almost none. It feels like a disconnect between what we think are good institutions and what the star ratings seem to be telling us. When I am sick, or if my family members need hospital care, I usually choose these large, non-profit academic medical centers. So the results will feel troubling to many. But this is not really a methodology problem. It may be that sicker, poor patients are less likely to rate their care highly. Or it may be that the hospitals that care for these patients are generally not as focused on patient-centered care. We don’t know. But what we do know is that if patients start really paying attention to the star ratings, they are likely to end up at small, for-profit, non-teaching hospitals. Whether that is a problem or not depends wholly on how you define what is a high quality hospital.


Of all the pressing challenges in the US health care system, lack of innovation in delivery may be the most important. Indeed, as we come upon the 50th anniversary of Medicare, a few facts seem apparent. What we do for patients—whether they have infectious diseases, heart disease, or cancer—has changed dramatically. Yet, how we do those things—the basic structure of our health care delivery system—has changed very little.


I’m sorry I haven’t had a chance to blog in a while – I took a new job as the Director of the Harvard Global Health Institute and it has completely consumed my life. I’ve decided it’s time to stop whining and start writing again, and I’m leading off with a piece about adjusting for socioeconomic status. It’s pretty controversial – and a topic where I have changed my mind. I used to be against it – but having spent some more time thinking about it, it’s the right thing to do under specific circumstances. This blog is about how I came to change my mind – and the data that got me there.

Changing my mind on SES Risk Adjustment

We recently had a readmission – a straightforward case, really. Mr. Jones, a 64-year-old homeless veteran, intermittently took his diabetes medications and would often run out. He had recently been discharged from our hospital (a VA hospital) after an admission for hyperglycemia. The discharging team had been meticulous in their care. At the time of discharge, they had simplified his medication regimen, called him at his shelter to check in a few days later, and set up a primary care appointment. They had done basically everything, short of finding Mr. Jones an apartment.

Ten days later, Mr. Jones was back — readmitted with a blood glucose of 600, severely dehydrated and in kidney failure. His medications had been stolen at the shelter, he reported, and he’d never made it to his primary care appointment. And then it was too late, and he was back in the hospital.

The following afternoon, I spoke with one of the best statisticians at Harvard, Alan Zaslavsky, about the case. “This is why we need to adjust quality measures for socioeconomic status (SES),” he said. “I’m worried,” I said. “Hospitals shouldn’t get credit for providing bad care to poor patients. Mr. Jones had a real readmission – and the hospital should own up to it.” Adjusting for SES, I worried, might create a lower standard of care for poor patients and thus the “soft bigotry of low expectations” that perpetuates disparities. But Alan made me wonder: would it really?

To adjust or not to adjust?

Because of Alan’s prompting, I re-examined my assumptions about adjustment for SES. As he walked me through the data, I concluded that the issue of adjustment was far more nuanced than I had appreciated.

Here’s the key: effective socioeconomic adjustment doesn’t reward providers for giving bad care to poor patients. It just ensures that they aren’t penalized for taking care of more of them. In my clinical example, if people like Mr. Jones had a higher readmission rate, adjusting for SES wouldn’t give hospitals credit for lower quality care to poor patients. Done right, it would give credit to hospitals for having more poor patients, and that’s an important difference. Consider three scenarios of hospital performance on readmission rates (modified from our JAMA piece).

In scenarios 1 and 2, let’s assume that patients are readmitted 20% of the time on average, whether or not they’re poor. In scenario 1, Hospital A (a safety-net hospital) has higher readmission rates for everyone. It may have more poor patients, but its readmission rate is high for both poor and non-poor patients. So, compared to Hospital B, it looks worse in unadjusted and adjusted scores. Adjustment doesn’t help.

In scenario 2, Hospital A has higher readmission rates for its poor patients and therefore has an overall readmission rate of 25%. Hospital B doesn’t suffer from readmitting its poor patients too often – hence its readmission rate is 20%. In this case, Hospital A (the safety-net hospital) looks worse than Hospital B in both unadjusted and adjusted analyses. Again, adjustment doesn’t help.

In scenario 3, Hospital A and B both struggle with readmissions for their poor patients – as does the rest of the country. The only thing that differentiates Hospital A from Hospital B is the proportion of poor patients in the hospital. In this case, adjustment makes a big difference. By adjusting, we account for the different proportions of poor patients between Hospital A and B. Adjustment ensures that organizations are judged by how well they care for their patients, not by how many poor patients they have.
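To make scenario 3 concrete, here is a minimal sketch in Python with made-up numbers (the 30%/20% stratum rates and the 80%/20% patient mixes are illustrative, not figures from the post). It uses indirect standardization, one common way to do this kind of adjustment:

```python
# Minimal sketch of SES adjustment via indirect standardization.
# All rates and mixes below are invented for illustration.
# Compare each hospital's observed readmission rate to the rate expected
# given national stratum rates and the hospital's own patient mix.

NATIONAL = {"poor": 0.30, "non_poor": 0.20}  # assumed national stratum rates

def observed_rate(frac_poor, rate_poor, rate_non_poor):
    """Unadjusted (crude) readmission rate for a hospital."""
    return frac_poor * rate_poor + (1 - frac_poor) * rate_non_poor

def oe_ratio(frac_poor, rate_poor, rate_non_poor, national=NATIONAL):
    """Observed-over-expected ratio: >1 means worse than expected for this mix."""
    observed = observed_rate(frac_poor, rate_poor, rate_non_poor)
    expected = frac_poor * national["poor"] + (1 - frac_poor) * national["non_poor"]
    return observed / expected

# Scenario 3: both hospitals match the national stratum rates; they differ
# only in patient mix (Hospital A is the safety-net hospital, 80% poor).
a_crude = observed_rate(0.80, 0.30, 0.20)  # 0.28
b_crude = observed_rate(0.20, 0.30, 0.20)  # 0.22
a_adj = oe_ratio(0.80, 0.30, 0.20)         # 1.0
b_adj = oe_ratio(0.20, 0.30, 0.20)         # 1.0
```

In scenario 3, the crude rates differ (28% vs. 22%) solely because of patient mix, while the observed-over-expected ratios are identical, so adjustment removes the penalty for caring for more poor patients. In scenarios 1 and 2, where Hospital A's stratum rates themselves are higher, its ratio would exceed 1, so genuinely worse care is not hidden.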

One Size Does Not Fit All

The debate about whether to adjust for socioeconomic status needs to be far more nuanced than it has been to date. Specifically, we must recognize that quality measurement has multiple purposes, and we need to think about each one when deciding whether to adjust or not. If the goal is transparency – letting patients know how they are likely to fare – then the best approach is stratified data. In scenario 3 (where adjustment makes a difference) a poor patient will do about as well at both hospitals – and unadjusted numbers are misleading, because they tell poor patients that Hospital B is better. If Hospital B has a larger co-pay or is out-of-network, you have done real harm by pushing a patient to a more expensive place that doesn’t provide better care.

To push hospitals to improve quality, unadjusted numbers are best. In all three scenarios, Hospital A should be more motivated to get better than Hospital B because for its patients, it tends to have worse performance. But in each scenario, the hospitals need stratified data. Without it they will have no idea where to target their efforts.

For penalties, we should use adjusted data. It will make no difference in scenarios 1 and 2. But, in scenario 3, it makes little sense to penalize the safety net hospital compared to other hospitals just for taking care of more poor patients. That’s not a smart policy. Penalties for bad care for poor patients? Sure. Penalties just for caring for more poor patients? Not so sure.

A way forward

The bottom line is that the care of poor patients is not evenly distributed across all U.S. hospitals. Some hospitals have a lot more patients like Mr. Jones than others have. And caring for people like him, who are homeless and without a social network, is challenging. None of us are very good at it. Why penalize the safety-net hospitals just for taking care of more poor patients?

Given the concern that safety-net hospitals may be disproportionately penalized, a bipartisan group of Senators (3 Democrats and 3 Republicans) has signed on to a bill that would require CMS to account for SES when it doles out penalties for the HRRP (Senate Bill 2501). It’s an excellent start.

Adjusting for SES is an acknowledgement that medicine is not the only factor – and indeed may be a relatively minor factor – in health outcomes. For Mr. Jones, homelessness and poverty clearly contributed to his readmission to the hospital. Bad medical care did not. We should have no qualms penalizing safety-net hospitals for providing sub-standard care. But we shouldn’t penalize them simply because they have more poor patients.


Adverse events – when bad things happen to patients because of what we as medical professionals do – are a leading cause of suffering and death in the U.S. and globally. Indeed, as I have written before, patient safety is a major issue in American healthcare, and one that has gotten far too little attention. Tens of thousands of Americans die needlessly because of preventable infections, medication errors, surgical mishaps, and so forth. As I wrote previously, according to the Office of Inspector General (OIG), when an older American walks into a hospital, he or she has about a 1 in 4 chance of suffering some sort of injury during the stay. Many of these injuries are debilitating, life-threatening, or even fatal. Things are not much better for younger Americans.

Given the magnitude of the problem, many of us have decried the surprising lack of attention and focus on this issue from policymakers. Well, things are changing – and while some of that change is good, some of it worries me. Congress, as part of the Affordable Care Act, required Centers for Medicare and Medicaid Services (CMS) to penalize hospitals that had high rates of “HACs” – Hospital Acquired Conditions. CMS has done the best it can, putting together a combination of infections (as identified through clinical surveillance and reported to the CDC) and other complications (as identified through the Patient Safety Indicators, or PSIs). PSIs are useful – they use algorithms to identify complications coded in the billing data that hospitals send to CMS. However, there are three potential problems with PSIs: hospitals vary in how hard they look for complications, they vary in how diligently they code complications, and finally, although PSIs are risk-adjusted, their risk-adjustment is not very good — and sicker patients generally have more complications.

So, HACs are imperfect – but the bottom line is, every metric is imperfect. Are HACs particularly imperfect? Are the problems with HACs worse than with other measures? I think we have some reason to be concerned.

HACs – Who Gets Penalized?

Our team was asked by Jordan Rau of Kaiser Health News to run the numbers. He sent along a database that listed CMS’s calculation of the HAC score for every hospital, and the worst 25% that were likely to get penalized. So, we ran some numbers, looking at characteristics of hospitals that do and do not get penalized:

These are bivariate relationships – that is, major teaching hospitals were 2.9 times more likely to be penalized than non-teaching hospitals, without simultaneously adjusting for the other characteristics. As a policy matter, it’s the unadjusted value that matters. But if you want to understand to what degree academic hospitals are being penalized because they also happen to be large, you need multivariate analyses – so we went ahead and ran a multivariable model (a logistic model including each of the above variables). Even in the multivariable model, the results are qualitatively similar, although not all the differences remain statistically significant.
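For readers curious what a bivariate relationship like "2.9 times more likely" means mechanically, here is a tiny sketch. The counts are invented purely to reproduce that ratio; they are not our actual data:

```python
def risk_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Bivariate risk ratio: the penalty rate in one group divided by the
    penalty rate in the comparison group, with no adjustment for anything else."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Hypothetical counts chosen only to illustrate a 2.9x bivariate relationship:
# 58 of 100 teaching hospitals penalized vs. 20 of 100 non-teaching hospitals.
rr = risk_ratio(58, 100, 20, 100)  # 0.58 / 0.20 = 2.9
```

A multivariable (logistic) model would instead estimate the teaching-hospital effect while holding size, region, ownership, and the other characteristics fixed, which is why the two sets of estimates can differ.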

What Does This Mean?

So how should we interpret these data? A simple way to think about it is this: who is getting penalized? Large, urban, public, teaching hospitals in the Northeast with lots of poor patients. Who is not getting penalized? Small, rural, for-profit hospitals in the South. Here are the data from the multivariable model: the chance that a large, urban, public, major teaching hospital with lots of poor patients (i.e., in the top quartile of the DSH Index) gets the HAC penalty? 62%. The chance that a small, rural, for-profit, non-teaching hospital in the South with very few poor patients gets the penalty? 9%.
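For readers curious about the mechanics, here is an illustrative sketch of this kind of multivariable logistic model. The data are entirely synthetic – this is not the actual KHN/CMS hospital file, and the coefficients are invented – but it shows how a fitted model turns a hospital’s profile of characteristics into a predicted penalty probability, the way the 62% and 9% figures were derived.

```python
# Sketch of a multivariable logistic model on SYNTHETIC hospital data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Five binary characteristics: large, urban, public, teaching, high-DSH
X = rng.integers(0, 2, size=(n, 5))
# Invented "true" model: each characteristic raises the odds of a penalty
logit = -2.3 + X @ np.array([0.8, 0.6, 0.5, 1.0, 0.7])
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated penalty outcomes

model = LogisticRegression().fit(X, y)

profiles = np.array([
    [1, 1, 1, 1, 1],  # large, urban, public, teaching, high-DSH
    [0, 0, 0, 0, 0],  # small, rural, private, non-teaching, low-DSH
])
p = model.predict_proba(profiles)[:, 1]  # predicted penalty probability
print(p)  # first profile's probability is far higher than the second's
```

The point is not the specific numbers, which depend on the invented coefficients, but that a logistic model lets you compare predicted probabilities for contrasting hospital profiles while holding the other characteristics fixed.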

Is that a problem? You could argue that these large, Northeastern teaching hospitals are terrible places to get care – while the hospitals really doing it well are the small, rural, for-profit hospitals in the South. Maybe. I suspect this is much more about the underlying patient population and vigilance than about actual safety. Beth Israel Deaconess Medical Center (BIDMC) in Boston is one of the very few hospitals in the country with exceptionally low mortality rates across all three publicly reported conditions, and a hospital I have written about as having great leadership and a laser focus on quality. And yet it is being penalized as one of the hospitals with, according to the HAC metric, a poor record on safety. So is Brigham and Women’s (though I’m affiliated there, so watch my bias) – a pioneer in patient safety whose chief quality and safety officer is David Bates, one of the nation’s foremost safety gurus. So are the Cleveland Clinic, Barnes-Jewish, RWJF Medical Center, LDS Hospital in Salt Lake City, and Indiana University Hospital, to name a few.

So what are we to do? Is this just whining that our metrics aren’t perfect? Don’t we have to do something to move the needle on patient safety? Absolutely. But we are missing a great opportunity to do something much more useful. Patient safety as a field has been stuck. It’s been 15 years since the IOM’s To Err is Human report came out – and by all accounts, progress has been painstakingly slow. So I am completely on board with the sentiment behind Congress’s intent and CMS’s efforts. We have to do something – but I think we should do something a little different.

If you look across the safety landscape, one thing becomes clear: when we have good measures, we make progress. We have made modest improvements in hospital-acquired infections – because of tremendous work by the CDC (and its clinically based National Healthcare Safety Network), which collects good data on patient safety and feeds it back to hospitals. We have also made some progress on surgical complications, partly because a group of hospitals has been willing to collect high-quality data and feed it back to their institutions. But the rest of the field of patient safety? Not so much. What we need are good measures. And, luckily, there is still a window of opportunity if we are willing to make patient safety a priority.

How to Move Forward

This gets us to the actual solution: harnessing the power of meaningful use in the Electronic Health Records incentive program. We need clinically based, high-quality patient safety metrics, and electronic health records can capture these far more effectively than billing codes can. The federal government is giving out billions of dollars to doctors and hospitals that “meaningfully use” certified EHRs. A couple of years ago, David Classen and I wrote a piece in NEJM outlining how the federal government, if it wanted to be serious about patient safety, could require that EHR systems measure, track, and feed back patient safety events as part of certification and the requirements for meaningful use. The technology is there. Lots of companies have developed adverse event monitoring tools. It just requires someone to decide that improving patient safety is important – and that clinically based metrics are useful.

So here we are – HACs. Well-intentioned – and a step forward, I think, in the effort to make healthcare better. Everyone I know thinks HACs have important limitations – but reasonable people disagree over whether those flaws make them unusable for financial incentives. The good news is that all of us can agree we can do much better. And now is the time to do it.

Last year, about 43 million people around the globe were injured by the hospital care that was intended to help them; as a result, many died and millions suffered long-term disability. These seem like dramatic numbers – could they possibly be true?

If anything, they are almost surely an underestimate. These findings come from a paper we published last year, funded by and done in collaboration with the World Health Organization. We focused on a select group of “adverse events” and used conservative assumptions to model not only how often they occur, but also with what consequence to patients around the world.

Our WHO-funded study doesn’t stand alone; others have estimated that harm from unsafe medical care is far greater than previously thought. A paper published last year in the Journal of Patient Safety estimated that medical errors might be the third leading cause of deaths among Americans, after heart disease and cancer. While I find that number hard to believe, what is undoubtedly true is this: adverse events – injuries that happen due to medical care – are a major cause of morbidity and mortality, and these problems are global. In every country where people have looked (U.S., Canada, Australia, England, nations of the Middle East, Latin America, etc.), the story is the same. Patient safety is a big problem – a major source of suffering, disability, and death for the world’s population.

The problem of inadequate health care, its global nature, and the common set of causes that underlie it motivated us to put together PH555X. It’s a HarvardX online MOOC (Massive Open Online Course) with a simple focus: health care quality and safety from a global perspective. I believe that this will be a great course—not because I’m teaching it, but because we have assembled a team of terrific experts. But, let me be clear: putting this MOOC together is unlike any educational experience I have ever had before.

First, you get to assemble the faculty – and here, I had almost no constraints. Want to learn about quality measurement? We have Jishnu Das (World Bank economist whose ground-breaking work includes sending trained, fake patients into doctors’ offices in Delhi) and Niek Klazinga (a Dutch physician who led the creation of the Health Care Quality Indicators for the OECD). These two guys have thought more deeply and broadly about quality measurement than almost anyone else in the world. What about the role of leadership? We have Agnes Binagwaho (Minister of Health, Rwanda) and Julio Frenk (former Minister of Health, Mexico and current dean of the Harvard School of Public Health) speaking about what leadership in quality looks like from a health minister’s perspective. We have T.S. Ravikumar, the CEO of a massive public hospital system in Pondicherry, India talking about how his decision to prioritize quality transformed his institution.

Sometimes, when you want the best people in the world, you don’t even have to go very far. On patient safety, we only had to cross the street for David Bates, Chief Quality Officer at Brigham and Women’s Hospital and patient safety maven. When we wanted to learn about the empirical basis for the role of management in improving quality, we went across town to Harvard Business School to spend time with Raffaella Sadun. And when we wanted to learn about quality improvement, we only had to cross the Charles River to find Maureen Bisognano, CEO of IHI.

Beyond getting to assemble an excellent, world-class faculty, the MOOC is a completely different approach to education. Because this course has never been offered before, we had the freedom to write a fresh syllabus specifically for online learners. This is not a live course copied onto a web platform. These are not hour-long lectures videotaped from the back of a classroom. Our lectures are short, pithy conversations on pressing topics. Instead of asking Professor Ronen Rozenblum, an Israeli expert on patient experience, to lecture about how and why we might measure patient-reported outcomes, we are having a meaningful discussion – back and forth, where I get to challenge his assumptions and let him articulate why patient experience should be considered an integral part of quality and, more importantly, why he cares.

Beyond the discussions, we have interactive sessions where students create content. One of my favorites? Through this course we will crowdsource the first global “atlas” of healthcare quality. Let’s be honest: it’s one thing for me to point to individual studies on hospital infections in Canada or India, but right now, we have no place to turn if we want to really understand key issues in healthcare quality around the globe and how they compare to one another. The goal of this exercise is as simple as it is ambitious. By the end of the course, we will draft a resource that maps out where the world is on the journey toward safe, effective, patient-centered healthcare systems. It will be created by the collective energy and creativity of the people in the course – a range of students, providers, policy folks, and people simply passionate about improving the delivery of healthcare. It will be a public good for us all to use and improve.

Finally, we have a few enticements to keep everyone engaged. The attrition rate in these courses tends to be high, so we have a few carrots. First, halfway through the course we will have a series of live discussions in which expert faculty will help students solve pressing quality and safety problems in their own institutions. Have a problem with high infection rates in your ICU? We will get an expert on nosocomial infections to help you think it through and figure out how to begin to solve it. Wondering how to keep your family members safe during their hospital visit? We will have healthcare consumer experts help you navigate those waters. Finally, at the end of the course, students will have the opportunity to submit a 1,200-word thought piece on the importance of improving quality and safety in their own context, whether as a clinician, patient, or health policy expert. The top three pieces will be published in BMJ Quality & Safety, arguably the most influential global quality and safety journal.

This is a grand experiment in a new way of teaching, engaging, and creating information on the quality and safety of healthcare. I’m sure there are parts that won’t work, but we will learn along the way. I’m also sure that the pressing issues facing the US – healthcare that is not nearly as safe, effective, or patient-centered as it should be – are similar to issues facing not just other high-income countries, but also low- and middle-income countries. Thinking globally about these issues, and their adaptable solutions, can help us all deliver better care.

Quality needs to be on the global health agenda. Don’t believe me? Take the course.