The message comes in over the office Slack channel at 1:05 pm. There are four patients in rooms, one of them new, and three more in the waiting room. Really, not an ideal time to deal with this particular message.

“Kathy the home care nurse for Mrs. C called and said her weight yesterday was 185, today it is 194, she has +4 pitting edema, heart rate 120, BP 140/70 standing, 120/64 sitting”

I know Mrs. C well. She has severe COPD from smoking for 45 of the last 55 years. Every breath looks like an effort because it is. The worst part of it all is that Mrs. C returned home from the hospital just days ago.

The youngest of six children, Mrs. C was born with many embedded disadvantages. Being born black in a poor West Philadelphia neighborhood in the 1960s is a story that too often writes itself with a bad ending. But Mrs. C avoided the usual pitfalls that derail young women in the neighborhood early. No drugs. No alcohol. No teenage pregnancies. Finished high school. Mrs. C worked for the hospital as a unit clerk, had her own place, health benefits, and even a retirement plan.

Certain life habits, however, carry a heavy price. George Burns, the comedian never pictured without a cigar who died past his hundredth birthday, may have been immune to the effects of tobacco. Mrs. C was not. She started smoking when she was 16. She doesn’t recall why. Perhaps her dad’s smoking didn’t help. Nausea racked her body after that first drag. It eased up after that. Too bad.

That measly cigarette became the great addiction of her life. Day by day, the exquisitely thin membranes of the lungs that mediate gas exchange were destroyed. By the time the disease manifests with shortness of breath and bluish-tinged lips, it’s too late. Short of the very few who qualify for a lung transplant, the efforts of doctors at this point are for mitigation rather than cure.

Complicating things further, in Mrs. C’s case, the normally low-pressure vascular circuit of her lungs became a high-pressure circuit that places ever-increasing demands on the normally thin-muscled right ventricle of the heart. This jeopardizes the ability of her heart to handle changes in blood volume.

A little extra fluid and the right side of the heart ends up causing unbearable swelling in her legs. A little dehydration and severe, disabling dizziness on standing ensues. Add to that that her tenuous lung function decompensates with the slightest respiratory infection, that chronic steroid treatment to decrease her wheezing suppresses her immune system, and that the young man down the street helpfully drops off Newports at her home for a few extra dollars, and it’s easy to see why the hospital is her second home.

The most recent admission to the hospital was for kidney failure related to taking too much fluid off with diuretics. What was to be a short stay for gentle hydration turned into a longer stay when a pneumonia complicated matters (though a trip to the intensive care unit and a ventilator was barely but fortunately avoided). She was treated by the pulmonology team and sent home on a lower dose of diuretics.

The situation I am now confronting puts me in a quandary. Her edema and weight are up markedly just a few days after returning home. Could her fluid overload be because her kidneys are shutting down? Or does she just need more aggressive diuresis?

Should I guess? Knowing her present renal function would be helpful. But even if the Theranos lab I could appeal to for help weren’t fictional, I would have to get her to my office every day or every other day while adjusting her diuretic dose.

And so it comes to be that, days removed from a hospital admission, I’m sending her back to the hospital to be readmitted. According to some, this is not supposed to happen.

A policy on readmission

In 2008, the commission that advises Medicare, the Medicare Payment Advisory Commission (MEDPAC), issued a report that focused on hospital readmissions.

Hospital readmissions had been of great interest to the health policy community for some time. At the core of this interest lies the belief that hospitals and physicians are incentivized to treat patients rather than to prevent admissions.

The MEDPAC report sought to discourage readmissions like Mrs. C’s. And so it was no great surprise that rolled into the 2010 Affordable Care Act was a section called the Hospital Readmission Reduction Program (HRRP), which created a system for Medicare to penalize hospitals with ‘high’ readmission rates. The program was rolled out in 2013.

At first, the program seemed to work like a charm. Hospitals significantly ramped up their efforts at care coordination. Teams of nurses and aides were assembled to make sure patients would get their medications as prescribed upon discharge and to check on patients once they got home.

Hospital readmission rates suddenly dropped, and Medicare started saving money: a staggering 81% of all hospitals incurred penalties in 2018, which translated to roughly $500 million, or 0.3% of total Medicare payments to hospitals.

A complex analysis

But there’s more to this too-good-to-be-true story.

The HRRP penalty schemes are risk-adjusted based on administrative claims data. Risk-adjustment is a statistical procedure that accounts for differences in the complexity and severity of disease among patients so that fair comparisons can be made.

Physicians know that risk-adjusted claims data are of dubious value because they themselves are often the reluctant data entry clerks in the byzantine scheme that starts with adding diagnostic items to the medical chart and ends with generating a coded billing claim for Medicare. Needless to say, there’s a huge potential disconnect between what a claim attempts to convey and the actual condition of a given patient.

Yet another major problem is that the risk-adjustment employed by the HRRP does not take socioeconomic status into account, when that is arguably the single biggest driver of poor outcomes and of hospital readmissions. The creators of the HRRP seem to believe that a hospital located in a poor area shouldn’t get a break for having high readmission rates, perhaps because they believe that hospital systems in general should be mindful of health inequities and address “care gaps” (differences in care provided to poor vs. affluent patients) in their neighborhoods no matter what.

Finally, the initial benchmark against which the HRRP would adjudicate the need for a penalty was a national average readmission rate. In such a scheme, a Johns Hopkins Hospital serving inner-city Baltimore could be pitted against a regional hospital in rural Montana with entirely different patient demographics. This made the regional hospital in Montana very happy.

Gaming the metrics

Regardless of these technical considerations, it is an adage of social science that any metric will be gamed, and healthcare is unfortunately not immune to that law.

One tool increasingly used by hospitals to comply with Medicare payments rules is to admit patients to short stay units, under so-called “observation status.” Another is to put pressure on emergency departments to avoid readmitting certain types of patients.

So, instead of primarily functioning as a triage operation where sick patients would be turned over to the care of the cardiologist in the hospital, the ER has been increasingly housing and managing heart failure patients to save the hospital money.

But the ER physician or the hospitalist supervising the short-stay unit, who has just met a patient in the setting of an acute illness, is poorly equipped to know which heart failure patients to discharge after a dose of diuretics and which to keep for advanced heart failure therapies.

Nowadays, the cardiologist is increasingly insulated from those decisions. I have personally experienced with alarming frequency instances where I learn only after the fact that a complex patient of mine has been treated for heart failure in the ED.

And my experience seems to be shared by many of my cardiology colleagues, especially among cardiologists who work in academic centers that are most affected by the policy. Luckily, some of them are also clinician-scientists who can do more than just whine to colleagues about the new policy. They can also study its outcomes.

What do the outcomes data show?

In a pivotal study, a group of cardiologists (Gupta et al.) found that the drop in readmissions that followed the introduction of the HRRP was unfortunately accompanied by a reversal in the decade-long downward trend in heart failure mortality. This reversal suggested a serious potential harm from the policy.

But the possibility of harm was quickly challenged by another group of researchers led by one of the biggest names in health policy: Harlan Krumholz, a cardiologist who directs the influential Center for Outcomes Research and Evaluation at Yale University.

Krumholz et al. noted that mortality rates for heart failure started climbing before the HRRP was announced, and they found no inflection point in mortality rates at the policy’s announcement in 2010. The evidence for their claim is highlighted in the table below:

As can be seen in the boxed row, Krumholz’s team concludes that the increasing mortality slope post-HRRP is no different from the pre-HRRP slope because the change did not reach statistical significance at the obligatory and arbitrary P < 0.05 level. The actual P-value was 0.11, and the confidence interval for the positive increase in mortality slope of 0.006 was (−0.002 to 0.015). Even poor students of epistemology would be loath to conclude that this result excludes a signal of harm. It seems entirely plausible that, with all the limitations of the data set in question, mortality may in fact have accelerated after the institution of the HRRP. Yet Krumholz insists that no signal of harm is to be considered.
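The epistemic point is easy to make concrete with a back-of-the-envelope calculation using only the figures quoted above and a normal approximation. This is my own reconstruction, not the authors’ method; the rounded interval bounds yield a P-value closer to 0.17 than the reported 0.11, but the logic is unchanged: “not statistically significant” is nowhere near the same thing as “no effect.”

```python
import math

# Figures quoted from the analysis above
slope = 0.006                       # estimated post-HRRP change in mortality slope
ci_low, ci_high = -0.002, 0.015     # reported 95% confidence interval

# Back out the standard error: a 95% CI spans +/- 1.96 standard
# errors around the point estimate under a normal approximation.
se = (ci_high - ci_low) / (2 * 1.96)

# Two-sided P-value for the null hypothesis of "no change in slope",
# using the normal CDF expressed via math.erf.
z = slope / se
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"SE ~ {se:.4f}, z ~ {z:.2f}, p ~ {p:.2f}")
# p lands well above 0.05, so the result is "not significant" --
# yet the same interval is consistent with a real slope increase
# as large as 0.015, more than double the point estimate.
```

In other words, the data fail to rule out “no harm,” but they equally fail to rule out substantial harm.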

But this did not stop another group of cardiologists (Wadhera et al.) from adding their contribution to the HRRP literature. Using the same data-set that the Krumholz group used — Medicare claims data — these researchers found once again that accelerating mortality coincided with the announcement of the HRRP. More troubling, they also demonstrated that mortality rose primarily among patients not readmitted to the hospital.

A messy science

Admittedly this whole business of analysis is incredibly messy, with a number of moving parts.

My brief summary doesn’t do justice to a variety of maneuvers taken by the various groups to account for many of the limitations inherent in this type of study. Two of the competing analyses (Krumholz, Wadhera) used Medicare claims data while the other (Gupta) used a more limited voluntary registry.

During the time period in question, there were also other policy changes such as the introduction of new hospital billing codes (MS-DRG) that sought to adjust hospital payment rates to patient complexity. Better patient coding meant higher reimbursement from Medicare. Armies of “documenters” were then employed by hospitals to capture more revenue.

This means that the claims data gathered by the researchers might look significantly different from one time period to the next even if the patients themselves were ostensibly the same. As the readmission rate is risk-adjusted, it is eminently plausible and likely that systematically upcoding patient risk could actually have been the primary driver of the drop in hospital readmission rate.
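The mechanism is simple enough to sketch. The toy model below is my own illustration (the function, risk scores, and patient counts are hypothetical; CMS’s actual risk model is far more elaborate), but it shows how more aggressive coding alone can “improve” a risk-adjusted readmission measure:

```python
# Toy model: a hospital's risk-adjusted readmission performance is
# roughly (observed readmissions) / (expected readmissions), where
# "expected" is predicted from the coded risk of each patient.

def excess_readmission_ratio(observed, coded_risks):
    """Observed readmissions divided by the readmissions expected
    from the coded per-patient risk scores. Ratios above 1 look bad
    to the payer; ratios below 1 look good."""
    expected = sum(coded_risks)
    return observed / expected

# Same 100 patients, same 20 actual readmissions in both periods.
observed = 20

# Before intensive documentation: each patient coded at 18% risk.
before = excess_readmission_ratio(observed, [0.18] * 100)

# After armies of "documenters": the identical patients now carry
# more diagnosis codes and are scored at 25% risk.
after = excess_readmission_ratio(observed, [0.25] * 100)

print(f"ratio before upcoding: {before:.2f}")  # 20/18 ~ 1.11
print(f"ratio after upcoding:  {after:.2f}")   # 20/25 = 0.80
# The hospital's risk-adjusted performance "improves" even though
# not a single patient was treated differently.
```

The same arithmetic cuts the other way for the mortality comparisons: if patients in 2016 merely *look* sicker on paper than patients in 2010, risk-adjusted mortality trends become uninterpretable.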

The other program playing a confounding role is the Recovery Audit Contractor (RAC) program, begun in 2010 to reduce payments for inappropriate hospital admissions. Hospitals responded to denials of inpatient admissions by expanding ‘observation status’ stays. Which was the bigger driver of expanded observation stays, RAC or the HRRP? Once again, an exact attribution is impossible.

Denying the obvious?

Despite the messiness of the data and the variety of analytic methods used, a consistent and uncontested observation remains: Heart failure mortality has increased in the last decade. The question being hotly contested is Why?

Oddly, Dr. Krumholz is steadfast in denying the possibility that the policy may have caused harm, even though the independent and contradictory conclusions from the other research groups at least raise reasonable doubt. And Dr. Krumholz has been quick to cast doubt on research that does not conform to his conclusions.

By tweet he appears to ask for a level of detail his own papers lack, and he questions the legitimacy of another group’s data-set, all the while resisting any calls to put the program on hold despite the paucity of evidence showing benefit, the signal for harm, and perhaps most importantly, the concern of clinicians who see a mechanism for harm.

Greatly admire @rwyeh and his group…appreciate his focus on readmission &public policy. For such high-profile article, really need more info about statistical weighting. Methods should be sufficient so others can reproduce results. Can’t do that here. Look forward to more info.

Can you account for why the registry you used had, on average, such a small number of patients per site. Did you determine how many patients coded with heart failure by CMS were in the Registry? Just curious about the selection. It may not explain your results…but is a question. https://t.co/WSvJAN8Jjd

Dr. Krumholz also places much weight on an independent analysis carried out by MEDPAC which concluded there was no link between policy and the uptick in mortality. This particular conclusion rests heavily on the assertion that heart failure patients in 2016 were much sicker than patients in 2010. Recall that this coincides with a period of more intensive coding over the same time frame, so it is impossible to say this with any confidence.

The MEDPAC conclusion also relies on an analysis that finds no correlation between hospital level readmission and mortality rates.

While technically true, that conclusion overlooks that a large number of hospitals exhibited reduced readmission rates and increased mortality. Perhaps MEDPAC feels that patients dying at low-readmission/high-mortality hospitals should be mollified by the knowledge that somewhere there’s a low-readmission/low-mortality hospital to balance things out?

That Krumholz and MEDPAC display such certainty about the direction of the signal they observe, and take pains to discount other possibilities seems strange and suggests that pre-existing biases may be at work. What might those biases be?

Conflicts of interest: You get what you pay for

In a world where heads roll for undisclosed personal financial conflicts of interest, it is remarkable that the current dispute, while full of scintillating exchanges about “propensity weighting” and other arcane points of statistics, does not reference any other potential conflicts at work that might affect the conclusions being reached.

Medicare’s decision to start the HRRP program didn’t come in a vacuum. It was inspired by years of research from Dr. Krumholz himself, who suggested that preventing admissions should be a goal for any policy that would aim to move the system from one paying for “volume” to one paying for “value.”

“Hospitals may not support programs that improve the quality of care delivered to heart failure patients because these programs lower readmission rates and empty beds, and therefore further diminish already-declining revenues.”

If Krumholz’s unfavorable and crudely simplistic view of the operations and motivations of hospitals (and of the still relatively independent physicians staffing those hospitals) informs his position on health policy, it stands to reason that serious blinders would prevent him from seeing any evidence of harm in a particular policy that promotes the same view.

But that’s not all. Krumholz’s group at Yale received grants from CMS under the auspices of the Measure and Instrument Development and Support (MIDS) program to study and produce the metrics and instruments needed to devise the readmission measures.

The MIDS program supports the “development and use of clinical quality measures which remains a critical healthcare priority and the tool of choice for improving quality of care at the national, community and facility levels” and it allocates $1.6 billion to this purpose.

Thanks to a bipartisan act of Congress, a helpful little website, usaspending.gov, provides contract-level detail about payments made to Krumholz’s group from the MIDS program. Those payments can be seen in the table below:

The numbers are staggering. I know little about how to interpret these data about federal contracts, but it sure appears that the Yale-New Haven Health Service group led by Krumholz has received $144 million since 2008.

Yet the only clue to these payments in Krumholz’s published analyses of the HRRP comes in one disclosure sentence in a footnote, as seen here:

It seems to me that the disclosure is hardly proportional to the amount of funding that his group receives and understates the inherent pressures it must be under to demonstrate that the policy did not actually result in higher mortality.

And recall that MEDPAC’s “independent” analysis that also rejected a policy-mortality link came from the organization that recommended the policy to begin with. The bottom line is this: There’s a tremendous amount of face to lose and a massive source of institutional funding at risk if the policy is found to be harmful.

It now becomes more clear why, in the following tweets, Dr. Krumholz feels that only he can say anything definitive about readmission rates and mortality:

Um, @JAMA_current, even the authors say they cannot say their findings are causal… “but whether this finding is a result of the policy requires further research.” Why do you promote the paper as proving harm? Need to treat twitter like you do any of your Editorial comments. pic.twitter.com/xugasHvPWY

Biases are ubiquitous. When I was a cardiologist-in-training, spending hours on the hospital consult service for a fixed salary, I vividly recall looking for ways to avoid doing any work I considered unimportant or banal: the minor cardiac enzyme leak in a patient with a widespread infection; the extra heartbeats on the ECG that the ER physician didn’t like the look of; and so on. “Are you sure you need an official consult?” “The chances we’re going to recommend doing anything about a small enzyme leak in an 80-year-old with a severe lung infection are very low…” I was even successful sometimes.

Contrast these comments with my demeanor in private practice, where I am acutely aware that my income relies on such consults: “I just need the patient’s name or room number…” “I’ll take care of it!…” “I can put in the orders if you’d like!”

But there are other biases and incentives that motivate human beings, apart from personal financial incentives. Do they pale relative to the financial ones as is so often claimed? How does one begin to quantify them?

When it comes to the HRRP policy, no individual person’s bank account ballooned every time a patient didn’t get admitted. And yet this is a story of ideological bias that drove the design of policy and now claims ‘success’ for its own program. The HRRP saga is illustrative of the importance of non-financial bias and of the dangers of blinding ourselves to that bias.

The story also highlights the downsides of tweaking healthcare systems that were built to deliver more care.

Clearly, I personally have a direct financial conflict of interest to provide more care. Since I haven’t talked anyone out of a consult in 8 years, I’m probably guilty of participating in a system that detractors appropriately criticize for promoting overuse of healthcare.

But the problem is that some of those consults I was trying to avoid as a fellow ended up really needing a cardiologist. There was the 55-year-old Cambodian woman admitted to the medical intensive care unit with pneumonia who went into atrial fibrillation. I recall rolling my eyes and thinking that the ICU could certainly handle this without a cardiologist. It turned out she didn’t have pneumonia. It was pulmonary edema from heart failure related to undiagnosed rheumatic mitral valve disease. She had been in the wrong unit. She needed diuresis, heart rate control, and eventual surgery to replace her valve, not antibiotics. Less isn’t always more.

Attempt to reduce inappropriate hospital admissions? Get ready to pay a price. To contradict Dr. Krumholz, it is entirely probable that we are underestimating the upside of our current system when we contemplate changing the status quo.

We are underestimating the downside of our current system when we contemplate change. We need to take some risks to do better. #abimf2013

Empiricism in social policy is a subjective enterprise. The often parroted conclusion is that cold, hard, unbiased evidence trumps the biased, unmeasurable judgment of clinicians. Yet, frequently, real world data-sets are complex, the choice of analytic paths can be highly variable, and the instruments to measure success are often imperfect. As it relates to HRRP, which analysis should we trust? The choice requires faith. And if the currency here is faith, perhaps the concern of clinicians at the bedside has more value than advertised.

Metrics won’t save us. The narrative of metrics is an appealing one that promises hard and objective accountability. The problem comes when the metric (readmission) becomes disconnected from outcomes that actually matter (death). False and blind prophets are good descriptors for those who claim to be unable to see without metrics. The fools in this enterprise are easy to identify as those who think the answers lie with ever better metrics.

Conflicts of interest: Going beyond the simple narrative. Focusing on biases induced by personal financial interests is a mistake. Personal enrichment is just one bias in a sea of conflicts. In the healthcare context, financial disclosures—while clarifying in themselves—may simply give cover to other, more perverse biases, unless those other biases are equally disclosed. It requires diligence to ascertain the impact, and direction of bias. Rarely do we get the opportunity to observe the direction of bias in policy research. In the case of the HRRP, the presence of bias was made evident because research groups with opposing biases (clinician-scientists versus policy wonks) have reached conclusions that would be expected on the basis of those pre-existing biases. How often is the problem of such bias examined in the design, implementation, and analysis of health policy?

Beware of technocrats with all the answers. I am reasonably sure that if practicing clinicians had been asked to devise a rule to reduce heart failure readmissions for the whole population, they would have refused. It seems too challenging a task to get right. Even if clinicians could be induced to participate in the design of such a policy, most would readily acknowledge the likelihood that it could harm some patients. It requires a special type of hubris to design a policy and refuse to acknowledge its potential for harm. Unfortunately, hubris within a public health community that believes only it can give us a better health system is more feature than bug.

As for Mrs. C, she has been home for 16 days. All fingers on both hands are currently crossed.

Anish Koka is a cardiologist in private practice in Philadelphia. He can be followed on Twitter @anish_koka. This post originally appeared here on The Accad & Koka Report.