The Apothecary, a blog about health care and entitlement reform, is edited by Avik Roy, a Senior Fellow at the Manhattan Institute for Policy Research and a former health-care policy adviser to Mitt Romney. Avik also writes a weekly column on politics and policy for National Review.
The other contributors to The Apothecary are: Josh Archambault, Director of Health Care Policy at the Pioneer Institute in Boston; Robert Book of the American Action Forum; Chris Conover, Research Scholar in the Center for Health Policy and Inequalities Research at Duke University and an Adjunct Scholar at the American Enterprise Institute; Nicole Fisher of the University of North Carolina; John R. Graham of the Advanced Medical Technology Association; and Jeet Guram of Harvard Medical School.

Will 17,104 Americans Really Die In States That Don't Expand Medicaid?

Single-payer proponents Steffie Woolhandler and David Himmelstein (both physicians) are at it again. Their latest study purports to demonstrate that failure to expand Medicaid in the 25 states that have declined to do so will result in as many as 17,104 avoidable deaths each year among low-income Americans who remain uninsured. That’s a pretty eye-popping figure (equivalent to a 50% increase in U.S. automobile-related deaths). If it’s true, it might lead some governors who have resisted Medicaid expansion to reconsider their position. Indeed, progressives in my state are already sounding the alarm. In a post titled “New Harvard/CUNY Study: Thousands will die in states that don’t expand Medicaid (like NC)” Adam Searing declares at The Progressive Pulse:

If there ever was a reason for NC Governor McCrory and legislators to change their short-sighted decision blocking Medicaid expansion, this is it.

But is it true? The simple answer is no: here’s why.

This latest study (which I will label the Harvard/CUNY study to avoid mixing it up with other studies involving Harvard researchers) calculates two different estimates of the adverse mortality consequences of failure to expand Medicaid: a lower bound figure of 7,115 predicted deaths and an upper bound figure of 17,104 deaths.

Misleading Extrapolations from a Medicaid Expansion Study

Their upper bound figure is based on a study by a team led by Ben Sommers, a highly regarded physician and health economist at the Harvard School of Public Health. He is a talented researcher, but that does not mean that the study he did–which fellow blogger Avik Roy nicely critiqued when it came out nearly 18 months ago–is without flaws. Indeed, there are three big problems with the study.

Medicaid Actually Produces Mixed Results. First, the Sommers study examined 3 states, but the Harvard/CUNY analysis is based only on the aggregate finding of a weighted average mortality decrease of 19.6 per 100,000 adults age 18-64 in states that had substantially expanded Medicaid eligibility for adults since the year 2000. What the Harvard/CUNY researchers failed to report is that the Sommers study showed that the only statistically significant mortality decrease occurred in New York (by 22.2/100,000). In contrast, an apparent increase in mortality in Maine (by 13.4/100,000) and an apparent mortality decline in Arizona (by 10.2/100,000) were not statistically significant.
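To see how a favorable aggregate figure can coexist with mixed per-state results, consider a population-weighted average. The per-state mortality changes below are the ones quoted above from the Sommers study; the population weights are rough, illustrative figures of my own (not the study's actual county-year weights), used only to show that the largest state dominates the average:

```python
# Sketch: a population-weighted average dominated by one large state.
changes = {              # change in deaths per 100,000 non-elderly adults
    "New York": -22.2,   # the only statistically significant result
    "Maine":    +13.4,   # not statistically significant
    "Arizona":  -10.2,   # not statistically significant
}
weights = {              # illustrative adult populations in millions (assumed)
    "New York": 12.0,
    "Maine":     0.8,
    "Arizona":   3.5,
}

total = sum(weights.values())
weighted_avg = sum(changes[s] * weights[s] for s in changes) / total
print(round(weighted_avg, 1))  # → -17.9, close to New York's own figure
```

Even with one state showing an apparent mortality *increase*, the weighted average lands near New York's value simply because New York's population dwarfs the other two.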

It should be obvious even to a non-researcher that if Medicaid truly reduced mortality risk, we would not expect to see it having a demonstrable beneficial effect on mortality in only 1 out of 3 states studied. Thus, the aggregate result of a significant reduction in mortality is driven largely by New York. Indeed, had NY not been in the study and a state like ME or AZ substituted, the authors would have had to report that Medicaid had no significant effect on mortality.

In terms of generalizability, the evidence is that more states are like Maine and Arizona than New York in terms of generosity of eligibility, benefits, etc. Indeed, Public Citizen ranks New York’s Medicaid program 8th in the nation, compared to Maine’s ranking of #13 and Arizona’s ranking of #24. Thus, most states would be more likely to exhibit the pattern seen in Maine and Arizona–i.e., no statistically significant reduction in mortality–than to attain outcomes as good as New York’s. Consequently, the Harvard/CUNY team’s extrapolation to the entire nation is quite inappropriate, especially given that the Sommers study explicitly cautions “the results are largely driven by the largest [state] (New York), so our results may not be generalizable to other states.” In short, the researchers did precisely what Sommers et al. cautioned them not to do!

Sommers Study Measures Aggregate Mortality Risk, Not Actual Mortality Risk Among Medicaid Recipients. Unlike the Oregon Health Insurance Experiment–which actually found no statistically significant decline in mortality among those newly enrolled in Medicaid[1]–the Sommers study didn’t directly measure mortality risk among those with Medicaid. It compared changes in all-cause county-level mortality for adults 20-64 in 3 states that expanded Medicaid to states presumed to be good comparison states (that is, they calculated an average annual county-level mortality rate for 5 years before and 5 years after Medicaid expansion, taking into account county characteristics such as percent female, age, race, urban/rural status, and socioeconomic characteristics).

To be clear, the Sommers team did as sophisticated a job as possible with the data they had, but they could not control for everything. If, for example, the Medicaid expansion states experienced a relative reduction in deaths due to automobile accidents (e.g., due to more aggressive enforcement of drunk-driving laws), or a reduction in deaths among the vast majority of adults who are not on Medicaid, all of this would have been chalked up to Medicaid expansion even though Medicaid obviously would have had nothing to do with such mortality reductions. It’s worth noting that roughly one-quarter of the estimated mortality reduction was due to “external causes” (injuries, suicide, homicide, complications of medical treatment, and substance abuse). Clearly, it’s not impossible for Medicaid to have affected these: the Oregon Health Insurance Experiment suggests Medicaid may have a beneficial effect on depression, which could affect suicide rates, for example. And another study of automobile accidents showed that mortality rates were 39% higher among uninsured adults than among those with private insurance, principally because the privately insured received more intensive care and longer hospital stays. But keeping in mind that what was being measured was mortality in the entire non-elderly adult population, it’s a little counterintuitive to expect that a Medicaid expansion affecting only 14% of that population (inferred from Table 4) would produce a 7.6% reduction in such external causes of death. That is, if such deaths were divided approximately proportionately between those newly enrolled in Medicaid and all other adults, the 7.6% reduction would imply more than a 50% reduction in such deaths among those gaining Medicaid coverage! That seems quite implausible to me, yet that’s what the Sommers team found.
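The implied within-group reduction is simple arithmetic to check. The 14% and 7.6% figures are the ones discussed above; the baseline death count is arbitrary, since it cancels out of the ratio:

```python
# Back-of-the-envelope check of the external-cause mortality claim.
# Assumptions: external-cause deaths start out spread proportionately
# across the adult population, and the entire population-wide reduction
# is attributed to the 14% who gained Medicaid.
medicaid_share = 0.14        # share of non-elderly adults gaining Medicaid
overall_reduction = 0.076    # reported reduction in external-cause deaths

baseline_deaths = 1000.0                              # arbitrary baseline
deaths_in_group = baseline_deaths * medicaid_share    # 140 deaths
deaths_averted = baseline_deaths * overall_reduction  # 76 deaths averted
within_group_reduction = deaths_averted / deaths_in_group
print(round(within_group_reduction, 2))  # → 0.54, i.e. a 54% reduction
```

In other words, attributing the whole population-wide 7.6% drop to the 14% who gained coverage implies external-cause deaths among that group fell by more than half.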

The New York Mortality Reduction May Be a Statistical Artifact. Avik Roy has provided the most detailed and compelling explanation I’ve seen of why the particular state selected for comparison with New York (Pennsylvania) might not have been a good one. As Roy explains, this is an extremely flawed comparison in light of the substantial differences between New York and Pennsylvania in terms of poverty rates (14.1 percent vs. 11.5 percent) and presence of ethnic or racial minorities (38 percent vs. 16 percent). Both factors have well-established connections to mortality risk. The way the Sommers team calculated the purported mortality reduction was by observing the change in mortality rates for non-elderly adults in NY and PA, subtracting what happened in PA (on the grounds that any observed changes in mortality there would also have happened in NY even without Medicaid expansion), and attributing the difference to the Medicaid expansion. But that kind of comparison is valid only if we have good reason to believe NY and PA have comparable populations experiencing comparable trends in mortality, an assumption Avik’s analysis calls into question. The point is, had a different, more appropriate comparison state been selected, the estimated beneficial effect of Medicaid expansion might well have disappeared entirely.
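The subtraction described above is a standard difference-in-differences calculation. A minimal sketch with invented deaths-per-100,000 rates (these are not the study's actual numbers; only the structure of the calculation mirrors the Sommers approach) shows how the estimate hinges entirely on the comparison state behaving the way the expansion state would have:

```python
# Difference-in-differences sketch with hypothetical mortality rates.
ny_before, ny_after = 320.0, 300.0   # expansion state (treatment), assumed
pa_before, pa_after = 310.0, 302.0   # comparison state (control), assumed

ny_change = ny_after - ny_before      # -20 per 100,000
pa_change = pa_after - pa_before      # -8 per 100,000
did_estimate = ny_change - pa_change  # difference attributed to expansion
print(did_estimate)  # → -12.0

# If the control state had instead tracked the treatment state's underlying
# trend (say PA's rate had also fallen by 20), the estimated effect vanishes:
alt_pa_change = -20.0
print(ny_change - alt_pa_change)  # → 0.0
```

Pick a comparison state whose mortality trend differs from what the expansion state would have experienced anyway, and the "Medicaid effect" can appear or disappear with no change in the expansion state's own data.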

On a related point, the authors don’t report results for individual states, so we don’t know what happened to mortality in PA vs. NY. But in the aggregate (Figure 1), about half of the reported mortality benefit from Medicaid expansion appears to come from the fact that mortality rose in the control states (that is, the estimated mortality gain would have been approximately half as much had mortality rates remained stable in the control states). Here’s where it gets very interesting. This increase in mortality in the non-expansion states occurred despite the fact that the fraction of non-elderly adults on Medicaid grew in those very same states. Admittedly, this Medicaid-enrolled fraction grew even faster in the expansion states, but the point is that if Medicaid is truly mortality-reducing, it makes no logical sense to observe mortality rates increasing during a period in which the Medicaid share of the adult population was going up.

The Bottom Line. Even if one believed NY’s Medicaid program actually reduced mortality, one cannot cherry-pick the Sommers results. If people are willing to overlook the study’s clear methodological limitations to claim it “proves” Medicaid saved lives in NY, then they have to be prepared to concede that Maine’s and Arizona’s Medicaid programs evidently had no impact on mortality. The reality is that the New York Medicaid program is not like that in most other states, which is why Public Citizen gives it a #8 ranking. So implicitly assuming all other states would attain mortality reductions similar to New York’s is inappropriate.

Inappropriate Extrapolations from a Flawed Observational Study

The second study used to generate the lower bound mortality figure (7,115) is based on Himmelstein and Woolhandler’s own previous work in American Journal of Public Health (AJPH). Thus, they should be well aware of its limitations and flaws.

The AJPH Study Estimates Mortality Gains from Private Coverage, Not Medicaid. First and foremost, the AJPH study compared the uninsured to individuals with private insurance. There’s a mountain of evidence showing that private insurance is vastly superior to Medicaid when it comes to health outcomes, including mortality (fellow Forbes bloggers Avik Roy (here and in his new book), Scott Gottlieb, and I have all codified various pieces of this evidence). Much of this evidence also is observational, but it’s clearly not fair to dismiss the huge body of such studies that persistently show Medicaid is worse than private coverage, and then turn around and accept the results from the AJPH observational study simply because it appears to show something that would put Medicaid expansion in a favorable light.

Moreover, the best studies of Medicaid vs. private coverage, which take into account selection effects, show that private coverage is superior, e.g., Bhattacharya et al.’s analysis of the impact of insurance coverage on mortality for HIV/AIDS patients. The authors conclude: “The better outcomes associated with private insurance are attributable to the more restrictive prescription drug policies of Medicaid.” Since Medicaid produces inferior health outcomes compared to private health insurance, extrapolation from the AJPH results is inappropriate at worst and will lead to exaggerated estimates of mortality gains at best.

The AJPH Study Was Observational. Second, the AJPH study was an observational study, with all the attendant concerns about selection effects that might contaminate the comparison. To illustrate, the AJPH authors did take into account many factors (age, gender, race/ethnicity, income, education, self- and physician-rated health status, body mass index, leisure exercise, smoking, and regular alcohol use) that could account for mortality differences between the two groups compared, but not everything. It’s notable that on every metric of health risk (obesity, lack of exercise, smoking, drinking) the uninsured led riskier lives (Table 1). So it is not at all implausible to imagine they are more prone to dying in motor vehicle accidents due to lack of seatbelt use, speeding, etc. The AJPH study merely examines the uninsured and privately insured in year 1 and then measures what fraction had died 6-14 years later. It does not examine the causes of death, including many that would have nothing to do with health insurance. All of the observed difference in death rates is chalked up to being uninsured, even though this might include deaths (e.g., falling off a ladder) having nothing to do with insurance coverage.
