My new book, "Health, Medicine and Justice: Designing a fair and equitable healthcare system", is out and widely available!
Medicine and Social Justice will have periodic postings of my comments on issues related to, well, Medicine, and Social Justice, and Medicine and Social Justice together. It will also look at the health workforce, health systems, and some national and global priorities.

What are some of the components of the social determinants of health? They include, certainly, housing, food, warmth, education, the overall treatment of women and especially the education of women. There is evidence that the greatest determinant of the quality of a society, and especially its economic standing, is related to the education of women. Let us take the example of “food deserts”. 2.3 million continental US households (2.2%) are more than a mile from a supermarket and do not have access to a vehicle. There are food deserts not far from me in Kansas City, KS. In a community called “the Argentine”, largely populated by Mexican Americans, it can take more than two hours on 3 buses to reach the supermarket. It could be faster to walk, but it is hard to carry back groceries. In fact, the inability to carry much on any one trip can lead to fresher food, but this requires that stores be nearby. In an article in the NY Times (July 31, 2011), Russell Shorto describes “The Dutch way: bicycles and fresh bread”; riding bicycles everywhere not only provides great exercise, but the limited carrying capacity means that the bread – and other food – is fresh daily.

The unequal distribution of the social determinants of health is a major cause of health disparities, as demonstrated in the slides from Dr. Jones, and in a few from Dr. Woolf. The latter show us a list of the 9 most common etiologies of death, from tobacco use (400,000 deaths per year) through illicit drugs (20,000, and not including the use of “licit”, legally prescribed, drugs). Seen this way, by root cause (that is, “tobacco”, “diet and activity”, “alcohol”) as opposed to medical diagnosis (like “heart disease”, “cancer”, “liver disease”), we get a clearer picture of the true role of social determinants. Only one on the list (#4, microbial agents) is even considered part of “traditional” medicine. Dr. Woolf also demonstrates the tremendous impact that education has on health with data from the University of California San Francisco (UCSF) Center for Health Disparities showing that 26.7% of those with less than a high school education describe themselves as having “fair” or “poor” health, compared with 5.8% of college graduates – nearly a five-fold difference! The racial and gender difference in age-adjusted mortality rates is also dramatic; the death rate for black males is coming down, but still far exceeds that for white males and for black and white females (and, although the rates for women are lower than for men, they are much higher for black women than for white women).

In a provocative “thought experiment” published in the American Journal of Public Health, “Giving everyone the health of the educated: an examination of whether social change would save more lives than medical advances”[1], Dr. Woolf and his colleagues demonstrate that even if we attribute all current and recent reduction in mortality to medical advances (nowhere near true; most are due to the types of societal change generally characterized as “public health”, such as clean water, sanitation, and cleaner air), eliminating the disparities that exist on the basis of educational level would dwarf that change, as shown in this graphic.

The County Health Calculator, produced by Woolf and colleagues at Virginia Commonwealth University and available free online, allows one to look at the socioeconomic status (measured as the percent of people with incomes >200% of poverty) and education (measured as the percent of people with at least some college) for every state and county, and compare it to other states or to other counties within a state. A neat “slider” feature allows you to change these rates (e.g., make the rate the same as the best or worst) and see what the change in deaths would be. For Harris County, Texas (Houston), if 5% more people attended some college and 5% more had an income higher than twice the federal poverty level, we could expect to save 1,200 lives, prevent 12,200 cases of diabetes, and eliminate $97.8 million in diabetes costs every year.
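To make concrete what the “slider” is doing, here is a deliberately simplified sketch. The real calculator uses county-level regression estimates from Woolf and colleagues; every baseline value and coefficient below is a made-up placeholder, chosen only so that the 5%-plus-5% scenario reproduces the 1,200-lives figure quoted above.

```python
# Hypothetical sketch of the County Health Calculator's "slider" idea.
# The coefficients are invented for illustration; the actual tool uses
# published county-level regression estimates, not these numbers.

def estimated_annual_deaths(pct_some_college, pct_income_over_2x_poverty,
                            baseline_deaths,
                            baseline_college=30.0, baseline_income=60.0,
                            deaths_per_point_college=120,
                            deaths_per_point_income=120):
    """Linearly extrapolate annual deaths from shifts in the education
    and income rates, relative to a county's (hypothetical) baseline."""
    college_gain = (pct_some_college - baseline_college) * deaths_per_point_college
    income_gain = (pct_income_over_2x_poverty - baseline_income) * deaths_per_point_income
    return baseline_deaths - college_gain - income_gain

# Slide both rates up 5 percentage points, as in the Harris County example:
before = estimated_annual_deaths(30.0, 60.0, baseline_deaths=25_000)
after = estimated_annual_deaths(35.0, 65.0, baseline_deaths=25_000)
print(int(before - after))  # 1200 lives per year, under these invented coefficients
```

The point of the sketch is only that small shifts in population-level education and income translate, through such a model, into concrete counts of deaths averted.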

Bradley and Taylor, in an Op-Ed piece in the NY Times, “To fix health care, help the poor” (12/8/11), cite their research challenging the simplicity of the notion that the US spends far more per capita on health care than other developed countries. While the claim is true on its face, the US spends far less on social services that might decrease the need for health care; when the two are lumped together, the difference decreases, although the US stands out as one of very few countries where almost all combined health and social services spending goes to medical care. [2]

A major way to address social determinants and health disparities is the implementation of “Health in All Policies”, for such things as

· Land use (what is the density? Are there open spaces? How is space used?)

· Built environment (are distances to schools and shopping walkable? Are there facilities for exercise?)

· Transportation (can people get to the store or the park?)

· Agriculture (what about antibiotic and drug use in raising livestock? How about the conditions of farmworkers, including exposure to pesticides?)

· Environmental Justice (are there toxins in the environment? Lead? Who is exposed to “brownfields”, and do their children have higher rates of cancer?)

· Health policies (smoking in public places)

· Taxes (do these encourage or discourage the building of a healthful society?)

Although these areas are outside of traditional medicine, physicians can be involved in addressing them; they are (like Dr. Henry A. Withers, for whom this lecture is named) community leaders who have great moral authority.

Friday, May 25, 2012

This is the second of what will be 4 parts comprising the Henry A. Withers lecture I gave at the University of Texas-Houston Department of Family and Community Medicine. When they have all been posted, I will attach them as a "GoogleDoc".

The Social Determinants of Health

A key measure of social justice is the social determinants of health, which manifest, in the negative, as health disparities. Some of the most important work on disparities was done by the British physician Julian Tudor Hart. Practicing in the Welsh coal-mining town of Glyncorrwg, Tudor Hart was able to identify who got sick from what, and, as the physician for this community, he could identify how illness related to his patients’ economic and social standing. As an epidemiologist he gathered this data in the pre-computer era, and then expanded it by looking at access to health care across Britain. The result was a 1971 article in the Lancet called “The Inverse Care Law”,[1] in which he demonstrated with empiric data that “the availability of health care services is inversely proportional to the need for it.”

A corollary of this law is that the “higher” the level of medical involvement, in terms of both complexity and cost, the lower the overall impact on the health of the population, as demonstrated in this graphic from Dr. Steven Woolf. The greatest determinants of the health of populations are those that come before medical care. This incontrovertible truth is integrally tied to the concept of social justice. The role of social determinants in the health of the population and the production of health disparities was depicted in an outstanding cartoon by Camara Phyllis Jones and colleagues, “Addressing the social determinants of children’s health: a cliff analogy.”[2]

A link to a PowerPoint presentation of these, developed by Neal Palafox and colleagues at the University of Hawai’i Department of Family Medicine, can be found here, and is definitely worth reviewing. In brief, it pictorially demonstrates that all people are at risk for injury or illness, but some live a little closer to the “edge,” which puts them at greater risk of falling off. The same is obviously true for populations who live “closer to the edge”: they are at higher risk for disease – because of genetic risks, environmental risks, and behavioral risks – but also because they have less money or social support, or greater stress in their lives – things that, at the best of times, mean they are just able to get by and keep from falling off.

So, what can we do?

· We can pick up the
person, or people, who “fall off the cliff”, who get sick. This is known as “tertiary prevention”, because the bad thing has
already happened and we are hoping to prevent complications, prevent it from
getting worse. This is
where we spend almost all of our “healthcare” dollars.

· Or maybe we can put up a safety net. You’ve heard
of “safety net clinics” and “safety net hospitals”. This can be thought of as a
form of secondary prevention – they have already fallen, their high blood
pressure or diabetes has become uncontrolled and they are at risk for something
really bad, but we intervene. In the nick of time.

· Or we could actually put a fence up on
the edge of the cliff, preventing people from falling off. This is a kind of
“primary prevention”.

But there is something else that might
even be more effective. We can move these people further from the edge. This
“pre-primary” prevention actually involves intervening on the core risk factors
for health – addressing the social determinants of health. It is not a major
component of our current medical model.

This is what health disparities are about. They are about differences that we could control. About some people living closer to the edge. And maybe the ambulance doesn’t come as quickly; that is, high-tech medical care is less available. Or there is no safety net. And not even a fence, primary prevention. All three of these are characteristics of access to medical care. A social determinants perspective addresses health disparities by asking the questions that Dr. Jones asks: why are there differences in who is found at different parts of the cliff, and why are there differences in resources along the cliff face?

Earlier I mentioned the work of John Rawls, and tried to distinguish between the concepts of intrinsic equality (as in the Declaration of Independence, “All men [sic] are created equal”) and people actually being equal in all things (including intelligence, wealth, physical ability, genetics, etc.). I noted that Rawls speaks of distributing societal goods equally, which is a different thing. I also noted that the principle of “justice” in medical ethics implies that people with the same conditions be treated the same. This concept is equity. What is the difference between equality and equity? Which should we strive for?

The Declaration of Independence, for
example, also states that all men [sic, again] are entitled to “life,
liberty and the pursuit of happiness”. It doesn’t guarantee happiness,
but suggests some degree of equity, of equality of opportunity. What about when
people start, as the folks on the cliff do, from such different places? What
are the implications? For example, we have all heard politicians rail against
inheritance taxes as “death taxes”, but what does it say about someone who is raised
with all the advantages of money – good food, education, support, tutoring – but
still cannot compete with a person raised with nothing? Are inheritance taxes
good or bad? For myself, I’d say it depends on what we are going to spend the
money on. Bombs? Feeding people? Bailing out banks? Housing people?

A key concept, going back to Rawls, is that the exception to the general rule of distributing all social goods equally is when not doing so is to the benefit of the least advantaged. This is also the basis for why it is a different thing for the underprivileged or oppressed to band together to relieve that oppression or lack of privilege than it is for the privileged or oppressors to band together to maintain it. This is the flaw in the concept of “reverse discrimination”. Is it discrimination if we take away all of the advantages that one group had that another did not? I suppose that it is still a matter of perspective. While I do not know the source of this quotation, I believe it speaks very well to the issue of perspective: “If you’ve spent your whole life with the wind at your back, a calm day seems unfair!”

Saturday, May 19, 2012

I was recently honored to be invited by the Department of Family and Community Medicine at the University of Texas Health Science Center in Houston to give their annual "Withers Lecture", which is named for and supported by the family of Henry A. Withers, MD, a family physician and Houston civic leader. My topic was Social Justice and Health. I am "serializing" the talk in this blog, with the first part today. For those who prefer looking at PowerPoint slides, they are attached under "Links to documents in Google Docs" in the navigation bar on the left.

“Justice” is most commonly thought of in terms of courts of law, epitomized by a blind goddess holding a scale – and often a sword. Thus justice can be seen – and is seen by many – as punishment for crimes or transgressions. The rule of law may be necessary for a civilized society, but legal decisions, even in a country such as ours, are not always just: think of the death sentences overturned by new DNA evidence, or of the cases right here in Texas where a person was convicted of a capital crime while their court-appointed attorney dozed through the trial. Justice is, perhaps, in the eye of the beholder.

In the field of medical ethics, justice is one of the four key principles, but probably the least discussed. We often hear student groups discuss the relative implications of “non-maleficence” (do no harm) and “autonomy”, as, for instance, when a person wishes a costly intervention that physicians believe will not help and may hurt (the fourth is “beneficence”, do good). But “justice” refers to the concept that people with the same conditions should have the same treatments available. What, then, is “social justice”?

Also known as “distributive justice”, the term social justice was popularized by the philosopher John Rawls in the 1970s, although obviously the concept has been in existence, in one form or another, for centuries. In “A Theory of Justice”, Rawls writes:

“All social primary goods – liberty and opportunity, income and wealth, and the bases of self-respect – are to be distributed equally unless an unequal distribution of any or all of these goods is to the advantage of the least favored.”[1]

While this seems pretty expansive, as it says “distributed equally”, the inclusion of the phrase “to the advantage of the least favored” acknowledges that things are not completely equal, because there are people who are least favored. For example, even in a much more equal society, some people may be suffering from physical or mental challenges that require them to utilize more resources. From a medical perspective, we have to consider that people who advocate for the disabled, or for the expenditure of large amounts of money on diagnosis and/or treatment for those who are close to them, may see it as their individual “right”, but do not necessarily support other people having the same rights.

A somewhat earlier authority, Franklin Roosevelt, said that "The test of our progress is not whether we add more to the abundance of those who have much; it is whether we provide enough to those who have too little." This does not suggest that everything be divided equally, but makes a different moral claim: that what we do as a society (and it is fine to read “government”) should be to help those who need the help most rather than those who need it least. Often in history, including today, that concept is rejected by many. In any case it is clear that, today in the US, we do not have a system of social justice such as that described by either Rawls or Roosevelt; rather, we have a system in which the most privileged exert great influence, and (mostly seem to) use it to increase their privilege. More modern discussions of social justice and medicine can be found in the many writings of Paul Farmer, including Pathologies of Power and Partner to the Poor: A Paul Farmer Reader, and in the online journal Social Medicine, published by the Department of Family and Social Medicine at Montefiore Medical Center/Albert Einstein College of Medicine.

What are human rights? The most authoritative modern definition is that of the UN Universal Declaration of Human Rights, adopted in 1948. Article 25 states that:

“Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.”

According to the UN Association of Canada (UNAC), while “originally the Universal Declaration was conceived as a statement of objectives to be pursued by Governments, and therefore it is not part of binding international law…. it is still a potent instrument used to apply moral and diplomatic pressure on states that violate the Declaration’s principles…. in 1968, the United Nations International Conference on Human Rights agreed that the Declaration ‘constitutes an obligation for the members of the international community to protect and preserve the rights of its citizenry.’”

So, then, how is social justice related to health, health care, and medicine? In 1978, the World Health Organization issued the “Declaration of Alma-Ata” (the city, now called Almaty, was then but is no longer the capital of Kazakhstan, which was then but is no longer part of the Soviet Union!). It defined “health” as “...a state of complete physical, mental and social wellbeing, and not merely the absence of disease or infirmity…” and asserted that it “…is a fundamental human right...” This has been an important cornerstone statement for the development of health care and primary care for the last 40+ years. Primary Health Care, which was also defined at Alma-Ata, is integrally tied to the definition of health:

“Primary health care is essential health care based on practical, scientifically sound and socially acceptable methods and technology made universally accessible to individuals and families in the community through their full participation and at a cost that the community and country can afford to maintain at every stage of their development in the spirit of self-reliance and self-determination”

Note here that health care is not limited to medical care, and that it is to be “universally” accessible. The statement makes an effort to account for the different economic abilities of different countries. I once heard a presentation on the Mexican health care system, which is structured to provide universally accessible care but does not always achieve this goal. I concluded that in Mexico they have the desire to provide universal access but not the resources, while in the US we have the resources but not the desire. Under a social justice framework, the latter is far less defensible.

In 1848, the Prussian government sent a young physician named Rudolf Virchow to investigate an outbreak of typhus in the coal-mining region of Upper Silesia. His conclusion, that the social and economic situation of the residents was the main cause, is one of the first clear discussions of the social determinants of health; though he is famous for advancing the Cell Theory and his name is attached to dozens of medical eponyms (Virchow’s node, Virchow cells, Virchow’s autopsy, etc.), he may be best known as the “Father of Social Medicine”. In his report he observes that:

“The physicians are the natural advocates of the poor, and social problems fall to a large extent within their jurisdiction,” and that

“Medicine has imperceptibly led us into the social field and placed us in a position of confronting directly the great problems of our time.”

Of course, today we see much less typhus, but we still see much disease that results from social conditions. And typhus itself is not completely gone. In the 1983 Gregory Nava film “El Norte”, one of the lead characters, after finally reaching Los Angeles at the end of a long and grueling journey from Guatemala, dies of typhus contracted while crawling through sewers. Would she have gotten typhus if she had not been crawling through sewers? Unlikely. The “medical” answer to “why” she got typhus would be that she was bitten by a rat flea carrying Rickettsia typhi. But we must ask the next question, “where was she that she got bitten by a rat flea?”, and in discovering that it was in a sewer we must ask “why was she crawling through a sewer?” Finally, our question must be “what is wrong with a situation in which crawling through a sewer infested with rats, fleas, and Rickettsia typhi is better than the alternative?”

Saturday, May 12, 2012

In a “Viewpoint” article in JAMA, April 25, 2012, John Nelson, Laurence Wellikson, and Robert Wachter discuss “Specialty hospitalists: analyzing an emerging phenomenon”.[1] They describe the progression of the hospitalist model – doctors who care for patients only in the hospital, rather than also seeing them in the office – from general medical care to specialty care. They note that in recent years hospitals have hired physicians in a variety of specialties, including neurology, orthopedics, obstetrics/gynecology and others, to take care of patients, particularly at night or in emergency situations, so that other doctors do not have to come in to do so.

An argument in favor of this arrangement is that these physicians are present for urgent events (e.g., the neurology stroke specialist who is there right away to care for a person who comes to the emergency room with an acute stroke) and that they may have specialized knowledge that a more “general specialist” doesn’t. In a useful “box”, the authors summarize the criteria that might be applied in deciding whether a specialty hospitalist is a good idea. These include the number of inpatients who might require their services, the urgency of the need for those services (is it a matter of minutes that may save a life?), whether the other specialists are so tied up in the operating room or office that they could not respond promptly, and whether there is so much “sub-specialization” that many doctors in that specialty would not be capable of addressing the needs that arise in the hospital.

I have previously written about “generalist” hospitalists (Hospitalists, Dec 4, 2008) and expressed my concerns about this movement from the point of view of the patient. The advantage for hospitals and health systems that employ physicians is obvious – they can have some doctors who work in the ambulatory setting and some who work in the hospital, and each can be most “productive” in that setting without having to travel between the two, which decreases efficiency. In theory, at least, the hospitalists are very good at managing the problems of people in the hospital, so quality may improve. And, to be sure, doctors often like it also – it makes their lives easier, or more controllable: they are only responsible for outpatient medicine and don’t have to travel to the hospital to see their patients, or, if they are hospitalists, don’t have to go to the office. While not one of those listed by Nelson et al. as a benefit of having hospitalists, this advantage for doctors is real. They can work set shifts, like many of the most popular specialties such as emergency medicine, anesthesiology, and intensive care – and then be off.

This, of course, leaves the patients. While hospitalized
patients certainly want to be cared for by a physician or physicians who are
skilled in addressing the problems that they have, it is also often a very
scary time, and a good time to have the involvement of someone who knows you, who knew what you were like
before you got so ill that you had to be hospitalized. Your primary care
doctor, if you are lucky enough to have one. The technical skills of the
hospitalist may be fine, but they do not know what you were like before, and
will not be involved in your care after, your hospitalization. Plus the same
attractions that lead to hospitalists in the first place now have led to a
sub-species of hospitalist called “nocturnists”, and mean that you will not necessarily
even have the same hospitalist making decisions about your care, even during
the day, for the duration of your stay.

In addition, the skill sets of hospitalists vary. Dr.
Wachter is one of the founders of the hospitalist movement and heads a
long-standing hospitalist service at the University of California San Francisco
(UCSF). His 1996 article, The
emerging role of "hospitalists"
in the American health care system,[2]
written with Lee Goldman, is one of the seminal articles in the field. But the
results that are achieved by teams of experienced career hospitalist groups
such as his, in terms of both quality and cost, may well not be replicated by hospitalists
who are just out of their residency training and spending a year working in
this role prior to subspecialty fellowships in cardiology or gastroenterology.
Nelson, et al., cite a study by Seiler et al. showing that patient satisfaction
with hospitalist care is equal to that provided by primary care doctors,[3]
but this doesn’t separate out the satisfaction of patients who have primary care doctors who are now
not seeing them from those who do not.

That said, I do not have a problem with most specialty hospitalists. Specialists are not generalists; unlike with primary care providers, we don’t think that every person should have one of each. The person who comes in to the Emergency Department with an acute stroke and benefits from having a stroke neurologist right there is not likely to have a general neurologist. The same can be said for orthopedics and otorhinolaryngology (ENT) and neurosurgery, among others, or for people who need emergency intervention for an acute heart attack. The case of “laborists” is somewhat different; the woman having a baby (arguably the most common reason for people being glad to be in the hospital) who has been followed by an obstetrician or family doctor might well want and expect to be delivered by that doctor (a point acknowledged by Nelson). While many primary care doctors would like to provide this continuity to their patients, they may be unable to in the system they work in. And if it is not their “fault”, it is a pretty guilt-free way to enjoy the benefit.

If the hospitals and health systems make more money and operate more “efficiently” with separate hospitalists and “ambulists” (yes, this term is being used by some!), and if the doctors are happy with the arrangement because it makes their lives more controllable, then the boat on generalist hospitalists and “laborists” has probably already sailed, at least in communities large enough for this to be feasible.

Anyone who has flown in and out of Kansas City International Airport (KCI) knows what a pleasure it is compared to other airports in even relatively big cities. Built on only one level in 3 almost-circular terminals, it has only a few gates for each security checkpoint, so the lines are relatively short (compared to, say, the nightmare at Denver International). When you come in, you get off your plane, walk right out into the hall where your baggage carousel is nearby, and then walk right out to the street (even sooner if you have no checked bag), where you can be picked up or go to your car in the garage right there. It is a true pleasure for the traveler.

But it is not so desirable for the airport and airlines. I
have heard that this setup requires more security people than any airport
except Heathrow. There are rumblings about redesigning, maybe rebuilding, the
airport to make it more “efficient”. Sure, it will be worse for the traveler,
but that’s the way it goes.

So maybe you want to ask your doctor if s/he will see you in
the hospital. And let the hospital and health system know that you think it is
important, too. It is unreasonable to ask your primary care doctor to work a
full day in the office and also care for patients in the hospital; that time
needs to be built into their schedules by their employers. It could work; you
never know. What’s good for people sometimes actually happens.

And if you haven’t flown in and out of KCI, you should do it
soon before it becomes Denver.

Saturday, May 5, 2012

In October, hospitals around the country will begin having their payments from Medicare affected by the Value-Based Purchasing Program (VBP). The plan is that a portion of the money that hospitals would have received (beginning at 1% and rising gradually to 2% by 2017) will be withheld and then re-distributed based on a variety of performance measures, with low-performing hospitals losing money and high-performing hospitals getting bonuses. The measures that will be used in federal FY 2013 (which starts in October 2012) are “clinical process” and “patient satisfaction” indicators; they will be expanded the next year to include also patient mortality, hospital-acquired conditions, and patient safety measures. These are succinctly portrayed in a helpful “box” within the short Perspective “Making the best of hospital pay for performance” by Andrew Ryan and Jan Blustein in the New England Journal of Medicine, April 26, 2012.
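The withhold-and-redistribute mechanics can be sketched with a toy calculation. The linear score-to-payback mapping below is my own simplification for illustration only; CMS's actual scoring and redistribution formula is considerably more involved, and the dollar figures are invented.

```python
# Toy sketch of VBP withhold-and-redistribute; NOT CMS's actual formula.
# A fraction of the base Medicare payment is withheld, then paid back in
# proportion to a 0-to-1 performance score. Under this invented linear
# scheme, a score of 0.5 exactly breaks even and top scorers earn a bonus.

def vbp_adjusted_payment(base_payment, performance_score, withhold_rate=0.01):
    """Return the payment after withholding and performance-based payback."""
    withheld = base_payment * withhold_rate
    earned_back = withheld * 2 * performance_score  # up to 2x the withhold
    return base_payment - withheld + earned_back

# With a 1% withhold (the FY 2013 starting rate) on $10M of base payments:
low_performer = vbp_adjusted_payment(10_000_000, performance_score=0.2)
high_performer = vbp_adjusted_payment(10_000_000, performance_score=0.9)
# low_performer  -> 9,940,000 (loses money)
# high_performer -> 10,080,000 (gains a bonus)
```

The sketch shows why the stakes grow with the withhold rate: doubling `withhold_rate` to 0.02 (the planned 2017 level) doubles both the potential loss and the potential bonus.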

Ryan and Blustein review the history of previous “pay for performance” efforts by Medicare, noting that a demonstration project begun in 2004 that required hospitals to report their quality data and paid money to those hospitals that did well had initial success. However, this success was not replicated in the second phase of the project beginning in 2006; while there was “…an increase of nearly 50% in the total amount of incentives paid out…these changes did not catalyze additional quality improvement…improvement relative to comparison hospitals actually declined.” Moreover, a similar “…program implemented by Medicaid in Massachusetts with incentives approximately 5 times the size…also showed that pay for performance had no effect on quality.”

They go on to observe that this is not necessarily a completely fair comparison because, for example, the economy was different in the two 3-year periods (always a problem for research in the real world; it changes!). But they remain guardedly optimistic about, or at least resigned to, VBP; their final line, scarcely a rousing call to action, is “It will be critical to ensure that VBP is as good as it can be.” They call for the Patient-Centered Outcomes Research Institute (PCORI, see Patient-centered research: answering the questions that matter to people, April 22, 2012) to study it.

The other thing that will be important is how well the quality measures actually measure quality. Of the initial two, the “clinical process” measures, which account for 70% of the withhold, are the same ones that have been in place for several years (such things as getting a beta-blocker after a heart attack and getting antibiotics within a certain period of time if you are diagnosed with pneumonia). “Patient satisfaction” is also a good thing; clearly we all want hospitals to be clean and quiet and to have doctors, nurses and others communicate with us clearly and completely. Keeping people out of pain in the hospital is also the right thing to do, but sometimes (although more often in outpatient settings) it comes into conflict with efforts to monitor how often doctors prescribe narcotics. In an interesting piece in the “Science Times” section of the NY Times on May 1, 2012, E.R. Doctors Face Quandary on Painkillers by Catherine Saint Louis, an emergency room doctor notes this conflict and observes “If you’re going to criticize me for not giving out narcotics, and you never praise me for correctly identifying a drug-seeker, then I’m going to give out narcotics.” Indeed. While this stimulus-response (akin to the Hawthorne effect – behavior changes depending on what is being measured) is the basic idea behind VBP (along with monetary incentives), it illustrates that sometimes incenting a desired behavior can have an unintended negative impact.

In general, hospitals, especially the more financially successful ones, are very good at modifying their behavior in response to economic incentives. While we can hope that this results in higher-quality care for patients, all too often it appears that they are just “gaming the system”, seeking to do only those things that make them money and avoiding patients who may put them at risk. The problem is that not all hospitals are starting off with equal resources, and those with the biggest challenges (in terms of unreimbursed patients) will probably do worse under such a system. While Ryan and Blustein note that “CMS has pledged to monitor whether VBP leads to ‘changes in access to [care] and the quality of care furnished to beneficiaries, especially among vulnerable populations’”, they also observe that the impact on hospital bottom lines is likely to precede any significant quality changes, and that the impact may be particularly great on “…safety net hospitals, which operate on very small margins.”

This is, to me, very important, because these are the hospitals that provide disproportionate care to poor, uninsured, and generally medically underserved people. Because such hospitals are often located in poor neighborhoods or rural areas, or because they depend on (almost constantly decreasing) public funding, they are not among those that already have a robust bottom line and will be able to invest in the equipment and process changes needed to be the “winners” in VBP. They are also likely to have a lower percentage of Medicare patients, in part because once people, even poor people, receive Medicare they are no longer uninsured and can (and sometimes do) go to hospitals perceived as “better”, and in part because the proportion of people who are sick enough to be in the hospital despite being younger (under 65) rises as socioeconomic status decreases (see “social determinants of health”, discussed in several previous blogs, including Michael Marmot, the British Medical Association, and the Social Determinants of Health, November 1, 2011 and Social Determinants, Personal Responsibility, and Health System Outcomes, September 12, 2010).

When I first heard of value-based purchasing my initial reaction was both pleased and confused. Wow, I thought, they are actually going to pay for medical care based upon values? I almost immediately realized my mistake – that their main “value” was paying less money. It is possible that the “other kind” of values do play a part; if Medicare is going to pay more for higher quality, or less for lower quality, that is something. But there is another value that is still missing, and that is the value of ensuring that access to high-quality care is available to, and provided for, everyone. Hospitals (the “high end” ones) are already trying to figure out how they can divest themselves of Medicare patients, on whom they already make less money, and replace them with patients with better insurance. To the extent that they are successful, it will just add Medicare recipients to the growing list of “less desirable” patients.

Maybe we need to move to a program in which no people are “less desirable”. Where everyone is covered. Where hospitals that care for the most needy do not suffer as a result; where hospitals that cater to the least needy do not profit from this decision.

Then, perhaps, Medicare – for all – could really base its payments on value, and on values.