My new book, "Health, Medicine and Justice: Designing a fair and equitable healthcare system", is out and widely available!
Medicine and Social Justice will have periodic postings of my comments on issues related to, well, Medicine, and Social Justice, and Medicine and Social Justice. It will also look at health, the workforce, health systems, and some national and global priorities.

Saturday, December 31, 2011

“Magical thinking” is believing something is true because you want it to be true even when there is strong evidence that it is not. It is normal in young children. They believe in Santa Claus and the Easter Bunny and conjurer’s tricks. This is in part because adults encourage them to, and because they do not know the evidence and they haven’t enough brain maturity to make the connections. Beyond a certain age, however, it is not normal. Yet we do it all the time.

It is common enough in politics, for sure. A wise expert (OK, me) once said “Data is only useful if it confirms your preconceived notions”. Otherwise, hearing the data that should demonstrate that you are wrong only confirms your pre-existing beliefs because it reminds you of why you believe it. The evidence is the evidence, and sometimes it is inconclusive and subject to different interpretations depending upon one’s perspective. That’s what makes horse races. Sometimes it is conclusive, but leads to a different conclusion than the one that you want to hear.

Religion is different; it is, by definition, based on faith. It becomes confusing, for me, when this is complicated by searching for evidence (e.g., the Catholic Church searching for evidence of a miracle in order to sanctify someone), but at bottom it is about faith. Some people have lost their faith in the religion in which they were brought up because of seeing contradictory evidence in the world, others have reconciled that evidence with their beliefs, others manage to separate the evidence from their faith, and still others reject all the evidence of their senses if it contradicts their faith. We have classic examples of this last, with lecturers in the early European medical schools reading from Aristotle on anatomy, ignoring the visual evidence provided by the cadavers being dissected in front of them that demonstrated that what Aristotle described was wrong. Luckily for anatomy and medicine, the schools were able to move on from this, in part because Aristotle, while revered, was not a Christian religious authority. It was rougher for Galileo when he demonstrated that the earth revolves around the sun.

I understand people’s interest in believing to be true things that the evidence demonstrates are not. It is comforting, it offers hope, and it can offer consistency. I wish, sometimes, I had more of it. My son died by suicide 9 years ago. If I believed that there was an afterlife, and that he was somewhere happily being cared for by my mother, who died over 30 years ago, it would make me feel better. After all, she was a wonderful, nurturing person, a kindergarten teacher who loved children, and she died just after he turned 2, so never got to see him grow up. It would be great to believe that they were getting to know and enjoy each other now. But I don’t.

Nonetheless, I am sure there are things that I believe that are contrary to the evidence. Certainly, things I believe that have conflicting evidence. Like that people are good, that the world can be a better place, that the ‘better angels’ of our nature may overcome selfishness and greed and hypocrisy and meanness. Sometimes that belief is sorely tried. It has been a particularly hard couple of years as the perpetrators of the greatest worldwide financial crisis have gotten off and maintained and increased their wealth while hundreds of millions of their victims have had their lives ruined, with no end in sight. And with whole cohorts of politicians and pundits advocating that these perpetrators be spared any penalty while slashing any programs that benefit their victims.

For most of us, and in most societies, there are limits to what we tolerate because of people’s beliefs. We do not, as a rule, accept that a false belief, a delusion, about another is an excuse for murder. Of course, if that false belief is on the part of the government that sends young people to war and to kill, it is accepted. And for many zealots, of many beliefs and causes, whether Islamic terrorists or anti-abortion murderers, there is a portion of the population who will accept it.

One group that has good reason to want to believe in things for which there is no evidence is those who are threatened with death from a disease for which there is no effective, “approved”, treatment. Cancer, for instance, or AIDS. In the 1980s and 1990s, AIDS advocacy groups pushed for quick FDA approval for drugs to fight a disease that was killing lots of people. To some degree it happened, and luckily those drugs were effective, and better drugs were developed, and today AIDS is most often a chronic disease. When a study showed that bevacizumab (Avastin®), an anti-cancer drug created through recombinant DNA that had a positive effect on some other cancers such as colorectal cancer, was also effective in prolonging the lives of women with metastatic breast cancer for a few months (not curing them), the large breast-cancer advocacy community pushed the FDA for early approval. It was approved. But then more studies appeared that showed it was not effective. Several of them. And the FDA, appropriately based upon the evidence, withdrew its approval. Blue Cross/Blue Shield of California then decided it wouldn’t pay for it. Yes, much of the motivation was financial – it costs $90,000 per year to treat a patient (except less, really, because few last a year), but it was based on the evidence. Would you pay $90,000 for a drug that didn’t work? How about spending that on treating someone else with a drug that does work? But having someone else pay for it for you (your insurance company and those other people who are paying premiums) is less painful. There was a big uproar. BC/BS (and Medicare) are now again paying $90,000 a year for treatment of breast cancer with a drug that doesn’t work.

On the other hand, kowtowing to true believers can have the opposite effect. It can lead to restricting access to a drug that does work. This has occurred recently with Plan B One-Step®, “the morning-after pill” which effectively provides emergency contraception if taken within 72 hours (maybe more) of unprotected intercourse. Approved for women 17 and over without a prescription, this form of the hormone levonorgestrel is kept “behind the counter” so those under 17 cannot get it. It doesn’t make sense, since girls under 17 can and do have unprotected sex and get pregnant. It is also safe. So, recently the FDA, examining all the evidence, recommended that it be made available without a prescription and sold “over the counter”. Then Secretary of HHS Kathleen Sebelius overruled, in an almost unprecedented action, the FDA’s recommendation. There was no science or evidence behind the Secretary’s action. Her stated reason, that younger women cannot understand the instructions, would, if one wanted to believe it, be an unreasonable standard. Can they understand the instructions to prevent adverse effects from ibuprofen or acetaminophen? Is the risk of pregnancy in these girls less than the risk from taking Plan B incorrectly? Nonsense. It is a political judgment, pandering to the belief of those who magically believe that because they don’t want young girls to have sex they won’t as long as contraception is not available to them to “encourage” it.

People read and support things that agree with what they think. I do not delude myself into thinking that what I write in this blog “converts” people; I recognize that people who read and like it probably already agree with me. But I do try to present evidence. And sometimes readers challenge me on my interpretation of the evidence (see, for example, the comments on Fluoridation: Dental health for all, October 26, 2011). One of the hardest things for physicians to do is to “un-learn”, to change the beliefs that they have had for years or decades when new information shows that what they believed is wrong. It is hard for them, and harder for the lay public, to understand that doing something was the right thing in the past because of the best evidence at the time, but is the wrong thing now. And what we think is the right thing now, based on the best evidence available, may not be true in the future. That is how science evolves.

Sunday, December 18, 2011

That the US spends far more, in total and per capita, on health care than any other country is a well-established fact which no one bothers to deny. That this expenditure has not brought us greater health is also established fact, although many still find this hard to believe, or don’t want to believe it. That we do not have the “best health care system in the world”, or even close, or even, actually, a health care system at all, is also demonstrably true. This does not stop a larger percent of the population, and particularly the very privileged sector represented by politicians, from maintaining that untruth.

However, in a provocative op-ed in the New York Times (“To fix health care, help the poor”), Elizabeth H. Bradley and Lauren Taylor argue that it is only when health care is viewed in its most narrow sense that the US spends more than other countries. Their study of 30 countries’ expenditures, “Health and social services expenditures: associations with health outcomes”[1], “…broadened the scope of traditional health care industry analyses to include spending on social services, like rent subsidies, employment-training programs, unemployment benefits, old-age pensions, family support and other services that can extend and improve life.”

Essentially, their data shows that having services available to people that improve the quality of their lives, or, more important, decrease the negative health impact of the adverse circumstances into which they are born, develop, and live, lessens disease burden and improves health. This then decreases the costs of providing medical care to them. For example, they note, “The Boston Health Care for the Homeless Program tracked the medical expenses of 119 chronically homeless people for several years. In one five-year period, the group accounted for 18,834 emergency room visits estimated to cost $12.7 million.”

Bradley and Taylor indicate that among industrialized countries, the US ranks #10 in total health + social service spending, and is one of only 3 that spend more on health care than on all other social services. This means that, in addition to not getting the preventive or early-intervention health care that they need, Americans are at higher risk of illness and more ill when they come to medical attention. They may not be homeless, although obviously homelessness dramatically increases risk. People may not have adequate food or adequate warmth (see the discussion of “excess winter deaths” in Michael Marmot, the British Medical Association, and the Social Determinants of Health, November 1, 2011), and may not have had a safe environment. They likely had far too little income. Many of them are children, and many of those, and often their parents before them, have had an inadequate education. A large number of the determinants of health are antenatal, and many more are in the early years of life. The other group at high risk of both adverse health outcomes and the poverty-related social deficits that influence them is the elderly. So what do we see in the US? Threats to cut Medicare, cut Social Security, cut education.

This wouldn’t affect everyone equally, of course. Only the most vulnerable. Or, at least, the more vulnerable. The wealthy, of course, are unlikely to be inadequately housed, inadequately nourished, inadequately educated, and, in a tautology, inadequately employed. Another recent study, from the Organization for Economic Cooperation and Development (OECD), called “Divided we stand: why economic inequality keeps rising”, demonstrates rising inequality in income as indicated by the difference between the income of the top 10% and bottom 10%. “The income gap has risen even in traditionally egalitarian countries, such as Germany, Denmark and Sweden, from 5 to 1 in the 1980s to 6 to 1 today. The gap is 10 to 1 in Italy, Japan, Korea and the United Kingdom, and higher still, at 14 to 1 in Israel, Turkey and the United States. In Chile and Mexico, the incomes of the richest are still more than 25 times those of the poorest, the highest in the OECD, but have finally started dropping. Income inequality is much higher in some major emerging economies outside the OECD area. At 50 to 1, Brazil's income gap remains much higher than in many other countries, although it has been falling significantly over the past decade.”

In the report’s “country note” on the US, it observes that “The United States has the fourth-highest inequality level in the OECD, after Chile, Mexico and Turkey. Inequality among working-age people has risen steadily since 1980, in total by 25%. In 2008, the average income of the top 10% of Americans was 114 000 USD, nearly 15 times higher than that of the bottom 10%, who had an average income of 7 800 USD. This is up from 12 to 1 in the mid 1990s, and 10 to 1 in the mid 1980s….Income taxes and cash benefits play a small role in redistributing income in the United States, reducing inequality by less than a fifth – in a typical OECD country, it is a quarter. Only in Korea, Chile and Switzerland is the effect still smaller.” Of course, comparing deciles is deceiving; as the Occupy Wall Street movement emphasizes, the concentration of wealth is in the top 1%, and economist and NY Times columnist Paul Krugman (“We are the 99.9%”, November 24, 2011) and others point out that most of that wealth in the US is in the top 0.1%! The wealthiest 400 families in the US own as much as the bottom 50% of the population.

One obvious result of the rising inequality in the US is the increase in the overt control that this wealthy class exerts over the political process, through direct lobbying, political contributions, employment after and between stints of government service, and control of media. The “corporate personhood” decision by the US Supreme Court in Citizens United simply codified and protected this inequality. But income inequality in itself is not sufficient to lead to the destruction of the social safety net that exposes increasing numbers and percents of people to ravages that adversely affect their health. It also requires extreme selfishness and disrespect, so that billionaire people and corporations pay little in tax, and governments are purposely squeezed so that they have neither the will nor the resources to provide services.

The findings of Bradley and Taylor are not news to the public health community, of course, which is very familiar with the social determinants of health and the positive impact that investment in basic social supports has on the health outcomes of both populations and individual people. Investment is required to see future benefit, and the investment that we need, and are not making, is in education, is in nutrition, is in housing. It is far more than a shame. It is shameful.

Saturday, December 10, 2011

Much of the cost of training physicians is currently borne by Medicare (and, to a lesser extent, Medicaid). This is known as Graduate Medical Education, or GME, funding, and it pays some, all, or more than all (depending upon the hospital and based upon a complicated formula discussed on May 25, 2009, Funding Graduate Medical Education) of the cost of training residents in the various specialties that comprise medicine. For those unfamiliar with medical education, graduation from medical school, while it confers the MD (or DO, Doctor of Osteopathy) degree and the title “doctor”, no longer permits practice in any of the US states. At least one, and in some states two, years of residency (“GME”) is required for licensure, and most doctors complete an entire residency of 3 or more years to make them eligible for certification as a specialist in a field (e.g., family medicine, general surgery, internal medicine, psychiatry, etc.). Fellowship training requires additional years beyond the core residency to become a sub-specialist – for example, those who complete an internal medicine residency can then do additional years to become a cardiologist, gastroenterologist, endocrinologist, etc.

Medicare augments its payments to institutions (usually hospitals, although there are a few consortia and federally-qualified health centers) with two types of payments: Direct GME, which is intended to pay residents’ salaries and the cost of teaching, and Indirect Medical Education (IME), which is for the additional costs that training hospitals bear for a variety of reasons. (In addition to the piece linked above, see also Training rural family doctors, Nov 5, 2010; PPACA, The New Health Reform Law: How will it affect the public's health and primary care?, Apr 22, 2010; Primary Care and Residency Expansion, Jan 7, 2010.) These payments have been the cornerstones of funding for residency education. Because the amount is tied to the percent of Medicare patients in a hospital, rather than the total number of patients cared for in hospitals or outpatient settings, it could be (and has been) argued that funding GME should be done comprehensively and separately from Medicare. The most persuasive argument is that private insurers should also contribute to GME (they don’t, although Medicaid does in some, but not all, states). On the other side, many fear that uncoupling GME funds from Medicare would make it easier for a Congress looking at ways to cut the budget to cut GME than it would be if GME remained part of Medicare.

Except this year, with exceptionally high pressure to cut the budget, Medicare is not even sacrosanct, although, as I have recently argued (Medicare: A lifeline, not a Ponzi scheme, Dec 2, 2011), most of the proposals to cut it across the board by tactics such as raising the age of eligibility are poorly conceived. So there are now proposals to cut the funding from Medicare for GME. Unsurprisingly, this has created great anxiety in the community of academic health centers, and the Association of American Medical Colleges (AAMC), which has strongly supported expansion of GME residency slots, is quite alarmed (Preserve Medicare support for physician training, revised Oct 2, 2011). The Accreditation Council for Graduate Medical Education (ACGME), which accredits institutions that sponsor residency programs and, through its subsidiary Review Committees (RCs), each individual specialty and subspecialty, has done a study showing that cuts in residency positions have already occurred and more major cuts are threatened if Medicare decreases its funding ("The Potential Impact of Reduction in Federal GME Funding in the United States: A Study of the Estimates of Designated Institutional Officials”). ACGME CEO Thomas Nasca, MD, is quoted by AAFP News Now as saying “We will actually reduce the number of physicians who are trained in the United States at a time when all workforce studies are demonstrating a mounting deficit of physicians….That will place us in a position where our physician-to-population ratio in 2020 and beyond is below (that of) most of the developed countries in the world." The study found that “With a 33 percent reduction in GME funding

68.3 percent of responders said they would reduce the number of core residency positions,

60.3 percent would reduce the number of subspecialty fellowship positions,

4.3 percent would close all core residency programs, and

7.8 percent would close all subspecialty programs.”

Because there are many more “core” residency positions than subspecialty fellowship positions, core positions would be disproportionately affected by across-the-board cuts. In addition, residency programs in primary care, which are not as profitable to the sponsoring institution, are even more likely to be cut despite the service that they provide to patients, especially those most in need. Perry Pugno, M.D., M.P.H., AAFP vice president for education, notes in that same article that "…any cuts to GME that go across the board are going to hurt primary care -- especially those of us who disproportionately take care of adults with chronic illnesses….In communities where primary care residency programs are present, those programs become the access point for the poor and disenfranchised of the area.” He says that it's not unusual for family medicine residency programs to see patients who live both in poverty and with numerous chronic illnesses. "The payment for taking care of those patients is so low that the local medical community often doesn't want to provide that care…But residency programs take all comers."

The key issue that Pugno is addressing is a very important one that is not usually made explicit in national policy discussions: our current method of allocating Medicare GME funds to institutions (hospitals) rather than to individual residency programs tends to encourage the funding of positions in specialties that most profit those hospitals. The interests of the American people, in regard to the kinds of specialists they need, are not necessarily (and I would argue in fact are not) the same as the interests of the hospitals that sponsor residencies. Hospitals like to fund specialties whose trainees’ work enhances their revenue (e.g., cardiology fellows, who can increase the number of profitable procedures that are done) or at least decreases their loss (e.g., emergency medicine residents, who can fill gaps in seeing patients in emergency departments). Indeed, when hospitals can afford to, they often augment Medicare GME with their own funds to create more such positions. This is about their own financial interest, and does not take into account whether or not the US needs more cardiologists or ER docs, or more family physicians and general surgeons.

This contrast between the interests of the hospital (what kind of residency positions are most beneficial to its bottom line) and the needs of the population is, of course, a subset of a larger tension: we train doctors in highly specialized tertiary-care academic health centers, while they will mostly practice in the community. There are a number of reasons that this is not brought up more often. For the general lay public, including most members of Congress and their staffs, it seems like a subtle difference. For experts, such as the AAMC, the issue is that they represent the interests of the medical schools, and want to have those interests seen as also representing the interests of the US population. Of course, they do not always coincide, especially with the interests of the most rural, poor, minority, and other underserved portions of that population.

I think we need to use every opportunity to make this issue more clear and open. While it is probably true that it is a mistake to decrease federal funding for GME, it is absolutely necessary to increase the support for primary care and, in particular, family medicine. And this will only happen if GME funding is explicitly required to be spent on primary care programs, with rules that “prevent substitutions”.

Friday, December 2, 2011

In an earlier post (Medicare: We need to expand it, not cut it!, July 1, 2011), I commented on the proposals from politicians such as Wisconsin representative Paul Ryan and Connecticut Senator Joseph Lieberman to limit Medicare. I quoted economists Austin Frakt and Aaron Carroll (as cited by Paul Krugman, “Medicare saves money”, NY Times, June 12, 2011), from their post on the Incidental Economist, that “…right now Americans in their early 60s without health insurance routinely delay needed care, only to become very expensive Medicare recipients once they reach 65. This pattern would be even stronger and more destructive if Medicare eligibility were delayed.” It is a stupid idea, designed more to engender the political support of people who do not think the issue through than actually to save money.

There are other similar proposals to “fix” Medicare that fit the same pattern: they superficially seem to make sense, but are actually nonsense. One of the most popular is the idea that we exclude “wealthy” seniors from Medicare, or, at least, require them to make a significant financial contribution. This contribution could consist of premiums paid to Medicare that were tied to income (or wealth, more relevant for retired people but much harder to assess accurately) or co-payments for services, again tiered to income. This seems to make sense – why not? There are many well-to-do elderly; why should currently-working people, who are struggling to make ends meet, have to pay for their care?

One reason is that Medicare is an “entitlement” because these people have paid for it in advance through their taxes during their working lives. Some of this comes from the specific Medicare deduction taken out of each of our paychecks, which supports only “Part A” (coverage for hospital care), and some from the general income tax revenue that pays for “Part B” (doctors) and “Part D” (drugs). People pay into these plans during their working lives, and draw the benefits when they need them when they are older. This is, in principle, what “saving” is about, but it goes beyond an individual retirement plan to cover everyone. This is the nature of social insurance.

Governor Perry of Texas, a Republican candidate for the presidential nomination (perhaps, if we are lucky, soon to be former candidate), called Medicare (and Social Security, vide infra) “Ponzi schemes”: “Perry: I think every program needs to stand the sunshine of righteous scrutiny. Whether it’s Social Security, whether it’s Medicaid, whether it’s Medicare. You’ve got $115 trillion worth of unfunded liability in those three. They’re bankrupt. They’re a Ponzi scheme.” They are not. A “Ponzi” scheme involves taking one person’s property (money) and using it to pay off previous investors, who are seeking to make money on their investments. Medicare (and Social Security) are social insurance programs where the benefit is understood to be care (in the case of Medicare) or [minimal] income (in the case of Social Security). The entire beauty of both of these programs is that they involve everyone. Thus the well-to-do as well as the poor and the people in the middle have a stake in keeping the programs running and effective.

If we were to exclude certain sectors of the population from receiving benefits from either of these programs, it would undermine the collective investment that we as a society have in each other. The better off, better educated, and more empowered now fight for these programs because they are beneficiaries, and this keeps the programs in place for those who are not so privileged. It is probably this very sense of mutual interdependence that makes ideological conservatives oppose them, but such opposition is short-sighted. The reason for having social insurance programs that make us interdependent is that we are interdependent. Society, in the US (and, arguably, worldwide), requires not only healthy, educated, productive workers but also consumers who are able to purchase goods and services. Billionaires like Warren Buffett call for higher taxes on the wealthy (an idea picked up on by President Obama) because they understand that a prosperous society requires contributions from everyone. We ARE in it together.

If we were to exclude only the very wealthy from benefits under these programs (say the top 1%), it would not hurt them financially, but it would hurt the rest of us because these very powerful people would no longer have a personal stake in supporting such programs. And, of course, it would save essentially no money; the corollary of the enormous concentration of wealth in a small number of people is that there are not very many of them. Thus, if they never drew a single dollar of benefit from Medicare (or Social Security), the programs would not be any better off. In order to save money, we would have to exclude a lot of people beyond the very wealthy (10%? 20%? 30%? of the population), and excluding that large a section of the population would truly reduce support.

More recently, Jane Gross writes in the NY Times about “How Medicare fails the elderly” (October 16, 2011). Her emphasis is not on excluding people from coverage, but rather on not covering services that do not enhance, and often decrease, recipients’ quality of life. Medicare pays for many services that fall into this area, and the reason rarely has to do with the desires of the patients themselves. “Of course, some may actually want everything medical science has to offer. But overwhelmingly, I’ve concluded in a decade of studying America’s elderly, it is fee-for-service doctors and Big Pharma who stand to gain the most, and adult children, with too much emotion and too little information, driving those decisions.” Among the treatments that she notes Medicare pays for but are usually not medically indicated (especially in the old, debilitated, and demented) are feeding tubes, many forms of surgery (particularly abdominal and joint replacement), and “tight” control of Type II diabetes. All of these treatments have high risks and rarely prolong life while significantly decreasing its quality.

Gross notes that when these complications arise patients often need long-term, very expensive (she cites costs for her mother 8 years ago of $14,000 a month!) care in nursing homes, which Medicare does NOT pay for. Medicaid will, but only after the senior has exhausted all their resources (including savings, house, etc.), and then only in some nursing homes which are willing to take Medicaid reimbursement, and these are often not those of highest quality. Thus, by paying for the performance of procedures that do not help, Medicare leads patients into worse quality of life at high cost.

Clearly, the motivations of the drug and device makers, hospitals and physicians and nursing homes are often (in some cases usually or always) financial, but this is not the case for the family members, who mostly want to “do the best” for their parent or relative. However, given unclear guidance by their physicians, or incorrect information from any source, they may associate “doing something” with “doing the best thing”; often “doing the best thing” is not doing “something”. If Medicare did not pay for unnecessary and potentially harmful procedures, there would be little motivation among providers to do them, and it would not only save money but, more important, improve health care and preserve the dignity and quality of life of people in their last years.

There is a solution to the potential bankrupting of Medicare. One: Pay only for medically necessary and indicated services. Two: Revise the Medicare fee schedule to maintain the payment for primary care services but decrease excessive payment for high-cost specialty services. Three: Expand Medicare to include everyone. Then we all have a stake, right now.