Health Care: Who Knows ‘Best’?

One of the principal aims of the current health care legislation is to improve the quality of care. According to the President and his advisers, this should be done through science. The administration’s stimulus package already devoted more than a billion dollars to “comparative effectiveness research,” meaning, in the President’s words, evaluating “what works and what doesn’t” in the diagnosis and treatment of patients.

But comparative research on effectiveness is only part of the strategy to improve care. A second science has captured the imagination of policymakers in the White House: behavioral economics. This field attempts to explain pitfalls in reasoning and judgment that cause people to make apparently wrong decisions; its adherents believe in policies that protect against unsound clinical choices. But there is a schism between presidential advisers in their thinking over whether legislation should be coercive, aggressively pushing doctors and patients to do what the government defines as best, or whether it should be respectful of their own autonomy in making decisions. The President and Congress appear to be of two minds. How this difference is resolved will profoundly shape the culture of health care in America.

The field of behavioral economics is rooted in the seminal work of Amos Tversky and Daniel Kahneman begun some three decades ago. Drawing on data from their experiments on how people process information, particularly numerical data, these psychologists challenged the prevailing notion that the economic decisions we make are rational. We are, they wrote, prone to incorrectly weigh initial numbers, draw conclusions from single cases rather than a wide range of data, and integrate irrelevant information into our analysis. Such biases can lead us astray.

The infusion of behavioral economics into public policy is championed by Cass Sunstein, a respected professor of law and longtime friend of President Obama; he is now in the White House, overseeing regulatory affairs, and will have an important voice in codifying the details of any bill that is passed. In Nudge: Improving Decisions About Health, Wealth, and Happiness, Sunstein and Richard Thaler, a professor of behavioral science and economics at the University of Chicago, propose that people called “choice architects” should redesign our social structures to protect against the incompetencies of the human mind.1 Those who understand thinking better can make life better for us all.

Thaler and Sunstein build on behavioral economic research that reveals inertia to be a powerful element in how we act. Most people, they argue, will choose the “default option”—i.e., they will follow a particular course of action that is presented to them instead of making an effort to find an alternative or opt out. Further, they write,

These behavioral tendencies toward doing nothing will be reinforced if the default option comes with some implicit or explicit suggestion that it represents the normal or even the recommended course of action.

Sunstein and Thaler propose to use default options as “nudges” in the service of “libertarian paternalism.” For example, to promote a healthy diet among teenagers, broccoli and carrots would be presented at eye level in the cafeteria and would be easily available, while it would take considerable effort for students to locate junk food, thereby nudging them into accepting a healthier diet. But all choices should be “libertarian”—people should be free to opt out of “undesirable arrangements if they want to do so.” The soft paternalistic nudge Sunstein and Thaler envisage should try “to influence choices in a way that will make choosers better off, as judged by themselves.” They are very clear that nudges are not mandates, and that behavior should not be forcefully directed by changing economic incentives. Your doctor should not be paid less if she follows a course of treatment that she can defend as reasonable, even if she deviates from officially issued guidelines. To prevent policy planners from going down the slippery slope of coercion, there should, in Sunstein’s view, be safety rails. Whatever the proposal put forward, he has written, people must retain “freedom of choice” and be able to oppose the more objectionable kinds of government intervention.

Such freedom of choice, however, is not supported by a second key Obama adviser, Peter Orszag, director of the Office of Management and Budget. In June 2008, testifying before Max Baucus’s Senate Finance Committee, Orszag—at the time director of the Congressional Budget Office—expressed his belief that behavioral economics should seriously guide the delivery of health care. In subsequent testimony, he made it clear that he does not trust doctors and health administrators to do what is “best” if they do no more than consider treatment guidelines as the “default setting,” the procedure that would generally be followed, but with freedom to opt out. Rather, he said,

To alter providers’ behavior, it is probably necessary to combine comparative effectiveness research with aggressive promulgation of standards and changes in financial and other incentives. [Emphasis added.]

The word “probably” is gone in the Senate health care bill. Doctors and hospitals that follow “best practices,” as defined by government-approved standards, are to receive more money and favorable public assessments. Those who deviate from federal standards would suffer financial loss and would be designated as providers of poor care. In contrast, the House bill has explicit language repudiating such coercive measures and protecting the autonomy of the decisions of doctors and patients.2

On June 24, 2009, when President Obama convened a meeting on health care at the White House, Diane Sawyer of ABC News asked him whether federally designated “best practices” would be mandated or simply suggested. That is, would he recommend Orszag’s shove or Sunstein’s nudge?

Obama: …Let’s study and figure out what works and what doesn’t. And let’s encourage doctors and patients to get what works. Let’s discourage what doesn’t. Let’s make sure that our payment incentives allow doctors to do the right thing. Because sometimes our payment incentives don’t allow them to do the right things. And if we do that, then I’m confident that we can drive down costs significantly.

Sawyer: Will it just be encouragement? Or will there be a board making Solomonic decisions… about best practices?

Obama: What I’ve suggested is that we have a commission… made up of doctors, made up of experts, that helps set best practices.

Sawyer: By law?

Obama: …If we know what those best practices are, then I’m confident that doctors are going to want to engage in best practices. But I’m also confident patients are going to insist on it…. In some cases, people just don’t know what the best practices are. And certain cultures build up. And we can change those cultures, but it’s going to require some work.

Sawyer: But a lot of people… say…”I’m very concerned that there’s going to be a reduction in treatment someplace in all of this.” And the question is if there is a board that is recommending, that’s one thing. If there is a board that is dictating through cost or through some other instruction, that’s another thing. Will it have the weight of law? Will it have the weight of regulations?

Obama: …I don’t think that there’s anybody who would argue for us continuing to pay for things that don’t make us feel better. That doesn’t make any sense. [Yet] that’s the reason why, in America, we typically pay 50 percent more for our health care than other advanced countries that actually have better health care outcomes.

Still, the President appears not to be entirely in Orszag’s camp. He has repeatedly deflected accusations of a “government takeover of health care” by asserting that no federal bureaucrat will come between the doctor and patient in clinical decision-making. The President has also repeatedly told physicians that reform would sustain them as healers, not make them into bean counters and paper pushers. In an interview on NPR two days before passage of the Senate bill, the President said that changes in how doctors and patients think about health care should come from giving them the “best information possible” and did not invoke the coercive measures favored by Orszag.

How do we reconcile this apparent difference between Sunstein and Orszag? The President contends that sound policies are built on data, but which data? Here the evidence is strongly in favor of Sunstein and his insistence on the need for freedom of choice and retaining the ability to oppose objectionable forms of government intervention. Over the past decade, federal “choice architects”—i.e., doctors and other experts acting for the government and making use of research on comparative effectiveness—have repeatedly identified “best practices,” only to have them shown to be ineffective or even deleterious.

For example, Medicare specified that it was a “best practice” to tightly control blood sugar levels in critically ill patients in intensive care. That measure of quality was not only shown to be wrong but resulted in a higher likelihood of death when compared to measures allowing a more flexible treatment and higher blood sugar. Similarly, government officials directed that normal blood sugar levels should be maintained in ambulatory diabetics with cardiovascular disease. Studies in Canada and the United States showed that this “best practice” was misconceived. There were more deaths when doctors obeyed this rule than when patients received what the government had designated as subpar treatment (in which sugar levels were allowed to vary).

There are many other such failures of allegedly “best” practices. An analysis of Medicare’s recommendations for hip and knee replacement by orthopedic surgeons revealed that conforming to, or deviating from, the “quality metrics”—i.e., the supposedly superior procedure—had no effect on the rate of complications from the operation or on the clinical outcomes of cases treated. A study of patients with congestive heart failure concluded that most of the measures prescribed by federal authorities for “quality” treatment had no major impact on the disorder. In another example, government standards required that patients with renal failure who were on dialysis had to receive statin drugs to prevent stroke and heart attack; a major study published last year disproved the value of this treatment.

Other “quality measures” recommended by the government were carried out in community health centers to improve the condition of patients with asthma, diabetes, and hypertension. The conclusion of subsequent research was that there was, as a result, no change in outcome for any of these three disorders. Finally, Medicare, following the recommendations of an expert panel, specified that all patients with pneumonia must receive antibiotics within four hours of arrival at the emergency room. Many doctors strongly disagreed with such a rigid rule, pointing out that an accurate diagnosis cannot be made so quickly, and the requirement to treat within four hours was not based on convincing evidence. But the government went ahead, and the behavior of physicians was altered by the new default setting—for the worse. Many cases of heart failure or asthma, where the chest X-ray can resemble a pulmonary infection, were wrongly diagnosed as pneumonia; the misdiagnosed patients were given high doses of antibiotics, resulting in some cases of antibiotic-induced colitis. The “quality measure” was ultimately rescinded.3

Cass Sunstein; drawing by John Springs

What may account for the repeated failures of expert panels to identify and validate “best practices”? In large part, the panels made a conceptual error. They did not distinguish between medical practices that can be standardized and not significantly altered by the condition of the individual patient, and those that must be adapted to a particular person. For instance, inserting an intravenous catheter into a blood vessel involves essentially the same set of procedures for everyone in order to assure that the catheter does not cause infection. Here is an example of how studies of comparative effectiveness can readily prove the value of an approach by which “one size fits all.” Moreover, there is no violation of autonomy in adopting “aggressive” measures of this kind to assure patient safety.

But once we depart from such mechanical procedures and impose a single “best practice” on a complex malady, our treatment is too often inadequate. Ironically, the failure of experts to recognize when they overreach can be explained by insights from behavioral economics. I know, because I contributed to a misconceived “best practice.”

My early research involved so-called growth factors: proteins that stimulate the bone marrow to produce blood cells. I participated in the development of erythropoietin, the red cell growth factor, as a treatment for anemic cancer patients. Erythropoietin appeared to reduce the anemia, lessening the frequency of transfusion. With other experts, I performed a “meta-analysis,” i.e., a study bringing together data from multiple clinical trials. We concluded that erythropoietin significantly improved the health of cancer patients and we recommended it to them as their default option. But our analysis and guidelines were wrong. The benefits ultimately were shown to be minor and the risks of treatment sometimes severe, including stroke and heart attack.4

After this failure, I came to realize that I had suffered from a “Pygmalion complex.” I had fallen in love with my own work and analytical skills. In behavioral economics, this is called “overconfidence bias,” by which we overestimate our ability to analyze information, make accurate estimates, and project outcomes. Experts become intoxicated with their past success and fail to be sufficiently self-critical.

A second flaw in formulating “best practices” is also explained by behavioral economics—“confirmation bias.” This is the tendency to discount contradictory data, staying wed to assumptions despite conflicting evidence. Inconsistent findings are rationalized as being “outliers.” There were, indeed, other experts who questioned our anemia analysis, arguing that we had hastily come to a conclusion, neglecting findings that conflicted with our position. Those skeptics were right.5

Yet a third powerful bias identified in behavioral economics can plague expert panels: this is the “focusing illusion,” which occurs when, basing our predictions on a single change in the status quo, we mistakenly forecast dramatic effects on an overall condition. “If only I moved from the Midwest to sunny California, I would be so much happier” is a classic statement of a focusing illusion, proven to be such by studies of people who have actually moved across the country. Another such illusion was the prescription of estrogen as the single remedy to restore feminine youth and prevent heart disease, dementia, and other complications of the complex biology of aging.6 Such claims turned out to be seriously flawed.

There is a growing awareness among researchers, including advocates of quality measures, that past efforts to standardize and broadly mandate “best practices” were scientifically misconceived. Dr. Carolyn Clancy of the Agency for Healthcare Research and Quality, the federal body that establishes quality measures, acknowledged that clinical trials yield averages that often do not reflect the “real world” of individual patients, particularly those with multiple medical conditions. Nor do current findings on best practices take into account changes in an illness as it evolves over time. Tight control of blood sugar may help some diabetics, but not others. Such control may be prudent at one stage of the malady and not at a later stage. For years, the standards for treatment of the disease were blind to this clinical reality.7

Orszag’s mandates not only ignore such conceptual concerns but also raise ethical dilemmas. Should physicians and hospitals receive refunds after they have suffered financial penalties for deviating from mistaken quality measures? Should public apologies be made for incorrect reports from government sources informing the public that certain doctors or hospitals were not providing “quality care” when they actually were? Should a physician who is skeptical about a mandated “best practice” inform the patient of his opinion? To aggressively implement a presumed but still unproven “best practice” is essentially a clinical experiment. Should the patient sign an informed consent document before he receives the treatment? Should every patient who is treated by a questionable “best practice” be told that there are credible experts who disagree with the guideline?

But even when there are no coercive measures, revising or reversing the default option requires a more complicated procedure than the one described by the President at the White House meeting. In November, the United States Preventive Services Task Force, reversing a long-standing guideline, recommended that women between the ages of forty and forty-nine need not have routine mammograms. To arrive at this conclusion, researchers made both a meta-analysis and computer models of data from seven clinical trials. The task force found that routine mammograms result in a 15 percent reduction in the relative risk of death from breast cancer for women in the forty to forty-nine age group, a level of benefit similar to that found in earlier analyses. For women in their forties, this means one life is saved for every 1,904 women screened. For older women in their fifties, one life is saved for every 1,359 women screened.8

If these estimates are correct, then how many lives might be saved in the United States for each age group if every woman received a mammogram? The 2008 US Census estimates the number of women between forty and forty-nine at 22.3 million. So if mammography were available to all these women, nearly 12,000 deaths could potentially be averted over that decade of their lives. As for the 20.5 million women in their fifties, some 15,000 deaths could potentially be averted.
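The arithmetic behind these figures is simple division, and a reader who wants to check it can do so directly. The sketch below uses only the numbers given in the text (the population estimates and the number-needed-to-screen figures); the rounding is mine.

```python
# Back-of-the-envelope check of the task force arithmetic above.
# Population figures and screening ratios are taken from the text.

def deaths_averted(population: int, screened_per_life_saved: int) -> int:
    """Lives potentially saved if the entire population were screened."""
    return round(population / screened_per_life_saved)

forties = deaths_averted(22_300_000, 1_904)   # women aged 40-49
fifties = deaths_averted(20_500_000, 1_359)   # women aged 50-59

print(forties)  # 11712 -- "nearly 12,000" in the text
print(fifties)  # 15085 -- "some 15,000" in the text
```

The computed values (roughly 11,700 and 15,100) match the essay's rounded figures of "nearly 12,000" and "some 15,000."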

What are the risks of mammography for women in their forties? The task force estimated a higher rate of false positive findings in mammograms in women in their forties compared to older women. This translates into increased anxiety when women are told that there may be a cancer and there is not. A false positive reading may also result in a woman having a biopsy. For every case of invasive breast cancer in a young woman diagnosed by mammography, five women with benign findings will have biopsies. In addition, there are potential risks of radiation from the mammogram itself, although no one really knows how significant these are. Then there is an unanswered question in the biology of breast cancer: Which tumors are indolent and which are aggressive? We lack the molecular tools to distinguish between slow- and fast-growing cancers. Some slow-growing ones detected in young women might be treated later in life without any disadvantage in the rate of survival. But aggressive breast cancers in young women are notoriously difficult to treat and frequently result in death. And as with essentially all screening tests in a population, the majority of women receiving mammograms do not have any disorder.

These, roughly, are the statistics and state of the science with regard to breast cancer. How do we weigh the evidence and apply it to individuals and to society at large? Setting the default option that doctors will present to patients requires us to make value judgments. Dr. Otis Brawley of the American Cancer Society, an oncologist who worked for decades at the National Cancer Institute, is well versed in preventive care; he disagrees with the new default setting, based on findings that mammograms save lives. (Brawley also happens to be an African-American and has long been concerned about the meager access among minority and poor groups to potentially lifesaving screenings.)

Dr. Diana Petitti, a professor of bioinformatics at Arizona State University and vice-chair of the task force, appeared with Brawley on November 17, 2009, on the PBS NewsHour. She had no disagreement with him about what the studies show, and emphasized that the task force did not say that women in their forties should not get mammograms, only that they were no longer routinely recommended since the benefit to patients did not clearly outweigh the risks. Cost considerations were not part of the task force’s deliberations.

Other supporters of the new recommendations took a less temperate view. A statistician who developed computer models for the task force told The New York Times that “this decision is a no-brainer.”9 It did not appear to be so clear to Melissa Block of NPR when she interviewed an internist who agreed with the task force. The doctor said that stopping routine mammography for young women would spare them anxiety, distress, and unnecessary biopsies. Block replied, “I’ve heard this before…. When people say, you know, there’s unnecessary anxiety and false positives and fear and worry.” That, she said, is “a very patronizing approach to take toward women’s health…. Women may very well be willing to assume those harms if it means that they may be diagnosed earlier.” The internist replied that each woman should talk with her doctor and figure out what is best.10 Sunstein’s Nudge coauthor, the behavioral economist Richard Thaler, wrote a thoughtful analysis of the pros and cons of mammography in The New York Times and concluded that “one can make a good case that we don’t want the government making these choices” for us.11

Two days after the task force recommendations were released, Health and Human Services Secretary Kathleen Sebelius put some distance between the Obama administration and the task force’s conclusions, saying:

My message to women is simple. Mammograms have always been an important life-saving tool in the fight against breast cancer and they still are today. Keep doing what you have been doing for years….

Dr. Petitti later appeared before Congress to apologize for any “confusion” caused by the task force report. Petitti was not recanting a scientific truth. She correctly described the new recommendations as “qualitative.” That is, they were offered as value judgments that could be modified or revised; and the political process offers one way of doing so. As Sunstein has written, if default options embody standards that many people judge as not better for themselves, those standards can be changed.

Shortly after the new mammography guidelines were announced, an expert panel of obstetricians and gynecologists recommended that teenage girls no longer have routine Pap smears for cervical cancer.12 The incidence of deadly cervical cancer among teens is at most one in a million and screening does not appear to save that one life. When false positive results from screenings are followed by cervical surgery, the risk may be injury that can predispose a young woman to later premature labor. There was no public uproar following this change in the default setting. It was consistent with how most people value the benefit of lives saved versus risks incurred. This is the reality of “comparative effectiveness” research. It is not simply a matter of “what works and what doesn’t.” Nor will patients always “insist” on being treated according to what experts define as “best practice.” They should be aware that there are numerous companies, some of them “not for profit,” issuing standards for treatment that are congenial to the insurance industry but are often open to the kinds of counterevidence I have described here.

What of the President’s statement that doctors will want to engage in federally approved “best practices”? The American College of Physicians, composed of internists, agreed with the task force conclusions about mammography. The American Society of Clinical Oncology, representing oncologists, did not. I am a member of both professional organizations. What do I do? As a physician who has cared for numerous young women with breast cancer, many dying an untimely death, my bias was that the dangers of mammograms do not outweigh the reduction in mortality. Notably, the oncologists who head the breast cancer programs at Minnesota’s Mayo Clinic and Utah’s Intermountain Healthcare—described by President Obama as pinnacles of quality care using guidelines—also disagreed with the task force.

Such challenges to “best practice” do not imply that doctors should stand alone against received opinion. Most physicians seek data and views on treatments from peers and, as needed, specialists, and then present information and opinion to patients who ultimately decide.

While costs were not part of the task force’s calculations, they prominently entered the national debate over its recommendations. Dr. Robert Truog of Boston Children’s Hospital allowed that mammography saves lives, but asked if it is “cost effective.”13 That is, should policy planners set a price on saving those young women?

Cost-effectiveness is going to be a hard sell to the American public, not only because of the great value placed on each life in the Judeo-Christian tradition, but because the federal government has devoted many hundreds of billions of dollars to bail out Wall Street. To perform mammograms for all American women in their forties costs some $3 billion a year, a pittance compared to the money put into the bank rescue. The Wall Street debacle also made many Americans suspicious of “quants,” the math whizzes who developed computer models that in theory accurately assessed value in complex monetary instruments but in fact nearly brought down the worldwide financial system. When a medical statistician says that imposing a limit on mammography is a “no-brainer,” people may recall George Tenet’s claim that the case for invading Iraq was a “slam-dunk.”

At the White House gathering, the President portrayed comparative effectiveness as equivalent to cost- effectiveness, noting that other countries spend half of what we do by only paying for “what works.” This contention is not supported by evidence. Theodore Marmor, a professor of health care policy at Yale, writes in Fads, Fallacies and Foolishness in Medical Care Management and Policy that movements for “quality improvement” in Britain have failed to reduce expenditures.14 Marmor, with Jonathan Oberlander, a professor at the University of North Carolina, has written in these pages that the President has offered up rosy scenarios to avoid the harsh truth that there is no “painless cost control.”15 Lower spending in countries like France and Germany is accounted for not by comparative effectiveness studies but by lower costs of treatment attained through their systems of medical care and by reduced medical budgets. In Europe, prescription drugs cost between 50 and 60 percent of what they do in the US, and doctors’ salaries are lower. (Insurance premiums also are tightly constrained.) France and Germany have good records in health care, but in Great Britain, where costs are strictly controlled by the National Health Service, with rationing of expensive treatments, outcomes for many cancers are among the worst in Europe.16

The care of patients is complex, and choices about treatments involve difficult tradeoffs. That the uncertainties can be erased by mandates from experts is a misconceived panacea, a “focusing illusion.” If a bill passes, Cass Sunstein will be central in drawing up the regulations that carry out its principles. Let’s hope his thinking prevails.

On June 16, 2008, at the Health Reform Summit of the Senate Finance Committee, Orszag explicitly invoked behavioral economics to explain some of the deficiencies in American health care and as the basis for legislative interventions that would remedy rapidly escalating costs and gaps in quality.

On August 7, 2008, addressing the Retirement Research Consortium in Washington, D.C., Orszag presented "Behavioral Economics: Lessons from Retirement Research for Health Care and Beyond." Here, he states the likely need for aggressive measures. The Senate Finance Committee, under Max Baucus, was widely reported to have worked closely with the White House, and many of Orszag's proposals are prominent in the bill that Majority Leader Harry Reid brought to the floor. See Senate Bill HR 3590, Title III—Improving the Quality and Efficiency of Health Care.

The House rejected many of the ideas from the President's advisers in favor of safeguards on patient–physician autonomy, causing Rahm Emanuel, the White House chief of staff, to quip that politics trumps "ideal" plans made in the shade of the "Aspen Institute." See Sheryl Gay Stolberg, "Democrats Raise Alarms over Health Bill Costs," The New York Times, November 9, 2009. Explicit language in the House bill is intended to safeguard patient–physician autonomy. See House Bill HR 3962, Title IV—Quality; Subtitle A—Comparative Effectiveness Research.↩

3

These results, respectively, come from the NICE-SUGAR Study Investigators, "Intensive versus Conventional Glucose Control in Critically Ill Patients," The New England Journal of Medicine, March 26, 2009; Silvio E. Inzucchi and Mark D. Siegel, "Glucose Control in the ICU—How Tight Is Too Tight?," The New England Journal of Medicine, March 26, 2009; the Action to Control Cardiovascular Risk in Diabetes Study Group, "Effects of Intensive Glucose Lowering in Type 2 Diabetes," The New England Journal of Medicine, June 12, 2008; the ADVANCE Collaborative Group, "Intensive Blood Glucose Control and Vascular Outcomes in Patients with Type 2 Diabetes," The New England Journal of Medicine, June 12, 2008; Robert G. Dluhy and Graham T. McMahon, "Intensive Glycemic Control in the ACCORD and ADVANCE Trials," The New England Journal of Medicine, June 12, 2008; Gregg C. Fonarow et al., "Association Between Performance Measures and Clinical Outcomes for Patients Hospitalized with Heart Failure," The Journal of the American Medical Association, January 3, 2007; Bengt C. Fellström et al., for the AURORA Study Group, "Rosuvastatin and Cardiovascular Events in Patients Undergoing Hemodialysis," The New England Journal of Medicine, April 2, 2009; Bruce E. Landon et al., "Improving the Management of Chronic Disease at Community Health Centers," The New England Journal of Medicine, March 1, 2007; Rodney A. Hayward, "Performance Measurement in Search of a Path," The New England Journal of Medicine, March 1, 2007; Robert M. Wachter et al., "Public Reporting of Antibiotic Timing in Patients with Pneumonia: Lessons from a Flawed Performance Measure," Annals of Internal Medicine, July 1, 2008.↩

4

The clinical development of other growth factors, like G-CSF for a low white blood cell count, fared better. G-CSF is a valuable treatment for many cancer patients, but, of course, not all.↩

5

Contradictory evidence reverses "best practices" so frequently that within one year 15 percent of them have to be changed; within two years, 23 percent are reversed; and at 5.5 years, half are incorrect. See Kaveh G. Shojania et al., "How Quickly Do Systematic Reviews Go Out of Date? A Survival Analysis," Annals of Internal Medicine, August 21, 2007.↩

6

Focusing illusions are wonderfully illuminated by Daniel Gilbert, Stumbling on Happiness (Knopf, 2006). Also see the role of marketing in fostering the illusion: Natasha Singer and Duff Wilson, "Menopause, as Brought to You by Big Pharma," The New York Times, December 13, 2009. See also David A. Schkade and Daniel Kahneman, "Does Living in California Make People Happy? A Focusing Illusion in Judgments of Life Satisfaction," Psychological Science, September 1998.↩

7

Dr. Clancy seeks new statistical methods to analyze heterogeneous groups of "real world" patients, so treatment guidelines become "personalized," delivering "the right treatment to the right patient at the right time." (See Patrick H. Conway and Carolyn Clancy, "Comparative-Effectiveness Research —Implications of the Federal Coordinating Council's Report," The New England Journal of Medicine, July 23, 2009; Harold C. Sox and Sheldon Greenfield, "Comparative Effectiveness Research: A Report From the Institute of Medicine," Annals of Internal Medicine, August 4, 2009.) This is a laudable goal and deeply attractive. It is more likely to come from basic science that classifies patients based on their genetic characteristics rather than statistics. Past attempts at observing groups of "real world" patients have often generated conclusions that were flawed, mistaking correlation for causation. A valiant attempt to apply research on comparative effectiveness to prostate cancer treatment options came up against similar hurdles. See Jenny Marder, "A User's Guide to Cancer Treatment," Science, November 27, 2009.↩

Gina Kolata, "In Reversal, Panel Urges Mammograms at 50, not 40," The New York Times, November 17, 2009. A detailed summation of the controversy is found in The Cancer Letter, November 20 and December 4, 2009.↩

On June 16, 2008, at the Health Reform Summit of the Senate Finance Committee, Orszag explicitly invoked behavioral economics both to explain some of the deficiencies in American health care and to justify legislative interventions that would remedy rapidly escalating costs and gaps in quality.

On August 7, 2008, addressing the Retirement Research Consortium in Washington, D.C., Orszag presented “Behavioral Economics: Lessons from Retirement Research for Health Care and Beyond.” There he argued that aggressive measures would likely be needed. The Senate Finance Committee, under Max Baucus, was widely reported to have worked closely with the White House, and many of Orszag’s proposals are prominent in the bill that Majority Leader Harry Reid brought to the floor. See Senate Bill HR 3590, Title III—Improving the Quality and Efficiency of Health Care.

The House rejected many of the ideas from the President’s advisers in favor of safeguards on patient–physician autonomy, causing Rahm Emanuel, the White House chief of staff, to quip that politics trumps “ideal” plans made in the shade of the “Aspen Institute.” See Sheryl Gay Stolberg, “Democrats Raise Alarms over Health Bill Costs,” The New York Times, November 9, 2009. Explicit language in the House bill is intended to safeguard patient–physician autonomy. See House Bill HR 3962, Title IV—Quality; Subtitle A—Comparative Effectiveness Research.↩

3

These results, respectively, come from the NICE-SUGAR Study Investigators, “Intensive versus Conventional Glucose Control in Critically Ill Patients,” The New England Journal of Medicine, March 26, 2009; Silvio E. Inzucchi and Mark D. Siegel, “Glucose Control in the ICU—How Tight Is Too Tight?,” The New England Journal of Medicine, March 26, 2009; the Action to Control Cardiovascular Risk in Diabetes Study Group, “Effects of Intensive Glucose Lowering in Type 2 Diabetes,” The New England Journal of Medicine, June 12, 2008; the ADVANCE Collaborative Group, “Intensive Blood Glucose Control and Vascular Outcomes in Patients with Type 2 Diabetes,” The New England Journal of Medicine, June 12, 2008; Robert G. Dluhy and Graham T. McMahon, “Intensive Glycemic Control in the ACCORD and ADVANCE Trials,” The New England Journal of Medicine, June 12, 2008; Gregg C. Fonarow et al., “Association Between Performance Measures and Clinical Outcomes for Patients Hospitalized with Heart Failure,” The Journal of the American Medical Association, January 3, 2007; Bengt C. Fellström et al., for the AURORA Study Group, “Rosuvastatin and Cardiovascular Events in Patients Undergoing Hemodialysis,” The New England Journal of Medicine, April 2, 2009; Bruce E. Landon et al., “Improving the Management of Chronic Disease at Community Health Centers,” The New England Journal of Medicine, March 1, 2007; Rodney A. Hayward, “Performance Measurement in Search of a Path,” The New England Journal of Medicine, March 1, 2007; Robert M. Wachter et al., “Public Reporting of Antibiotic Timing in Patients with Pneumonia: Lessons from a Flawed Performance Measure,” Annals of Internal Medicine, July 1, 2008.↩

4

The clinical development of other growth factors, like G-CSF for a low white blood cell count, fared better. G-CSF is a valuable treatment for many cancer patients, but, of course, not all.↩

5

Contradictory evidence reverses “best practices” so frequently that 15 percent must be changed within one year, 23 percent within two years, and half by 5.5 years. See Kaveh G. Shojania et al., “How Quickly Do Systematic Reviews Go Out of Date? A Survival Analysis,” Annals of Internal Medicine, August 21, 2007.↩

6

Focusing illusions are wonderfully illuminated by Daniel Gilbert, Stumbling on Happiness (Knopf, 2006). Also see the role of marketing in fostering the illusion: Natasha Singer and Duff Wilson, “Menopause, as Brought to You by Big Pharma,” The New York Times, December 13, 2009. See also David A. Schkade and Daniel Kahneman, “Does Living in California Make People Happy? A Focusing Illusion in Judgments of Life Satisfaction,” Psychological Science, September 1998.↩

7

Dr. Clancy seeks new statistical methods to analyze heterogeneous groups of “real world” patients, so that treatment guidelines become “personalized,” delivering “the right treatment to the right patient at the right time.” (See Patrick H. Conway and Carolyn Clancy, “Comparative-Effectiveness Research—Implications of the Federal Coordinating Council’s Report,” The New England Journal of Medicine, July 23, 2009; Harold C. Sox and Sheldon Greenfield, “Comparative Effectiveness Research: A Report From the Institute of Medicine,” Annals of Internal Medicine, August 4, 2009.) This is a laudable and deeply attractive goal, but it is more likely to be reached through basic science that classifies patients by their genetic characteristics than through statistics. Past attempts at observing groups of “real world” patients have often generated flawed conclusions, mistaking correlation for causation. A valiant attempt to apply research on comparative effectiveness to prostate cancer treatment options came up against similar hurdles. See Jenny Marder, “A User’s Guide to Cancer Treatment,” Science, November 27, 2009.↩

Gina Kolata, “In Reversal, Panel Urges Mammograms at 50, Not 40,” The New York Times, November 17, 2009. A detailed summation of the controversy is found in The Cancer Letter, November 20 and December 4, 2009.↩