A trio of Perspective essays in the October 17, 2013 issue of the New England Journal of Medicine compels me to provide my own perspective on how these three issues—the emergence of high-deductible health plans, the toxic side effects of out-of-pocket costs, and the “Thousand-Dollar Pap Smear” i.e. the ridiculous pricing of many laboratory and other technical services—intersect at the critical moment when doctors discuss diagnostic and treatment plans with patients.

For some 20 years, I cared for patients in two very different settings: metropolitan Minneapolis, where most patients had first or near-first-dollar coverage, and rural Marshall MN, where many of my patients had what they called “farmer’s insurance,” which typically had a $5,000 deductible, but which protected them from losing the family farm if they had a catastrophic illness. Early in my Marshall experience, I would recommend lab tests, imaging studies, and medications just as I did in Minneapolis—only to be confronted with questions I never heard in the big city: “Doc, how much are these pills going to cost?” and “Doc, is this test really important? I’ve heard that MRIs cost over $1,000.” And the most important question of all, “Doc, is there something that would work about as well, but cost less?” In Marshall, I rapidly learned that one of the key roles of physicians is to advise patients on the value of various options in diagnosis and treatment.

But when I adopted these same approaches with patients who had first-dollar coverage in Minneapolis, the response was “Doc, I’m not paying for this, my employer is. I don’t care what it costs. Are you trying to skimp on my care?”

The best rule for any business is to “Give the customer what she wants…and is willing to pay for.” I predict that high-deductible plans will cause a decrease in the price of costly services, simply because patients will not be willing to pay for them. Perhaps it is already happening. Two weeks ago, I drove past a billboard in Oregon with a picture of a smiling radiologist and the announcement: “High deductible? No insurance? No problem! MRI--$495.”

As for the risk that patients will deny themselves necessary services, it seems to me that the most vulnerable patients (Medicaid) currently have the most generous benefits of any insurer, public or private, in most states. Medicaid patients are not commonly faced with the high copay/deductible questions that my farmer’s families raised. For the wealthy, on the other hand, a certain amount of cost sharing might provide a good brake to counter the supply- and technology-driven cost accelerator that plagues our system. And for those with modest incomes, high-deductible health plans protect them from bankruptcy in case of serious illness, (isn’t that the primary purpose of “insurance?”) but cause them to engage their physicians in serious conversations on the question “Doc, is this worth it?” In my experience, that is a very important conversation, and a wonderful opportunity for the physician to become a trusted advisor on the value of services, medications, and other interventions.

Eleven years ago when I founded The Reinertsen Group, I added the tagline “Creating Organizational Environments in Which Quality Can Thrive.” That has proved to be a reasonably durable statement, bothof our goal, and of our strategy. It seems more relevant than ever for health care leaders today, as they prepare to take their organizations fully into the era of value-driven, accountable care.It’s also especially pertinent to something that Bryan Sexton taught me this year, one of my top lessons from 2012 (see below).

While our tagline has been constant, other things have changed. This was my final year as a Senior Fellow with IHI. It has been an extraordinary privilege to work with IHI’s world-wide family of leaders and faculty, and I am grateful to Don Berwick and Maureen Bisognano for their invitation in 2001 to join their team. I have learned an enormous amount from working with the IHI team on projects, white papers, and programs such as Pursuing Perfection, the Executive Quality Academy, the 100K and 5M Lives Campaigns, the Seven Leadership Leverage Points, Boards on Board, Engaging Physicians in a Shared Quality Agenda, and the annual CEO Summit at the National Forum.

Another change I have noticed over some years is my steadily increasing emphasis on two dimensions of quality: safety and efficiency (cost). Other quality attributes such as effectiveness, timeliness, and patient-centeredness obviously can’t be cleanly dissected away from safety and cost, but these two stand out, and seem to take precedence in most of my interactions with clients. In the case of safety, I often describe it as a moral imperative, rather than one of several strategic options. Safety seems to be on a more fundamental plane than, say, service quality. As I often tell clients, “The patients whom you injurewill notcare whether or not you had valet parking.”

Cost has emerged as the other key dimension of quality because of our national fiscal crisis. The US has no choice: we must reduce our rate of growth of health care costs. As Victor Fuchs puts it: “If we solve our health care spending, practically all of our fiscal problems go away.If we don’t, then almost anything else we do will not solve our fiscal problems.” Cost control—not better access, not insurance reform, but real reduction in spending—will likely dominate health care policy for the next 10 years. The key for health care delivery leaders will be to maintain or improve other quality dimensions—such as safety—while simultaneously reducing costs.

This year I facilitated a study tour for US leaders that focused on three countries: Denmark, Sweden, and Scotland. The question was “Why have these nations been able to control health care spending as a percent of GDP far better than the rest of us?” Some answers from the field (reinforced by the insights of Derek Feeley, NHS Scotland’s remarkable Chief Executive) would include:

1. Defined Budgets: An obvious feature of these nations is that there is a defined budget for each year for the health care system. In Denmark, this national budget is allocated to the regions, who in turn allocate it across the health and social care organizations, who in turn allocate it by department and function. While there is always the possibility of overspending the budget, there is also serious pressure not to do so. As Derek Feeley says, “Sure, I can overspend my budget. Once.” This system is familiar to staff and group model HMOs, but otherwise entirely foreign to the US business model for doctors and hospitals—until the advent of ACOs.

2. Willingness to Openly Face Cost as a Factor in Deciding What is Covered: The Danes make no apology for using guidelines and other instruments to prioritize and ration the care that is covered under the public system. An example is their recent decision to raise the BMI threshold for coverage of bariatric surgery, accompanied by a public conversation about the cost/quality tradeoffs inherent in this decision. Similarly, Jonkoping’s county council regularly debates “make or buy” decisions based on tradeoffs between convenience and cost. The English NHS’s NICE is akin to the guidelines groups in Denmark. This cultural feature, translated into guidelines commissions, evidence-based medicine councils, and other decision-making bodies, is a deep and powerful factor in cost control.

3. Local Control: Many decisions are made at the level of the counties and regions, (500,000 to 1.5 million people, typically) rather than at the whole nation level. And even the national scale in the case of Scotland and Denmark is only 5 million citizens. This allows for a healthy conversation that links the public’s desire for more and better services to their responsibility for paying for those services. In other words, the public is asked “do you want it badly enough to pay for it?”

4. Capacity Control and Facility Consolidation: These countries seem to be capable of making and executing difficult decisions such as centralization of tertiary and quaternary services, closing duplicative hospitals, and other politically chargedsteps necessary to rationalize capacity, technology, etc. They actually have more doctors per capita than the US, but provide neither unlimited playgrounds nor perverse incentives (see below) for these doctors.

5. Administrative Simplicity: Relative to the US, these single payer systems are typically not burdened with the administrative complexity and costs inherent in the US insurance system. Enough said.

6. Aligned Physician Incentives: Doctors in Sweden and Denmark, whether in private practices or employed in the public systems, are paid in a variety of models, but the predominant features are per capita payments (e.g. GPs in Sweden) and straight salary (most specialists.) There are some FFS elements especially in the private systems, but this is a relatively minor aspect of the overall incentive landscape. This mix of incentives (predominantly capitation and salary) combined with fixed work weeks around 40 or 42 hours, leads to some productivity issues, but effectively eliminates incentive-driven overuse overuse of lab tests, specialty consultations, imaging, or costly procedures. (Note: hospitals have no incentive to drive up volume either, under the budget system!)

7. Lower Prices: On a unit of service basis, primary care and specialty care doctors in Europe are paid less than in the US—by 15-40%. Which raises the question, “Can we afford to keep extending ‘the doc fix?’ “

8. Private Competition: in both Sweden and Denmark the public system is under pressure to provide good access. If patients can’t get into the public system quickly enough, they are allowed to access the private system (at public expense.)

9. Coordination and cooperation between health care delivery, social services, and public health. To a much greater extent than in the US, health and social services are working together, often under the direction of the same governing bodies and budgets. This is especially important in chronic disease, end of life care, and preventing admissions and readmissions.

10. Educated, health-conscious populations: Both Sweden and Denmark have invested greatly in public education, and encourage activity in their design of public transportation, buildings, etc. Obesity rates are low. Note: they don’t consider “screening” to be prevention, and typically have lower rates of screening for cancer and other conditions than the US!!

11. Innovation, Quality and Safety Improvement: Scotland provides a particularly powerful example, but all 3 of these nations have invested significant resources in quality improvement—particularly, in reducing rates of potentially avoidable complications, and removing waste from work. It’s not uniform, but QI seems to be a much more regular feature of “daily work” for doctors and nurses than it is in the US. In Sweden in particular, all staff members have two jobs: doing their work, and improving their work. It was refreshing to observe the extent of local initiative and creativity in improvement at the front lines. Many of the best innovations appear to be “wildflowers” that were allowed to grow and thrive without any obvious mandate from above.

12. Patient Self-Care: There appears to be a widespread encouragement of empowerment and ownership by patients and families, with some astonishing examples such as self-hemodialysis (now becoming the norm in many parts of Scandinavia!). This is the “IKEA Model” of care, in which you get to do your care for yourself, and it appears to lead to high levels of satisfaction, lower costs, and outstanding outcomes.

If the US’ primary policy focus is going to be cost control, we might consider this list of 12 factors, and ask two questions:

• Where are any of these factors in the Affordable Care Act? (A: Aside from investment in innovation, private competition, and reduction in rates of payment, there is little in the ACA that can match the power of real budgets, capacity control, incentives that at least don’t encourage overuse, and willingness to have tough conversations with the public about costs, rationing, and other taboo subjects. Much of all this “hard stuff” is supposed to happen magically within ACOs. Really?)

• Why are we in the US so enthralled by the private insurance model? Aside from a brief and painful period of “managed care” in the 1990’s, private insurers have never done anything significant to alter the cost or quality trajectory of US healthcare, but they have added enormous amounts of administrative waste and frustration. If Aetna, United, etc. add no value to the system, why did we preserve and expand their business model in the ACA? (A: I have none. The private insurance emperor has no clothes, as I see it.)

Bryan Sexton and others have made us all aware that hospitals don’t have one culture of safety—they have dozens of microcultures of safety, unit by unit. One of my most important insights from 2012 comes from what Bryan can teach leaders about how to deal with poorly functioning units. Most hospitals have one or more units that fit this description: staff are burned out, safety outcomes are poor, and new initiatives such as checklists simply don’t gain any traction. Staff in these units report that they don’t take time to eat properly, don’t sleep enough, work through breaks, and even avoid drinking fluids so that they don’t have to take time to urinate.These units are simply incapable of dealing with any changes, including new safety initiatives. The key insight for leaders? If you want to improve safety outcomes in these units, you must first make sure that the basic needs of staff are being met, by addressing the causes of poor resilience in the unit. Often the root cause can be traced to the quality of local leadership, but other issues such as proper staffing, protected time for breaks, and distrust of distant management must also be dealt with. Some of you recognize this phenomenon from Maslow’s Hierarchy, which I’ve always summarized this way: If you’re gasping for air and panting for water, it’s hard to sing opera.

So, take a good look at your culture of safety and other staff surveys. Units with low scores need a fundamentally different leadership approach to implementing change, compared to units with good scores.

In mid-2012 I sat at a dinner table with a half-dozen practicing physicians who are also quality leaders in their organizations. I asked them if the EHR, as currently being implemented in their setting, was making their care (in their own personal practices) better, or worse. Every single one said “Worse.” When asked “Why?” their most poignant examples dealt with the loss of meaningful clinical narrative in the blither of cut-and-paste documentation, They also complained of the substitution of screen time with computers for “touch time” with patients. And they all had examples of medication and diagnostic errors that seem to have been caused, rather than prevented, by the EHR.

Which brings me to the most disturbing document I’ve read this year: Health IT and Patient Safety, the IOM report released in November 2012. In this report, the nation’s top health IT experts were asked the same question that I asked the practicing doctors: “Is the EHR making care more safe, or less safe?” The experts’ answer appears to be “Uh, we’re not sure, but we’re worried.” That’s not very reassuring, especially since safety has been a primary target of EHR implementations since their early development.

The IOM report describes tremendous variation from one institution to another in how the EHRs of the same vendor have actually been implemented, with dramatically different impacts on safety. Furthermore, in conversations with David Classen and other notable experts, I’ve also learned about some rather alarming results of field tests of whether EHRs would “catch” incorrect medication orders. No one’s EHR caught more than 80% of the test examples, and many caught only 20% of them. Furthermore, no one vendor was any better than any other, and the range of 20%-80% was seen across various institutions that were using the same vendor! And this sort of thing—drug/drug interactions, dose adjustments for patients with renal failure, drug allergies….was supposed to be the strength of the EHR!!

I think all health care leaders should read the report, and then engage in a serious conversation with their nurses, doctors, pharmacists, vendors, and IT specialists to ask themselves whether their own EHR is making care better or worse. Far too much money is being spent hurtling toward “meaningful use” if it isn’t going to achieve meaningful improvements in safety.

The most delightful paper of the year award has to go to Franz Messerli’s masterpiece in the New England Journal of Medicine, Chocolate Consumption, Cognitive Function, and Nobel Laureates. The key table from the paper shows the relationship between chocolate consumption and the likelihood of winning a Nobel Prize, by country.

The correlation between chocolate intake and Nobels is striking. There are only three possible explanations:
1. Brilliant people eat more chocolate.
2. Eating chocolate makes you brilliant.
3. Eating chocolate and being brilliant are both related to some third, as-yet-unknown factor.

Beyond the correlation, it’s noteworthy that Sweden has enjoyed more Nobel Laureates than its chocolate consumption would warrant, which raises obvious questions of Stockholm-based bias by the Swedish Nobel committee. I have had some serious conversations with my Swedish friends about this problem.

But the main point of the paper is that you now have a perfect justification for having indulged in holiday chocolates for the past several weeks.
And who said medical journals are useless?

During the past 2 or 3 years, a number of hospitals and other health care systems have made signficant, measured improvements in quality and safety. In scores of ICUs, ventilator pneumonias are now rare. MRSA transmissions and other nosocomial infection rates have been cut dramatically. Mortality rates, both raw and risk-adjusted, have decreased by as much as 25% or more. And these improvements in safety and quality have not been an accident. They have been the result of focused attention, fresh ideas, and effective, engaged leadership. Also...they have been achieved during very good financial times, at least for hospitals.

The question for 2009 is: will these improvements sustain through the budget cutbacks that almost all health care systems are now experiencing? Or, as hospitals cut nurse staffing and slash budgets for education, travel, and quality infrastructure, will we see safety levels start to decline, nosocomial infection rates creep back upward, and hospital deaths start to climb? I don't know about you, but I think we're facing a major safety challenge, right now. And I think there are several practical steps every health care system might take to hold the ground that's been gained on quality and safety, despite the current financial crisis. Here are three ideas.

1. Keep the Board's attention on safety: Now, more than ever, it's critical to keep your Quality and Safety report (with measures of your rates of harm, infections, deaths...etc.) first on the Board agenda, not at the end of it. If the rates start to slip, the Board will start asking hard questions, and that's a good thing. Question: where is the Quality Report place on YOUR board's agenda?

2. Talk about it: I've been in far too many senior leadership meetings during the last 2 or 3 months during which staffing and other budget cuts were proposed and approved, without a single voice asking the question: "How can we do this, safely?" I don't want to be the skunk at the CFOs' picnic, but Linda Aiken's work clearly tells us that if all we do is reduce nurse staffing, we will reduce safety levels, and mortality rates will increase. We simply must talk about this issue, and find ways to take waste out of our nurses' and other professionals' work, if we are ever to reduce staffing costs SAFELY. Question: Have you been in a cost-reduction meeting recently? Has anyone raised the question of safety? If not, why not?

3. Go transparent with our measures of safety: A small number of brave organizations such as the Beth Israel Deaconess in Boston publicly display specific measures of "preventable harm" (www.bidmc.harvard.edu). I would bet on those organizations' ability to stay the safety course during a financial crisis. Their commitment is too public, and too important now to too many stakeholders, to be permitted to backslide. On the other hand, if a hospital keeps its safety measures hidden from view, who would notice if the measures started to slip? Question: Has your organization made this sort of public commitment, with highly visible data on measures such as infection and complication rates? If not, why not?

I'd like to hear your answers to these sorts of questions. I'd also like to hear your ideas for how to sustain the hard-won gains you've made in safety over the past few years.

Has it ever struck you that most of what we work on in quality assumes that the diagnosis is correct--diabetes, colon cancer, mycoplasma pneumonia, rheumatoid arthritis....etc.--and that our principal quality challenge is simply to deliver the right evidence-based care, safely, to the person who has that condition? Have you ever wondered about the quality of the process that resulted in reaching the diagnosis itself?

Here's an illustration of what I mean. If two doctors started with identical patients, at the very same stage in the evolution of the very same disease, and one doctor came to the correct diagnosis over 1 month, in two visits, after $150 of lab tests, and the other doctor took 6 months to reach the same diagnosis, despite 4 referrals to various other doctors, $2500 in lab and imaging studies, and two costly, invasive and dangerous procedures, wouldn't you say that the quality of the diagnostic process was better in the first instance?

In a recent article (JAMA 2008;299:338-340) Eric Holmboe, Rebecca Lipner and Ann Greiner of the American Board of Internal Medicine surface this question of "diagnostic quality" as they consider whether physician knowledge and clinical judgment have an impact on quality. I have argued for years that being Board certified in whatever specialty has relatively little impact on the likelihood that a doctor will reliably execute an evidence-based treatment plan for a common condition. That aspect of quality (delivering the treatment plan reliably for a given diagnosis) is far more dependent on the systems and teams with which that doctor works.

But I have also felt strongly that being Board certified (in the case of the ABIM, passing a fairly stringent test of clinical knowledge and judgment) would make a difference in how quickly, efficiently, and safely a doctor arrived at the right diagnosis, especially for less common conditions, or for unusual presentations of common diagnoses. It is surprising to me that so little work has been done to understand variation in the quality of the diagnostic process, and to assess what factors predict efficient, accurate diagnosis. Here are some of the questions I would like to ask, just to understand the variation, for starters: For patients with the same eventual diagnosis...

How many visits did it take to reach the correct diagnosis?

How many referrals to different specialists?

How many laboratory tests?

How many imaging studies?

How many procedures? (biopsies, endoscopic examinations...)

How long did the whole process take? (this is the "sleepless nights" question)

Once we had some sort of picture of variation, we could begin to study factors that predict the quality of diagnosis, and perhaps even begin to test changes that would improve the quality of the diagnostic process. This arena (the quality of diagnosis) might well be something of a "next frontier" for quality work, and I'm very interested in hearing from anyone who is working on this problem.

Patients and families are increasingly being invited into places and conversations that have historically been off limits to them. Many leading-edge organizations now invite patients to sit on improvement teams, include families on rounds, and seat patients on hospital committees. But one hospital has gone where no other hospital has gone before, at least with patient empowerment. St. Joseph’s PeaceHealth in Bellingham Washington now has a patient as a full member of the Medical Executive Committee of the organized medical staff!

According to Marla Sanger, VP, Quality and Process Improvement, the MEC at St. Joseph’s decided to try this out over a year ago, and asked a patient to sit in on the MEC meetings, but to excuse herself whenever the MEC needed to perform a peer review or some other sensitive function. After a few months, the physicians on the MEC started forgetting to ask the patient to leave. And after a year, they made the “Patient Representative” position a permanent feature of the MEC!

The report from Marla is that the presence of the patient has changed the conversation at the MEC. Topics that might have been prominent on the agenda in the past (such as interdepartmental squabbles about privileges, or perhaps being paid for call) just don’t seem to come up as often, and instead, the MEC focuses squarely on its primary function: what needs to be done to improve quality and safety for patients.

Has anyone else done this? What has been the experience? I’ve been asking around and have found no other examples, so I’m curious to know whether you’re aware of others who have placed a patient on the MEC. From my perspective, it’s the most dramatic example yet of “putting the patient in the room.”

Most “Quality Dashboards” contain data on rates of hospital-acquired infections, adverse drug events, falls, and other harm events e.g. “central line infections per 1000 line hours” or “falls per 1000 bed days .” Typically, these rates are shown alongside some sort of benchmark rate for that indicator, usually established by analyzing the rates for comparable hospitals, and then displayed as the 50th, 75th, or 90th percentile. It’s not uncommon for the dashboard to display any rate better than the 50th or 75th percentile as “Green.” Expressing data as rates, with benchmarks, allows the quality staff and executive team to answer a question commonly asked by Boards: “How are we doing compared to other hospitals like ours?” Knowing how you’re doing compared to other hospitals isn’t a bad thing.

But some innovative hospitals have started to ask a different set of questions, and to use a different sort of performance indicator to answer those questions. Instead of asking “How are we doing compared to the competition?” they’re asking “How are we doing compared to the theoretical ideal?” (The theoretical ideal is often either 100% or zero).And to track the answer to that question, they’re eliminating the denominator. (For example, they are simply tracking “total number of central line infections each month” and “total # of falls each month.”)

There are five reasons why eliminating the denominators is a good idea. 1. Neither your basic patient population nor your types of service change that dramatically from month to month, (with some notable exceptions for seasonal conditions such as allergies, and for institutions with large seasonal influxes of “snowbirds.”) So a raw count of the number of people who fall in your hospital, or get infected, or have adverse drug events, is a fairly accurate indicator of the burden of harm over time. 2. Any time we make a measurement more complex (e.g by making it a ratio between two measurements) we add measurement error. How accurately are we measuring things like “ventilator days?” 3. If a measurement is not adding value (many denominators fall into this category) they’re simply adding measurement waste. Somebody has to keep track of “line hours.” Is this value-added activity, or not? 4. In order to get benchmarks, deciles and other indicators of comparative performance, we usually sent off our denominator-based measurements to some national or regional data compiler (e.g. Premier, VHA, State Hospital Association…) so that we can get them to send us back our %tile ranking and position. This inevitably introduces delay. How old are the data you show your Board? Six months? Nine months? This isn’t a timely way to oversee and steer improvement. 5. Finally, and most important, many of these denominator-based measurements lull hospital leaders into complacency, in two ways. First, the ratios make the data fairly abstract e.g. “4.9 infections/1000 line hours.” Compare this to what that abstract really means: “14 people doubled their risk of dying in our care last month, because of a line infection that we gave them.” If we want our Board members to understand our data, and to oversee its improvement with urgency, they need to understand it viscerally. Eliminating the denominators helps. 
The second way in which denominators cause complacency is when leaders look at their dashboards and say, “Hey, we must be pretty good. All our indicators are Green.” To which I say, “And what, exactly, does it mean to be Green?” Being better than the 50th percentile for hospital-acquired infections, in a health care system where 200,000 people incur serious harm every year from these infections, is not “Green.”

So what do I recommend? Try eliminating the denominator, for many of your performance indicators. Track the number of patients who are harmed, or receive the care they should receive , every month, against the theoretical ideal…either 100% or zero. Your data will be more accurate, more timely, and more viscerally meaningful. And that will give you a jumpstart on improvement.

Note: from time to time, you might still have to answer the question “But how are we doing compared to others?” For this you will need denominators. But if you’ve been working with the theoretical ideal in mind, you just might find something interesting when you check your performance against the competition: you’ve blown right past the benchmark!

The IHI 100K Lives Campaign brought an unprecedented level of attention and focus to getting measured results in hospital quality and safety—specifically, 3,100+ hospitals working on 6 measures to avoid 100,000 deaths over 18 months. And the results appear to be stunning—approximately 123,000 people who would have been expected to die in the 18 months between January 1 2005 and June 14 2006, if the risk-adjusted hospital death rates that prevailed in 2004 had simply continued forward, did not die during their hospitalization during the Campaign. The confidence intervals on this estimate appear to be something like +/- 20,000.

Those of us who served as "field workers" in hospitals throughout the country during the Campaign know that this work has only just begun. For many of the measures, in many Campaign hospitals, implementation is nowhere near completion. Most observers expect significant additional impact on risk-adjusted hospital mortality rates, once the six measures are fully deployed. It appears that the 100K Lives Campaign is bringing about a seismic, positive shift in the quality and safety of care in US hospitals.

Or is it? Bob Wachter and Peter Pronovost aren’t so sure. Their paper in the November issue of the Joint Commission Journal on Quality and Safety pointedly suggests that enthusiasm might have trumped science in IHI’s estimates of lives saved, as well as in IHI’s choice of at least one of the 6 measures. Wachter and Pronovost scold IHI for a number of faults: for promoting an intervention that is not known with 100% certainty to be effective (rapid response teams); for ignoring other, perhaps more effective interventions that could have been included in the campaign; for using risk adjustment methods to drive the estimate of deaths avoided; for extrapolating data from only 86% of the hospitals in the Campaign; for using unaudited self-reported mortality rate data; for that taking credit as IHI for quality and safety improvements during these 18 months, when in fact many other things were going on at the same time, including other efforts to promote 5 of the 6 Campaign interventions; and last but not least, for not properly accounting for the fact that hospital death rates had already been dropping for some years. Don Berwick’s responsein the same issue of Jt. Comm J. on Quality and Safety is both graceful and helpful, and I recommend that anyone who has expended a lot of effort in the Campaign read both of these papers.

What’s my take on the controversy? I’m not an academic heavyweight like UCSF’s Wachter or Johns Hopkins’ Pronovost. But I have been out in the field, every week, during the Campaign. And it seems to me that something happened during these 18 months. Here’s my analysis. There were about 800,000 deaths in US hospitals in 2004. Brian Jarman tells me that unadjusted Medicare death rates fell about 0.1-0.2% per year between 1996 and 2004, and that his risk-adjusted "Hospital Standardized Mortality Rate" for US Medicare deaths over the same period dropped faster, at 3-4% per year, either because of steadily better performance in the face of increasing risk of death in the hospitalized population, or because of more aggressive coding of the risk status of hospitalized patients, or both. Using the most optimistic of Jarman’s rates against a baseline of 800,000 deaths, a 4% background annual rate of risk-adjusted decline might explain 48,000 fewer deaths during the 18-month Campaign. But not 123,000. And the idea that this dramatic change in trajectory is due to "coding creep," or to hospital CEOs fudging their mortality numbers so that they can collect their bonuses? I don’t think so. Not from what I’ve observed on the ground, at the back door of many hospitals, where there has been a sharp, significant drop in the number of hearses pulling away during the period of the Campaign. That has nothing to do with "coding creep."
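The back-of-the-envelope arithmetic above can be checked in a few lines. This is only a sketch of the calculation as stated in the text: the 800,000 baseline, the 4% annual decline, and the 123,000 estimate are the figures quoted above, and the 1.5-year window is the 18-month Campaign period.

```python
# Sketch of the mortality-trend arithmetic quoted in the text.
baseline_deaths_per_year = 800_000  # approximate US hospital deaths, 2004
background_decline = 0.04           # most optimistic risk-adjusted annual decline (Jarman)
campaign_years = 1.5                # 18-month Campaign window

# Deaths the pre-existing downward trend alone might explain:
trend_deaths_avoided = baseline_deaths_per_year * background_decline * campaign_years
print(trend_deaths_avoided)  # 48000.0

# Gap left unexplained relative to the Campaign's 123,000 estimate:
campaign_estimate = 123_000
print(campaign_estimate - trend_deaths_avoided)  # 75000.0
```

Even on the most generous reading of the background trend, roughly 75,000 of the estimated avoided deaths remain unaccounted for.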

And as for rapid response teams, it seems to me that the flaw in most of the published analyses is what I would call the "full implementation gap." Most organizations that implement RRTs run into several barriers to full implementation. The two principal barriers are 1) nurses don’t want to look like they can’t handle the situation, so they don’t call for help, and 2) physicians don’t want the RRT called on their patients without a chance to intervene themselves first. So many hospitals "implement" RRTs but really aren’t using the teams fully. Those institutions that are capable of executing these types of changes system-wide, over a short time, typically see a sharp, significant decline in code blues and related deaths. Park Nicollet Health Services implemented its RRT at 440-bed Methodist Hospital house-wide over one week, and its data on codes are compelling (see below). This was in a hospital, mind you, that already had a very low Hospital Standardized Mortality Rate.

So yes, it would have been nice to have several positive randomized controlled trials for RRTs before recommending widespread implementation, as Wachter and Pronovost would apparently have preferred. But the hundreds of individual case studies like Park Nicollet’s form a body of evidence, albeit not RCT evidence, that convinces me the Campaign did NOT waste the energy and effort of thousands of hospitals when it induced them to implement RRTs.

If something happened to death rates during the Campaign, why did it happen? As IHI’s leaders have said repeatedly, the Campaign was NOT the only factor in any improvement over the last couple of years. But when I look at what’s been going on in the hospitals and states I’ve been working in, the 100K Campaign is way ahead of whatever’s in second place. As for Wachter and Pronovost’s implication that hospitals would have done all these things anyway, without the Campaign (since five of the six interventions were on the CMS or JCAHO measurement sets, or otherwise on some national policy body’s radar screen), my only response would be "Yes, but…when would hospitals have done them? In my lifetime? Before I retired?" The 100K Campaign brought a truly unique sense of urgency to the national improvement agenda.

So I think something happened, and that the Campaign had a lot to do with it. Clearly, I’m not the one to settle either the "whether" or the "why" argument, and so I will leave the debate to the health services research experts, for whom this issue will no doubt generate lots of grant requests for years to come.

But while the experts worry about their grants and their publications, I worry that many doctors, particularly academics, will seize upon the questions raised by Wachter and Pronovost and use them not as reasons to learn, but as reasons to avoid taking action on ANY of the IHI Campaign planks. In other words, just as the media might have presented an overly enthusiastic picture of the Campaign results, I worry that physicians’ natural skepticism will produce an overly pessimistic reading of Wachter and Pronovost’s paper, until every last question is answered by the academics. Again, my impatience comes through. "When will we get these perfect answers? What is the harm in NOT acting?"

Finally, I must say I was puzzled by the tone of Wachter and Pronovost’s paper. By describing Don Berwick as "chanting" the mantra of the Campaign ("Some Is Not a Number, Soon Is Not a Time"), by implying that IHI somehow had a "conflict of interest" in the Campaign (I’m still scratching my head on that one), and in a variety of other little ways throughout the paper, the authors convey a tone of disdainful academic detachment at best, and a sort of eyebrow-raised disapproval at worst. A lot of people must be asking, "What was that all about?" That IHI didn’t ask the academics’ opinion? That IHI generated too much enthusiasm for improvement, and got too much of the limelight?

Perhaps we should all pause and paraphrase Harry Truman: "It’s amazing how many lives you can save when you don’t care who gets the credit." Our patients need both our science, and our enthusiastic application of the science.