Summary

Background

This study describes an application of the principles of CQI (continuous quality improvement) to the practice of clinical medicine. In this case, the Northern New England Cardiovascular Disease Study Group (NNECVDSG) established a CABG registry in 1987; all five hospitals performing CABG in Maine, New Hampshire and Vermont participated. In 1990 it became clear that mortality for bypass surgery varied between institutions to an extent that could not be accounted for by case-mix severity. The project described here was an attempt to analyze this variation and to improve results, using the principles of CQI.

Methods

Basic approach: A three-pronged intervention was implemented between July 1990 and April 1991. Mortality data were gathered before, during, and after the intervention (from 1987 to 1993).

Intervention

Feedback: Surgeons received feedback on their own mortality statistics, on those of other surgeons in their institution, and on aggregate statistics for the other institutions.

CQI training: Training in the techniques of CQI, consisting of a 2-day session for the executive committee and two 4-hour sessions for others.

Site visits: Extensive visits by various members of the project teams, including surgeons, to other sites to observe and reflect on differences in approach to patient care and surgical techniques.

Analysis: Patients undergoing CABG (but not CABG incidental to valve surgery) were included in the study; all five centers and 23 surgeons in the region participated. Data were collected on 15,095 patients between July 1, 1987 and July 31, 1993; clinical data (demographics, co-morbid conditions, catheterization results, urgency of surgery) were collected up to hospital discharge or in-hospital death. Regression analysis was performed on a continuous basis to develop a predictive model for mortality based on patients' clinical characteristics, and observed mortality was then compared to predicted mortality. Structured interviews of the five site leaders were carried out in 1994 to ascertain the changes in practice that were made as a result of the intervention.
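The observed-versus-predicted comparison can be sketched as follows. The model form (a logistic regression on clinical characteristics) matches the approach described above, but the risk factors, coefficients, cohort, and death counts here are entirely hypothetical illustrations, not the study's data.

```python
import math

# Hypothetical logistic-model coefficients; the study's actual model
# and variables are not reproduced in this summary.
INTERCEPT = -4.0
COEFS = {"age_over_70": 0.9, "low_ejection_fraction": 0.8, "emergency_surgery": 1.2}

def predicted_risk(patient):
    # Logistic model: p = 1 / (1 + exp(-(b0 + sum of b_i * x_i)))
    z = INTERCEPT + sum(b * patient.get(name, 0) for name, b in COEFS.items())
    return 1.0 / (1.0 + math.exp(-z))

# A toy cohort of 300 patients (three clinical profiles, repeated).
cohort = [
    {"age_over_70": 1},
    {"age_over_70": 1, "low_ejection_fraction": 1, "emergency_surgery": 1},
    {},  # no listed risk factors
] * 100

# Expected deaths = sum of per-patient predicted risks; the comparison
# statistic is the observed/expected (O/E) ratio.
expected_deaths = sum(predicted_risk(p) for p in cohort)
observed_deaths = 9  # hypothetical registry count
print(f"expected {expected_deaths:.1f}, observed {observed_deaths}, "
      f"O/E = {observed_deaths / expected_deaths:.2f}")
```

An O/E ratio below 1 indicates fewer deaths than the case-mix-adjusted model predicts, which is the sense in which the post-intervention results below are interpreted.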

Results

Changes in patient characteristics: Compared with the pre-intervention period, patients in the post-intervention period were sicker, with more co-morbid conditions and worse hemodynamic parameters.

Mortality: During the pre-intervention period, observed mortality was close to predicted mortality. During the post-intervention period, 308 deaths were predicted and 234 were observed, for a reduction of 24% (CI, 10% to 33%), or 74 fewer deaths than predicted. These reductions were seen in all hospitals except the one with the lowest pre-intervention mortality rate.
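The arithmetic behind these figures is straightforward: the relative reduction is one minus the observed/predicted ratio, and the deaths averted are the difference between the two counts.

```python
# Post-intervention figures from the study.
predicted, observed = 308, 234

deaths_averted = predicted - observed          # 308 - 234 = 74
relative_reduction = 1 - observed / predicted  # 1 - 234/308, about 0.24

print(deaths_averted, round(relative_reduction * 100))  # prints: 74 24
```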

Changes in surgical care processes: As a result of the intervention, multiple changes in surgical and peri-operative care were carried out at various centers. These changes were documented during the 1994 structured interview and are listed in the article.

Authors' Discussion

The authors discuss a number of alternative explanations for their results. They argue that confounding variables should not figure prominently, since expected mortality was adjusted for major confounders (i.e. how "sick" the patients were). In fact, the patients tended to be sicker in the post-intervention period. They discount the Hawthorne effect (mere observation of a process leading to improved outcomes), since observation started three years before the intervention actually took place. They note that some improvement may have been due to medical progress in general, but that the decrease in risk-adjusted mortality reported elsewhere is far less than that which they observed.

Finally, the authors note that, due to the design of the study, it was not possible to determine exactly which changes in practice that resulted from the intervention were actually responsible for the improvement in mortality. They briefly review similar interventions that have been reported elsewhere.

Editorial

The accompanying editorial is by Donald Berwick, MD, one of the "fathers" of the application of CQI techniques to medicine. He notes the important role of scientifically controlled randomized clinical trials, but also the need for less controlled, though not less well grounded, studies such as this one. He explains the rationale behind the CQI approach and the need for knowledge acquired by this method to be made widely available.

Comment

By its very nature, a study which compares recent outcomes to historical controls is subject to multiple sources of bias. One cannot tell exactly what would have happened during the trial period in the absence of any intervention. It is also impossible to be sure that no confounding variables have been overlooked (hypothetical example: an inept healthcare worker quits because of the increased scrutiny induced by the trial). The authors of the study deal with these issues explicitly, and make a convincing argument that even if we cannot be sure of the exact magnitude of the benefit, we can be fairly confident that a significant result was achieved.

Another issue, the fact that we cannot know exactly which interventions caused the improvement in mortality, is not a major problem. What counts is not so much the specific changes that took place, many of which might not be transferable to other institutions, as the process which analyzed the situation and led to these changes. The CQI process most definitely is transferable.

CQI, which is sometimes referred to as TQM or TQM/CQI (total quality management / continuous quality improvement), is a method of continuously improving a process. Initially it was applied to industrial production (by Deming and Shewhart) and played a significant role in the success of post-war Japan. The principles of CQI are fairly straightforward, involving a number of analytic tools (flow charts, diagrams) and a cycle of analyzing a process, changing it, and re-analyzing. It is a strongly empirical method, grounded in common sense and augmented with structured analysis.

Clinical medicine has characteristics that make it a good candidate for CQI: the processes are complex, they have multiple interactions with each other and results are usually quantifiable in one form or another. The empirical, common sense aspect of CQI should also fit well with the practice of medicine. There are a number of examples of excellent results obtained by projects such as the one presented here, but why are there not more of them? What are the obstacles?

In my opinion, one obstacle lies in the language used by CQI, which often sounds like a combination of industrial engineering, sociology and business, and which is alien to most physicians. I recently took a course in TQM/CQI as part of an MPH program. My initial impression was that this was mainly corporate-style hype. I changed my mind when I began to "get" the point. Another obstacle to greater physician involvement lies in the fact that most doctors' experience with quality control comes from dealing with habitually repressive institutions such as the peer-review organizations and hospital utilization review committees. These are not likely to enhance positive feelings towards quality improvement efforts. These impressions need to be changed if we are to reap the benefits that well-designed QI projects can provide.

It will be interesting to see the effect that managed care will have on CQI. On the one hand, the economic interests of managed care are such that it should foster a systematic approach to managing quality issues. On the other hand, the intense competition between health-care providers that managed care is accentuating is detrimental to the spirit of teamwork necessary for effective quality improvement projects to work.

I would strongly recommend that all interested physicians, not just cardiovascular specialists, take a look at the article and the editorial summarized here. Other references include: