Pitfalls and promising practices drawn from experimentation with quality-improvement methods and performance management in health care.

They write:

“While results have been decidedly mixed [with Lean, Six Sigma, and Continuous Quality Improvement], the field has made some advances while learning a great deal about the best use of these approaches.”

For every health system that gets great results with these methods, there is at least one (and likely more) that struggles and, at some point, gives up on the method instead of figuring out how to do it better.

They cite Dr. Deming first in talking about the need to understand broader systems:

“With respect to the first pillar–seeing the system–health care has long been guilty of myopia, focusing on improvement in hospitals and specialty areas, while failing to understand the larger societal factors responsible for unequal outcomes and skyrocketing costs…”

In some recent work that I did with an outpatient surgery group, one of our key strategies was to break down silos — helping people from areas like registration, pre-op, the family waiting room, and the recovery room see how their individual work fit into a broader system or value stream. This system also includes other locations, such as the individual surgeons' offices.

I'm still amazed (but not surprised) to see what happens when cross-functional teams map out processes or value streams that they are a part of. When communication, visibility, and transparency are added to that improved understanding, great things start to happen.

The authors also write about the need to adapt best practices, rather than rigidly copying:

“Much of the work of improvement in health care involves taking ideas and innovations already established in the evidence base, and adapting them to different care settings. Seeking strict adherence to implementation protocols is thus often counterproductive. In most cases, after communicating which components of an intervention are clinically sacrosanct, leaders must trust professionals to make it work within their own context, culture, and operating constraints.”

I've written before about learning from others, but not just blindly adopting what they did.

The authors then write about measurement systems (an important topic to me). To me, it's not just measurement that matters, but connecting measures to our improvement efforts.

“In health care, measurement systems too often serve the needs of regulators, administrators, academics, and other third parties who use data for research and inspection. Care providers do the arduous work of entering data into forms and spreadsheets, never to see it again unless leaders use it to rank or admonish them.”

I've long said that measurement should be used for improvement, NOT for punishment. When metrics get used as a weapon, it's natural that people will start distorting the system or gaming the numbers instead of actually improving.

“The first design principle of any effective measurement system is this: Put timely, easy-to-interpret data in the hands of those who can make day-to-day change, including doctors, nurses, patients, and families.”

I've seen “Process Behavior Charts” be very helpful as a method for accomplishing that aim. PBCs help people better understand the cause-and-effect relationships between changes they make and their results.

In that surgery center work, the main aim was increasing patient experience scores. In the past, they had a tendency to overreact to every up and down in the metric (as happens at almost every organization). Without sharing the exact data, the previous 18 months' scores were just fluctuating around an average. It was a “predictable” metric… there was no reason or root cause for any small up or down within the upper and lower limits that were calculated for the chart.
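For readers unfamiliar with how those upper and lower limits come about, here is a minimal sketch of the standard XmR-style calculation used for Process Behavior Charts. The scores below are purely illustrative, not the surgery center's actual data:

```python
# Sketch of Process Behavior Chart (XmR) limit calculation.
# The `scores` list is made-up example data, not real patient
# experience scores.

def xmr_limits(values):
    """Return (center, lower, upper) natural process limits."""
    mean = sum(values) / len(values)
    # Moving ranges: absolute difference between successive points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR scaling constant (3 / d2, d2 = 1.128)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

scores = [82, 79, 84, 80, 83, 78, 81, 85, 80, 82]
center, lower, upper = xmr_limits(scores)
print(center)  # 81.4 -- the average the data fluctuates around
```

Any point between `lower` and `upper` is just routine variation ("noise"), which is why reacting to every up and down within those limits is wasted effort.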

One key principle, as an improvement team, was that we were NOT going to declare victory over one or two above-average data points. We weren't going to try to fool anybody with a simplistic before-and-after comparison of two data points. We were going to look for statistical signals that showed performance had changed enough that it couldn't be the result of fluctuation and randomness.

We had a hypothesis: If we made these changes to the system, then the scores would increase. Even if the scores increased by just a few points, we would look for eight consecutive data points above the old average as proof of a sustained increase. We also started looking at the scores in WEEKLY buckets, which might show a signal more quickly than monthly numbers.
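The "eight consecutive points above the old average" rule described above is simple enough to sketch in a few lines. Again, the weekly scores here are invented for illustration:

```python
# Sketch of the run-of-8 signal test: eight consecutive points above
# the old average suggest a real shift, not routine fluctuation.
# `old_average` and `weekly_scores` are illustrative values only.

def run_above(values, center, run_length=8):
    """True if `run_length` consecutive values fall above `center`."""
    streak = 0
    for v in values:
        streak = streak + 1 if v > center else 0
        if streak >= run_length:
            return True
    return False

old_average = 81.4
weekly_scores = [80, 83, 84, 82, 85, 83, 86, 84, 85, 87]
# The first point is below the old average; the next nine are above,
# so the run-of-8 rule fires.
print(run_above(weekly_scores, old_average))  # True
```

The appeal of this rule is that it is hard to fake with noise: a point landing above the average by chance is roughly a coin flip, so eight in a row by chance alone is quite unlikely, which is what makes it a credible signal of sustained improvement.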

“… technical work that fails to connect with the reasons people are called to their professions soon becomes drudgery.”

Pascal Dennis, a sensei of mine, always talked about the need to focus on hearts AND minds in our improvement work.

The authors also warn us, as Deming did, about incentive systems:

“Doctors and nurses are endlessly ranked and rated, and compensated accordingly, but there is very little evidence to suggest that this leads to better outcomes for patients. Possible reasons for the failure of pay-for-performance programs include the fact that they make faulty comparisons between dissimilar organizations, induce groups to misreport their performance, and belittle and discourage care providers, resting on the problematic assumption that financial incentives are what drive their behavior. The social sector should approach payment incentives with caution, and invest more time in cultivating the intrinsic motivations we described in the previous lesson.”

I agree. “Motivational Interviewing” is a method that helps us draw out intrinsic motivations that already exist (and, again, please check out that webinar).