Authors

Mitch Keamy is an anesthesiologist in Las Vegas, Nevada.
Andy Kofke is a Professor of Neuroanesthesiology and Critical Care at the University of Pennsylvania.
Mike O'Connor is a Professor of Anesthesiology and Critical Care at the University of Chicago.
Rob Dean is a cardiac anesthesiologist in Grand Rapids, Michigan, with extensive experience in O.R. administration.

The Inspections Will Continue Until the Quality Improves...

Not so long ago, on this continent, one of us
was Chief of Staff at a local Columbia hospital when the Joint
Commission on Accreditation of Healthcare Organizations (JCAHO, or here, the JC) appeared for a scheduled survey. Surveys were a little
different then; they were announced long in advance, and if a hospital did
exceptionally well, it would receive "Accreditation with Commendation,"
sort of a gold star. RFIs (Requirements for Improvement) were
called "Type 1" recommendations and were taken as show-stoppers by all parties,
requiring follow-up to ensure correction and compliance. Enough Type 1s and
you lost your accreditation, which meant financial ruin. At that time,
the
Columbia hospital chain owned over 300 hospitals, and had convinced
the Joint Commission that their operating practices were so different
from other hospitals that they needed their own dedicated surveyors.
During the inspection at issue, the physician on the surveying team insisted on
a Type 1 recommendation based upon his sense that medical staff leadership was being marginalized by administration (or some such; the actual wording was never shared
with the clinicians). Subsequently, administration announced that this
Type 1 recommendation had been "rescinded" and that the hospital
had received "Accreditation with Commendation." Rumor circulated that said
physician was henceforth dis-invited from surveying Columbia hospitals...

The JC was once a consensus-building organization that followed the
lead of its subscribers and accomplished a great deal. When Congress
conveyed "shall deem" authority for Medicare certification upon the
JC, it wrested control of the JC from its subscribers and
transformed it into a regulatory limb of Congress and CMS. While the JC is undeniably under the direction of honorable people, its efforts have become
distorted by the politics of money and power that surround it. It's no secret around hospitals that
practitioners see no connection
between the Joint Commission process and quality of care. Indeed, every practitioner understands
that some of the worst hospitals attain JC accreditation almost
effortlessly, while some of the best struggle to maintain their
certification. Historically,
JC inspection was centered on physical plant and policy/procedure.
Dreadful care was fine, as long as the policy and procedure manual was
up to date and concordant with the most recent guidance. Joint
Commission accreditation was, and is, a high stakes game, and
unfavorable decisions are very likely to be contested in court (or
with the threat of litigation). Consequently, JC regulatory activity
has progressively focused on inspection activities that can withstand
such litigation. This trajectory has relentlessly uncoupled Joint
Commission
inspection and accreditation from even a remote relationship to quality
of clinical care. High scores and
necessary accreditation have become contingent upon putting up a
temporary facade of strict compliance, which frequently obstructs,
rather than enhances, care. For the past two decades, the closest thing
to a Potemkin village in American culture has been a hospital
preparing for a JC survey.

Since clinicians are primarily focused on care
of individual patients, incentives are significantly misaligned between
those
clinicians and facility administrators. Those administrators have the
unenviable task of reconciling the absolute need for accreditation
(upon which most
insurer reimbursement depends) with the uncooperative ambivalence or
even hostility of
clinicians who see the survey as a "Chinese fire drill" disrupting
routines of care, and diverting attention from more
relevant clinical concerns. This is exacerbated by the JC's frequent
use of tin-eared apparatchiks as surveyors.

Over the past decade, other certifying organizations have
arisen to compete with the Joint
Commission. In response, the JC has attempted to diversify its
portfolio, embracing clinical quality and safety as meaningful
additions to its mission. The Joint Commission has had limited success
with these endeavors. Why? Because, with the mindset of
inspectors, they are constitutionally incapable of this transformation.
Just as putting on a white coat does not make you a clinician,
declaration of intent does not transform the JC from an inspecting
organization into a quality/safety organization. There is an
enormous amount to know about both, and the Joint
Commission has struggled (along with all of health care) in even
understanding where the state of the art currently resides. There is
perhaps no better
example of this struggle than the Joint Commission’s Sentinel Event
policy, which has been in force for more than a decade, has been through
multiple revisions, and has generated almost no meaningful reporting.
Why? Because, even with its Sentinel Event policy, institutions feel threatened by the JC. Thus, the only events reported are those unlikely to
generate any regulatory interest. Almost always, the first hospital
discussion of a sentinel event is one that justifies classification of
the particular event as
not reportable to the Joint Commission. As a result, in a world filled
with sentinel events, the Joint Commission’s database
has failed to capture most of them. Of all of the
entities in health care, none is currently better positioned than the Joint
Commission to study, analyze, and learn from such events and
to widely distribute the lessons learned. This
lost opportunity is staggering. Example? Wrong side
surgery.

Wrong side surgery happens very rarely; but, in a country of 300
million people, it happens regularly. The JC is determined to
change this, and has developed its "Final Verification" protocol to
extinguish the problem. The outcome? Absolutely no measurable change.
None. Zero. Why? Because they didn’t go out and study how such failures
occur. This would have required a different sort of approach; not a
regulatory focus, but an investigative one; an approach that requires
intellectual resources, specialists, outside
expertise, and the insight that mere proscription is insufficient. It
could be done given the will and vision, but it would require major transformation of the JC culture. Wrong
side/site surgery happens because it is very tricky to prevent 100% of
the time. Preventing it requires more than a mandate to fill out a form
(indeed, only people disconnected from bedside care could imagine that
this could be effective).

The irony is this: the Joint Commission has worked hard to develop a comprehensive database that catalogs such sentinel events but has not developed an appropriate infrastructure to understand
how such events happen;
they are not process-savvy. This lack of understanding is the root cause
of the failure of final verification. Thank goodness the National Transportation Safety Board (NTSB) does
not
take a similar approach to aviation accidents. This is important. For
thirty years, healthcare quality efforts have been primarily modeled
on the manufacturing industry: Deming, Six Sigma, Total Quality
Management. That's what the consultants have been selling, and that's what the
healthcare business people have been buying. Wrong model. That's a
production-oriented measurement philosophy. That's not what the NTSB
does; their model is based on an intimate understanding of process, and how it fails in specific instances. The
NTSB is deep with human factors engineers, and the first object of
attention in any flight mishap is the recorder: the detailed
process record stored in a virtually impregnable, beacon-alerting box. The healthcare environment needs something besides "widgets-off-the-line" thinking to
help it
improve the very difficult business of providing care to patients, and
at present,
that necessary something is not to be found within the Joint Commission, nor do they appear to be heading in a promising direction. But, as the saying goes,
"you can't beat something with nothing."

Fortunately, the University HealthSystem Consortium is just such a something. As the name implies, UHC is a group of academic
health systems collaborating to advance systems of care that make
clinical and economic sense, guided by data
and ongoing experience. They are attempting to elevate care through
careful understanding of the processes involved in the provision of
bedside care, and by helping institutions deal with
the daunting logistical effort required to support that care. In this
effort, they are enlisting the help and input of participants at all
levels in the care chain. All
of this stuff is hard; much harder than it appears to outsiders, who
imagine that caregivers should instinctively know what to do.
Ask any clinician how they define quality, for instance, and you are
likely to
get the Potter Stewart answer: "I know it when I see it." (Justice
Stewart was referring to pornography, but never mind that...) Although
such intuition is valuable, it is not sufficient to drive improvement. The truth
is that the state of the art is elusive, variable, and continually
evolving in ways that are difficult to perceive or explain. In UHC,
everyone participating, a self-selected group, sees the floor and is
trying to get further away from it. Amongst academic practitioners, UHC
participation carries far more weight than Joint Commission
accreditation.

Quality, like the proverbial elephant, has a radically different feel
depending
upon which blind man you are and what part of the elephant you are
touching. If quality were easy to understand or measure, very little
would have been published about it, and no one would have been able to
build a career in healthcare founded upon it. For a quick introduction,
here are three resources. The original framework for discussions of
medical quality was exhaustively laid out by Donabedian in 1966 in the Milbank Memorial Fund Quarterly. From a modern population
perspective, Berwick and colleagues have distilled two decades of original work
by themselves and others into a nice summary, branded as "The Triple Aim." From the
individual patient perspective, a particularly
pragmatic definition can be found in a document published by the
AHRQ. Since what constitutes quality in healthcare remains a matter of
discussion and dispute, is it any wonder that quality improvement is a
difficult issue?

Quality improvement is at its core a translational activity. It imports
ideas from other domains, maps them to the terrain of clinical
experience, and tries to find a better, safer, cheaper, path through
the jungle of clinical medicine. The current state of the
science of quality improvement in healthcare is woefully incomplete.
Our understanding is more akin to Aristotle's
primitive notions of earth elements than it is a systematic
understanding; we have neither attained
the insightful scope of the theory of evolution, nor the immense power
and detail
of molecular biology. Most good ideas for quality are doomed to fail,
mostly for reasons that are only obvious in hindsight (there are a few
prescient practitioners who can see these failures prospectively).
Progress on quality is arduous, and indeed does require the kind of
deep understanding (or serendipity) required for progress elsewhere in
medicine. Perhaps the biggest obstacle to progress in quality is that
very few
people, inside or outside medicine, truly understand and believe this.
Until the actual process of investigating quality and its
improvement undergoes fundamental intellectual advancement, our efforts
will be inefficient and disappointing.

In the meantime, the inspections will continue until quality improves.

I don't believe extortion is a JCAHO motive, but I do believe that their 40-year monopoly on "deeming authority" has resulted in a certain complacency. My understanding of the change you describe comes from this blog;

and suggests that regular, recurring accountability will now be a feature of their relationship to Congress. Is this good or bad? Well, since the status quo is not particularly inspiring, change ought to be good, right? On the other hand, this will bind JCAHO more tightly to the whims of political power, further decreasing what little autonomy it possesses, and the move opens the field to new players willing to do the political bidding of whoever is in charge of meting out contracts; we have seen how such ideological patronage contracting has served the country in recent years...

Your argument that the Deming model is not particularly applicable to medical care makes sense, but he and his disciples do make some important points worth learning from. One of them is defining quality.

Every discussion like this invariably brings up the definition of "quality." Then, invariably, we meander off into how hard it is to define (you guys did it in this otherwise sharp discussion).

Deming, and Frederick Taylor before him, defined "quality" in a uniform and useful way, one perfectly applicable to our discussion:
quality is essentially conformance to standards. Makes sense. In this respect the JCAH is right; that's what they look for, conformance to standards.

The problems with the JCAH you identify, and problems they are (the Potemkin Village reference is perfect), lie not in their demand for conformance to standards, but in the fact that the standards they set up have very little to do with clinical care.

This is the problem we face: not defining quality, but identifying the standards we desire. One of the reasons you anesthesiologists have been more successful than other specialties is your ability to identify standards for certain important clinical events, e.g. misplaced intubations. But again, there is more uniformity in certain OR regimens than in other clinical areas (not to diminish your efforts). So it is easier to define the numbers you have, and the ones you want to reach, for events that are actually important.

It's not always easy in clinical care to identify standards. It is doable and should be done, but unfortunately hospitals and clinicians don't keep internal statistics about clinical events very well. (Example: ask anyone in the hospital what the overall mortality rate in their hospital is, to within half a percent. Of course, by itself it doesn't convey much information, but it's a basic statistic of what we do. I'm willing to bet no one you ask, clinician or administrator, will know. NOT ONE PERSON will know the overall mortality rate in their hospital. Seems unbelievable, but in my years I never met one person who knew, unless they were guessing. Ask them how they know if they give you a figure.)

You might say that's not important. I would disagree, based on my experience, but regardless, it's just an example of how primitive our knowledge is. Without simple internal statistics, how can anyone identify the standards they should conform to? Case mix and practice variability make it hard to identify standards for clinical care, but we will not get very far with any nonuniform procedure without much better statistical reporting from every hospital about what it does.

In essence, it's not defining quality that's the problem; it's identifying the standards that go into actually attaining quality. At that point, the model you describe will actually work in getting us where we should be.

Cory and Scott: each of you, in your admirable idealism, misses the gist of the argument. Cory, the reason we didn't discuss the nitty-gritty of what constitutes quality is that the argument is about the need to address techniques of quality improvement. I have no doubt that a list of standards could (and should) be established (no wrong-sided surgery is just such a standard), but as the saying goes, "if wishes were horses, beggars would ride." Imagining that articulating that list would eventuate in improvement is optimistic; progress towards this goal will require the acquisition of basic understanding in healthcare quality improvement that is much more advanced in air transport, and which requires a tremendous intellectual commitment, not just the application of administrative tools and imperatives. Or, put another way, knowing where we want to go is not the same as being there. We just don't have the ruby slippers. Existing TQM-like techniques may tell us how we are progressing, but they won't tell us how to get there; it's like a swim parent shouting "swim faster, swim faster" at their kid, without any idea how he or she might accomplish that goal.

And Scott, while your finely crafted words reveal an obvious commitment to the goal of accomplishing higher quality, I just don't believe that the technology exists at this time in healthcare quality management to realize that lofty (and profitable) ambition. Understand that by technology, I don't mean computers and software; I mean procedural and behavioral approaches based upon a deep understanding of the process factors that lead to errors and suboptimal quality in healthcare. Just compare NTSB work product with healthcare system quality interventions. The difference is striking. They have it right. We don't. That's why commercial air travel is so darn safe, and healthcare is not. Yet.