Case Objectives

Describe the unintended consequences of
public reporting and interventions to prevent them.

The Case

A 55-year-old woman with end-stage renal
disease requiring hemodialysis and coronary artery disease (status
post-coronary artery bypass grafting and placement of a St. Jude
prosthetic valve) was admitted to the medical service with
palpitations and chest pain. She reported missing her scheduled
hemodialysis session, and her symptoms resolved with prompt
inpatient hemodialysis. In the course of her work-up, it was noted
that she was subtherapeutic on her anticoagulation with warfarin,
and a heparin drip was initiated with the plan to bridge her until
her INR was in the therapeutic range.

Mostly in response to increasing pressure from
the hospital's administration to improve compliance with publicly
reported quality measures, the attending physician recommended
pneumococcal vaccination, and it was administered. Later that day,
the patient complained of pain over her right upper arm. The
attending told the patient that this was a common complaint after
immunization and that it would resolve. The next day, the patient
reported that her pain was worse, and the team noted an
8-centimeter hematoma within the muscles of her upper arm. The
hematoma resolved spontaneously. There was no permanent
harm.

The Commentary

Since first proposed by pioneers like Ernest
Codman and Florence Nightingale, the public reporting of measures
of hospital quality has been viewed with skepticism and at times
outright scorn by members of the medical profession. Despite these
protests, there has been an explosion of public reporting over the
last few years, both in the United States and around the world.

The public release of hospital quality measures
is typically undertaken with a number of explicit and implicit aims
in mind.(1)
Ideally, public reporting improves transparency and can empower
patients to make better choices about where to seek treatment. This
increases hospital accountability for quality of care and provides
a stimulus to improve for fear of losing market share to local
competitors. Public reporting also enables the stewards of the
public's health (including government, regulators, accreditors, and
payers) to track performance over time, and these data can be tied
to compensation through "pay-for-performance"
contracts.

Public reporting begins with the selection of
evidence-based processes or outcomes to be measured and the
identification of meaningful opportunities to improve practice.
There are many organizations currently involved in the design and
implementation of public reporting in the United States. The most
important of these is the Hospital Quality Alliance (HQA), a
national public-private collaboration established to encourage
hospitals to voluntarily collect and report hospital quality
performance information.(2) All
acute care hospitals in the United States have been invited to
participate in this initiative, and by linking participation in the
program to the annual Medicare payment update, the Centers for
Medicare and Medicaid Services has achieved participation rates of
greater than 98%. The HQA began in late 2003 with a limited set of
10 quality measures spread across three clinical conditions: heart
failure, acute myocardial infarction, and pneumonia. By 2006, the
program had expanded to include 21 measures (Table) related to the original three clinical areas,
with the addition of new measures focused on the quality of
surgical care.

Despite the natural appeal of public reporting,
relatively little is known about its actual impact on quality of
care or patient outcomes. After New York State began publicly
reporting hospital-specific mortality following coronary bypass
surgery, mortality rates were observed to fall, but the mechanism
through which this occurred continues to be debated.(3,4) Similarly, a report on the impact of the "Quality
Counts" program in Wisconsin found that hospitals involved in
public reporting of risk-adjusted outcomes engaged in more quality
improvement activities and were more likely to show improved
outcomes than control institutions.(5)
Although these limited data support one of the premises of
transparency—that it will catalyze hospitals' quality
improvement activities—the jury is still out about the impact
of public reporting on patient decisions regarding where to obtain
medical care. What little is known suggests that hospital market
share has been largely unaffected by these initiatives.(6)

Concerns over Public Reporting

Concerns expressed about public reporting can be
grouped into two major categories—those that question the
validity of the undertaking itself and those that raise questions
about its unintended consequences.(7-11)
Among the first group, it is often pointed out that the field of
quality measurement in health care is still in its infancy and
there are as yet only a small number of measures based on sound
evidence. Further, measures that have been validated for one
purpose are sometimes used inappropriately for other purposes. For
example, patient safety indicators derived from administrative data
sources can be valuable tools for case identification and the
monitoring of rates at a single organization but should not be used
to compare rates across hospitals. Indeed, Romano and
colleagues reported that, when derived from administrative data
sources, more than half of the observed variation in risk-adjusted
rates of postoperative infections seen across hospitals could be
attributed to differences in coding practices and not actual
outcomes.(12)
Nonetheless, some health plans have begun to grant subscribers
access to data that compare hospitals in a region.

Additionally, attempts to compare outcomes across
institutions are hampered by both real and perceived challenges of
carrying out risk adjustment and by limited statistical power when
studying rare events. While this has led to a gradual shift toward
the use of process measures (what was done) instead of outcomes
(what happened to the patient), improving the performance of
process measures may fail to result in actual improvements in the
outcomes of care. This can occur when the association between the
process measure and the desired outcome is weak, or when there is
improvement in the documentation of the process but not in the
actual process itself. This latter example may lead hospitals to
believe that they have improved care when in fact they have simply
improved documentation.

The second concern expressed about the public
reporting of quality data relates to the phenomenon of unintended
consequences. One framework for classifying unintended consequences
distinguishes those that result in direct harm from those that
cause harm indirectly. There are several types of events that can
lead to direct harm. The development of an allergic reaction
following medication administration in a patient not previously
known to have an allergy is one example of direct harm. On the
other hand, direct harm is more predictable when errors are made or
guidelines are violated. The administration of a beta blocker to a
patient with heart block and the provision of multiple influenza
vaccinations to a patient during a single hospitalization are both
examples of this type of unintended consequence. Finally, direct
harm can occur when caregivers are faced with diagnostic or
therapeutic uncertainty, and this risk can be exacerbated by the
incentives created by public reporting. For example, an initiative
designed to increase the use of venous thromboembolism prophylaxis
that did not adequately specify contraindications to the use of
heparins is likely to produce excessive rates of bleeding. In
another example, the current standard that antibiotics should be
given to patients with pneumonia within 4 hours of arrival in the
ED may result in antibiotics being given to some patients who prove
to have heart failure or pulmonary embolism.(11)

While less apparent, the problem of indirect harm
may be just as great as or greater than the examples described
above. First, patients may have important health care matters
neglected when physicians or nurses shift their attention to those
aspects of care for which they are being asked to report publicly.
Consider the Pyrrhic victory of achieving optimal glycemic control
during the hospitalization of a diabetic patient while failing to
intervene to correct untreated hyperlipidemia. Furthermore,
hospitals that "play to the test" by reallocating valuable human
and other resources to excel in public measures may overlook more
pressing opportunities to improve the care of patients with
conditions not subject to public reporting. For example, hospitals
may decide to hire additional clinical personnel to improve
performance from 98% to 100% on a measure such as the use of
aspirin in myocardial infarction while the quality of care for
patients with stroke or COPD is neglected. Another concern that has
been raised about public reporting, especially when outcomes such
as mortality or complications are being compared, is the risk that
hospitals will turn away high-risk patients who might tarnish their
scores. Finally, patients and payers may misinterpret quality
performance information and make poor choices about where to seek
care or where to direct patients. This is a significant risk given the
limited number of clinical areas for which public reporting has
been implemented and in light of the fact that strong performance
in one area does not predict such performance in other clinical
areas.(13)

Reducing the Burden of Unintended
Consequences

Given the many ways in which quality improvement
and public reporting can result in unintended consequences, no
single approach can be relied upon to prevent all problems. First
and foremost, hospitals must recognize that unintended consequences
are bound to occur following any effort to implement change.
Failure Mode and Effects
Analysis (FMEA) is a systematic, proactive method for
evaluating a process to identify where and how it might fail, and
to assess the relative impact of different failures. In other
words, FMEA specifically concerns itself with attempting to
anticipate unintended consequences and should be incorporated into
the planning for any major quality improvement project.
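In conventional FMEA practice, each identified failure mode is scored for severity, likelihood of occurrence, and likelihood of escaping detection, and the product of the three scores (the risk priority number) is used to decide which failure modes to address first. The sketch below illustrates that ranking step; the failure modes, scores, and class names are hypothetical and are not drawn from the commentary.

```python
from dataclasses import dataclass


@dataclass
class FailureMode:
    """One potential failure mode identified during FMEA planning.

    Severity, occurrence, and detection are each scored 1 (best) to
    10 (worst) by the improvement team, per conventional FMEA scales.
    """
    description: str
    severity: int      # how serious the resulting harm would be
    occurrence: int    # how likely the failure is to occur
    detection: int     # how likely the failure is to escape detection

    @property
    def rpn(self) -> int:
        # Risk priority number: the standard FMEA ranking score
        return self.severity * self.occurrence * self.detection


# Hypothetical failure modes for an inpatient immunization initiative
modes = [
    FailureMode("Vaccine given despite contraindication", 7, 3, 5),
    FailureMode("Hematoma in an anticoagulated patient", 4, 4, 3),
    FailureMode("Duplicate vaccination in one admission", 3, 5, 6),
]

# Address the highest-risk failure modes first
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:3d}  {m.description}")
```

A team would revisit these scores after each intervention to confirm that the risk priority numbers actually fell.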

Education is the foundation of good practice and
remains a necessary though not sufficient way to prevent errors.
Education must be supplemented with checklists, protocols, and
other reminder systems, and whenever possible, computers can and
should be employed to minimize the need to rely on human vigilance.
For example, electronic order entry systems can be, and are being,
used to prevent the administration of medications to patients with
specific contraindications. Forcing
functions are another approach that can be used to reduce the
risk of unintended consequences. For example, a
medication-dispensing and administration system can be designed to
require the nurse to document a heart rate before releasing a beta
blocker for administration. Monitoring is an important and probably
underutilized approach to addressing the problem of unintended
consequences; underutilized because unintended consequences are not
always incorporated into the quality improvement planning process
and because quality improvement staffs are stretched thin complying
with current reporting requirements. However, the automated
detection of adverse events was described more than 15 years ago
and can dramatically reduce the burden of data
collection.(14)
For example, automated detection can be used to measure how often
patients with heart failure receive treatment with antibiotics
intended for treatment of pneumonia, or how frequently patients on
narcotics are administered opiate antagonists like naloxone to
reverse oversedation or respiratory depression.
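The naloxone example above amounts to a simple rule over medication administration records: flag any patient who receives an opiate antagonist after an opioid dose. The following sketch shows one way such an automated trigger might be written; the record layout, drug list, and function name are illustrative assumptions, not any real system's schema.

```python
from datetime import datetime

# Drugs treated as opioids by this hypothetical trigger
OPIOIDS = {"morphine", "hydromorphone", "oxycodone", "fentanyl"}


def flag_naloxone_reversals(records):
    """Flag patients given naloxone after an opioid dose, a common
    automated trigger for possible oversedation events.

    records maps a patient ID to a list of (drug, timestamp) doses.
    Returns a list of (patient ID, naloxone timestamp) flags.
    """
    flagged = []
    for patient_id, doses in records.items():
        opioid_given = False
        for drug, when in sorted(doses, key=lambda d: d[1]):
            if drug in OPIOIDS:
                opioid_given = True
            elif drug == "naloxone" and opioid_given:
                flagged.append((patient_id, when))
                break  # one flag per patient is enough for review
    return flagged


# Hypothetical administration records for two patients
records = {
    "pt-1": [("morphine", datetime(2007, 5, 1, 8)),
             ("naloxone", datetime(2007, 5, 1, 11))],
    "pt-2": [("oxycodone", datetime(2007, 5, 1, 9))],
}

print(flag_naloxone_reversals(records))
# pt-1 is flagged for review; pt-2 received an opioid but no naloxone
```

Each flag would then be reviewed by quality improvement staff, so the trigger only has to be sensitive, not perfectly specific.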

Payers also have an important role to play in
ensuring that unintended consequences do not diminish the benefits
gained through public reporting. First, they must set and
communicate realistic goals. For many measures, a goal of 100%
performance is not possible without resulting in an unacceptable
rate of unintended effects. For example, it is unlikely that
hospitals can deliver antibiotics to 100% of patients with
pneumonia within 4 hours without exposing many other patients to
antibiotics unnecessarily. By instituting a broad range of
measures, by rotating them frequently, and by not announcing
beforehand which will be made public, payers may also be able to
prevent hospitals from gaming the system. Finally, where
appropriate, payers should consider incorporating predictable
unintended events into the standard public reporting measure
sets.

The Case Revisited

How can this particular case be viewed in light
of the previously outlined framework? First, the current
immunization measures apply only to patients with pneumonia, so
while this patient is likely to have benefited from immunization,
there was in fact no such reporting requirement, and the attending
physician was not correct in his or her belief that failure to
immunize would have affected the hospital's performance ratings.
Second, because anticoagulation is not considered a
contraindication to the administration of immunization (nor to
intramuscular injection more generally), this is probably best
viewed as a nonpreventable case of direct harm that occurred while
following clinical guidelines. Although hematoma formation and
frank bleeding are recognized risks following both subcutaneous and
intramuscular injections, the risk does not appear to be
significantly increased among patients on anticoagulants, and thus
the benefits of immunization outweighed the risk of
complication.(15-18)

Certainly, had the patient's partial
thromboplastin time (PTT) been extremely elevated, it would have
been prudent to wait until it had fallen back to a therapeutic
range before immunizing. More importantly, if there was reasonable
doubt on the part of the team responsible for this patient's care
that administration of the immunization was either relatively or
absolutely contraindicated, then professionalism dictates that they
set aside the hospital's interests in favor of the patient's. This
is especially important to appreciate given that specifications for
existing quality measures do not enumerate every possible scenario
that might make immunization contraindicated, as this is neither
possible nor practical from the standpoint of chart review. This is
another example of why performance rates of 100% are not always
possible or desirable. Finally, regulatory agencies must be willing
to modify problematic measures. For example, if a sufficient number
of hospitals reported that patients on heparins experienced
hematomas following immunization, then the measure ought to be
revised to reflect this.

Faculty Disclosure: Dr. Lindenauer has
declared that neither he, nor any immediate member of his family,
has a financial arrangement or other relationship with the
manufacturers of any commercial products discussed in this
continuing medical education activity. In addition, the commentary
does not include information regarding investigational or off-label
use of pharmaceutical products or medical devices.

References

1. Marshall MN, Shekelle PG, Leatherman S, Brook
RH. The public release of performance data: what do we expect to
gain? A review of the evidence. JAMA. 2000;283:1866-1874.