Glossary

C

Checklist

See Primer. Though a seemingly simple intervention, checklists have played a leading role in the most significant successes of the patient safety movement, including the near-elimination of central line–associated bloodstream infections in many intensive care units.

Clinical Decision Support System (CDSS)

Any system designed to improve clinical decision-making related to diagnostic or therapeutic processes of care. Typically a decision support system responds to "triggers" or "flags"—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter.

CDSSs address activities ranging from the selection of drugs (e.g., the optimal antibiotic choice given specific microbiologic data) or diagnostic tests to detailed support for optimal drug dosing and support for resolving diagnostic dilemmas. Structured antibiotic order forms represent a common example of paper-based CDSSs. Although such systems are still commonly encountered, many people equate CDSSs with computerized systems in which software algorithms generate patient-specific recommendations by matching characteristics, such as age, renal function, or allergy history, with rules in a computerized knowledge base.

The distinction between decision support and simple reminders can be unclear, but usually reminder systems are included as decision support if they involve patient-specific information. For instance, a generic reminder (e.g., "Did you obtain an allergy history?") would not be considered decision support, but a warning (e.g., "This patient is allergic to codeine.") that appears at the time of entering an order for codeine would be. A recent systematic review estimated the pooled effects for simple computer reminders and more complex decision support provided at the point of care (i.e., as clinicians entered orders in computerized provider order entry systems or performed clinical documentation in electronic medical records).
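The codeine example above can be sketched as simple rule matching against a patient record. This is a minimal illustration, not a real CDSS: the patient fields, drug set, and renal-function cutoff are all assumptions invented for demonstration.

```python
# Hypothetical rule base: drugs needing dose adjustment in renal impairment.
RENALLY_CLEARED = {"gentamicin", "vancomycin"}

def check_order(patient, drug):
    """Return patient-specific alerts triggered by an order for `drug`."""
    alerts = []
    # Patient-specific warning, as opposed to a generic reminder:
    # it fires only when this patient's allergy list matches the order.
    if drug in patient.get("allergies", []):
        alerts.append(f"This patient is allergic to {drug}.")
    # Dose-adjustment rule keyed on renal function (illustrative cutoff).
    if drug in RENALLY_CLEARED and patient.get("creatinine_clearance", 100) < 30:
        alerts.append(f"Consider dose reduction of {drug}: CrCl < 30 mL/min.")
    return alerts

patient = {"allergies": ["codeine"], "creatinine_clearance": 25}
print(check_order(patient, "codeine"))  # ['This patient is allergic to codeine.']
```

Note that a generic reminder ("Did you obtain an allergy history?") needs no patient data at all; what makes this decision support is the match between the order and this patient's record.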

Close Call

An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). Such events have also been termed near miss incidents.

Competency

Having the necessary knowledge or technical skill to perform a given procedure within the bounds of success and failure rates deemed compatible with acceptable care. The medical education literature often refers to core competencies, which include not just technical skills with respect to procedures or medical knowledge, but also competencies with respect to communicating with patients, collaborating with other members of the health care team, and acting as a manager or agent for change in the health system.

Complexity Theory

Provides an approach to understanding the behavior of systems that exhibit non-linear dynamics, or the ways in which some adaptive systems produce novel behavior not expected from the properties of their individual components. Such behaviors emerge as a result of interactions between agents at a local level in the complex system and between the system and its environment.

Complexity theory differs importantly from systems thinking in its emphasis on the interaction between local systems and their environment (such as the larger system in which a given hospital or clinic operates). It is often tempting to ignore the larger environment as unchangeable and therefore outside the scope of quality improvement or patient safety activities. According to complexity theory, however, behavior within a hospital or clinic (e.g., non-compliance with a national practice guideline) can often be understood only by identifying interactions between local attributes and environmental factors.

Confirmation Bias

The tendency to focus on evidence that supports a working hypothesis, such as a diagnosis in clinical medicine, rather than to look for evidence that refutes it or provides greater support to an alternative diagnosis. Suppose that a 65-year-old man with a past history of angina presents to the emergency department with acute onset of shortness of breath. The physician immediately considers the possibility of cardiac ischemia, so he asks the patient if he has experienced any chest pain. The patient replies affirmatively. Because the physician perceives this answer as confirming his working diagnosis, he does not ask if the chest pain was pleuritic in nature, which would decrease the likelihood of an acute coronary syndrome and increase the likelihood of pulmonary embolism (a reasonable alternative diagnosis for acute shortness of breath accompanied by chest pain). The physician then orders an EKG and cardiac troponin. The EKG shows nonspecific ST changes and the troponin returns slightly elevated.

Of course, ordering an EKG and testing cardiac enzymes is appropriate in the work-up of acute shortness of breath, especially when it is accompanied by chest pain and in a patient with known angina. The problem is that these tests may be misleading, since positive results are consistent not only with acute coronary syndrome but also with pulmonary embolism. To avoid confirmation bias in this case, the physician might have obtained an arterial blood gas or a D-dimer level. Abnormal results for either of these tests would be relatively unlikely to occur in a patient with an acute coronary syndrome (unless complicated by pulmonary edema), but likely to occur with pulmonary embolism. These results could be followed up by more direct testing for pulmonary embolism (e.g., with a helical CT scan of the chest), whereas normal results would allow the clinician to proceed with greater confidence down the road of investigating and managing cardiac ischemia.
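The value of seeking potentially disconfirming evidence can be made concrete with a likelihood-ratio calculation. The pre-test probability and likelihood ratios below are invented for illustration only; they are not clinical estimates from the literature.

```python
# Illustrative Bayesian update using likelihood ratios (LRs).
# All numbers are hypothetical, chosen only to show the mechanics.

def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

def post_test_prob(pre_test_prob, likelihood_ratio):
    """Update a pre-test probability given a test result's LR."""
    return odds_to_prob(prob_to_odds(pre_test_prob) * likelihood_ratio)

# Suppose (hypothetically) a 20% pre-test probability of pulmonary embolism.
pre = 0.20

# A normal D-dimer argues strongly against PE (assumed LR ~ 0.1),
# while an abnormal one raises the probability modestly (assumed LR ~ 1.7).
post_normal = post_test_prob(pre, 0.1)    # ~0.024
post_abnormal = post_test_prob(pre, 1.7)  # ~0.298
```

The asymmetry is the point: a normal result would sharply lower the probability of pulmonary embolism and let the clinician pursue cardiac ischemia with more confidence, which is exactly the disconfirming information the biased work-up never sought.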

This vignette was presented as if information were sought in sequence. In many cases, especially in acute care medicine, clinicians have the results of numerous tests in hand when they first meet a patient. The results of these tests often do not all suggest the same diagnosis. The appeal of accentuating confirmatory test results and ignoring nonconfirmatory ones is that it minimizes cognitive dissonance.

A related cognitive trap that may accompany confirmation bias and compound the possibility of error is "anchoring bias"—the tendency to stick with one's first impressions, even in the face of significant disconfirming evidence.

Crew Resource Management (CRM)

Crew resource management (CRM), also called crisis resource management in some contexts (e.g., anesthesia), encompasses a range of approaches to training groups to function as teams, rather than as collections of individuals. Originally developed in aviation, CRM emphasizes the role of human factors—the effects of fatigue, expected or predictable perceptual errors (such as misreading monitors or mishearing instructions), as well as the impact of different management styles and organizational cultures in high-stress, high-risk environments. CRM training develops communication skills, fosters a more cohesive environment among team members, and creates an atmosphere in which junior personnel will feel free to speak up when they think that something is amiss. Some CRM programs emphasize education on the settings in which errors occur and the aspects of team decision-making conducive to "trapping" errors before they cause harm. Other programs may provide more hands-on training involving simulated crisis scenarios followed by debriefing sessions in which participants assess their own and others' behavior.

Critical Incident

A term made famous by a classic human factors study by Cooper of "anesthetic mishaps," though the term had first been coined in the 1950s. Cooper and colleagues brought the technique of critical incident analysis to a wide audience in health care but followed the definition of the originator of the technique. They defined critical incidents as occurrences that are "significant or pivotal, in either a desirable or an undesirable way," though Cooper and colleagues (and most others since) chose to focus on incidents that had potentially undesirable consequences. This concept is best understood in the context of the type of investigation that follows, which is very much in the style of root cause analysis. Thus, significant or pivotal means that there was significant potential for harm (or actual harm), but also that the event has the potential to reveal important hazards in the organization. In many ways, it is in the spirit of the expression used in quality improvement circles, "every defect is a treasure." In other words, these incidents, whether near misses or disasters in which significant harm occurred, provide valuable opportunities to learn about individual and organizational factors that can be remedied to prevent similar incidents in the future.