Using Human Factors Research to Improve Support for Clinical Work

In software design, things that seem obvious often are not. This is especially true when paper charts are used as design hints. Too many “uses” go unnoticed because they are done unconsciously. Reading the recent NIST report, Integrating Electronic Health Records into Clinical Workflow: An Application of Human Factors Modeling Methods to Ambulatory Care, one comes to appreciate how much goes unnoted and undocumented. For example, as alluded to by the authors, the number of paper charts in a pile and the thickness of each chart give an immediate clue to the amount of work left undone, a cue that cannot be gleaned from a simple integer reporting how many patients are waiting.

The paper chart has been the dominant design metaphor for clinical systems for quite some time. With this approach, the primary goal in system design is replication of common chart functions in electronic form. Such an approach will no doubt result in an electronic chart. However, it is much less likely to result in a system that intimately supports clinical work. In the executive summary, the authors of the report note the problems clinicians encounter when using EHR systems during patient care.

In response to workflow integration challenges with EHRs, clinicians often develop workarounds to complete clinical tasks in ways other than those intended by system designers. A frequent workaround, for example, is copying and pasting text from a patient's previous progress note to serve as a draft for the current one. In this report, two human factors workflow modeling tools, process mapping and goal-means decomposition, were used to collect, visualize, and document insights and end-user needs to improve EHR workflow for clinicians in outpatient care settings.

Designing software systems requires intimate knowledge of what users do, the data they create and consume, and the people or things they interact with. Capturing such complex information in an intelligible form is difficult and time-consuming. Of the 16 human factors modeling tools the authors list, they settled on two: process mapping and goal-means decomposition. Applying these tools in combination with clinician discussions, the authors created a flowchart of a return visit to an outpatient clinic. While I would have preferred that workflow patterns be used to illustrate processes, the report does a good job of getting at the fine points of clinical work.

Though it may seem odd given the amount of discussion focused on EHR adoption, workflow, and meaningful use, very little has been done to document clinical work. For example, how many steps are there in writing a new prescription? What is the minimum amount of information required? The main reason the details of clinical work have not been studied is that they were never deemed useful with paper charts. Writing a prescription requires a pen and pad, but the rest is up to the individual provider to determine how to be maximally efficient. Render this in software and all of those individual adaptations are disrupted, and the workarounds mentioned in the report begin to appear.
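The question of a prescription's minimum information can be made concrete. As a rough illustration only, here is a minimal data model for a new outpatient prescription; the field names and the completeness rule are my assumptions, not anything specified in the NIST report:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal data model for a new outpatient prescription.
# Field names are illustrative, not drawn from the report.
@dataclass
class Prescription:
    patient_id: str
    prescriber_id: str
    drug_name: str          # ideally a coded term rather than free text
    strength: str           # e.g. "500 mg"
    sig: str                # directions, e.g. "1 tablet by mouth twice daily"
    quantity: int
    refills: int = 0
    written_on: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        """All required fields must be non-empty before the script can be sent."""
        required = [self.patient_id, self.prescriber_id, self.drug_name,
                    self.strength, self.sig]
        return all(required) and self.quantity > 0
```

Even this toy version shows why the step count matters: each required field is a data-entry step that paper left to the provider's own shorthand.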

In the report, visit workflow is broken up into five components: before the patient visit, during the visit, the physician encounter, discharge, and documentation. For each component, an analysis is offered of typical clinical work tasks and potential EHR functionalities that might support those tasks. Taken as a whole, this is one of the best documents that I have encountered in terms of how well it relates real clinical tasks directly to EHR functions.

Here are the physician encounter tasks from the report.

Get history, signs and symptoms, review of systems

Make working or presumptive diagnosis

This is a fairly standard task list, and one that would be familiar to any clinician. During EHR selection and implementation, this is the type of list that is needed to match EHR features to tasks. However, this is rarely done at the level of the individual provider, where the disruption would be most evident. In large practices or hospitals, such individual assessments are impractical. From a software design perspective, this is exactly the type of information required, and it is usually difficult to obtain.

Here are the potential EHR features that clinicians considered might be useful in improving work completion during the physician encounter.

Supporting established diagnosis-based workflow

Supporting moving from working diagnoses to formal diagnoses

Supporting reviewing changes to medications

Notice how each of these proposed functions helps with the everyday work of patient care. Diagnosis-based workflow support could help in managing downstream work. For example, once a diagnosis is made, all typical history, labs, procedures, etc. could be queued for approval. This would act as a great reminder system as well as a safety/quality measure. In order to do so, however, it would have to be under the control of the individual provider.
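To sketch what diagnosis-based workflow support could look like in code: once a diagnosis is entered, its typical orders are queued as drafts awaiting explicit provider approval. The diagnosis-to-orders table here is hypothetical and, per the point above, would need to be editable by the individual provider; none of this comes from the report itself.

```python
# Sketch: a provider-editable mapping from diagnosis to typical orders.
# The diagnoses and orders shown are illustrative placeholders.
TYPICAL_ORDERS = {
    "type 2 diabetes": ["HbA1c", "basic metabolic panel", "foot exam"],
    "hypertension": ["basic metabolic panel", "ECG", "urinalysis"],
}

def queue_orders(diagnosis: str) -> list[dict]:
    """Return draft orders that wait for explicit provider approval."""
    return [{"order": o, "status": "pending approval"}
            for o in TYPICAL_ORDERS.get(diagnosis.lower(), [])]

drafts = queue_orders("Hypertension")
# Each draft stays "pending approval" until the provider acts on it,
# so the queue reminds without forcing.
```

Keeping every queued item in a pending state is what makes this a reminder and safety net rather than automation imposed on the clinician.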

The problem list is a central component in caring for patients. In most systems, it is simply a list with a few standard capabilities (e.g., sorting), not something designed to help clinicians manage the care process. The report mentions the frustration clinicians had with having to enter a detailed diagnosis even though the workup was still in progress. The cognitive work involved in managing problems and diagnoses is one of the main, if not the core, aspects of being a clinician. Everything else flows from here. The next question asked, the focus of the exam, the labs ordered, and the medications prescribed are all keyed on the known or presumed diagnosis. There are no paper chart features that match these cognitive needs beyond the lists themselves (the same holds true for medication reviews).

The contrast between cognitive needs and paper chart features rendered in electronic form highlights in the clearest manner possible the differences between designs that support clinical work (clinical care systems) and those that focus on maintaining a patient data repository (electronic health records). I have no idea whether or not the authors intended to draw attention to this divide, but they have.
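The working-versus-formal distinction is easy to make concrete. Here is a minimal sketch, entirely my own rather than anything proposed in the report, of a problem-list entry that tolerates a vague working diagnosis and lets it be refined later, instead of forcing a detailed code up front:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of a problem-list entry that tolerates an in-progress workup.
# Allowing status "working" with no code avoids the frustration the report
# describes: being forced to enter a detailed diagnosis prematurely.
@dataclass
class Problem:
    description: str                 # free text, e.g. "chest pain, etiology unclear"
    status: str = "working"          # "working" or "formal"
    icd_code: Optional[str] = None   # required only once the diagnosis is formal

    def formalize(self, icd_code: str, description: Optional[str] = None) -> None:
        """Promote a working diagnosis to a formal, coded one."""
        self.icd_code = icd_code
        if description:
            self.description = description
        self.status = "formal"
```

A problem list built on entries like this could drive the downstream work the paragraph describes, since the next question, exam focus, and orders all key off whichever diagnosis, working or formal, is current.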

Given the quality of the information contained in the report, I was surprised by the “meh-ness” of the recommendations offered to EHR developers. Here are the targeted recommendations.

For EHR developers, we recommend the following to improve EHR-related workflow during the patient visit:

Increase efficiency for these tasks:

Reviewing results with the patient

Drafting pre-populated orders to be formally executed later

Supporting drafting documentation with shorthand notations without a keyboard.

Design for empathetic body positioning and eye contact with the patient while personally interacting with the EHR and while sharing information on the EHR screen with the patient and family members

Support dropping tasks and delaying completion of tasks to help with daily flow

Verification of alarms and alerts and data entry without “hard stops.”

While I am sure new functionality that helps with the above would be useful, the interesting findings that could lead to true innovations are not mentioned. Where are the cognitive support features for clinicians?

In any case, this report is a good reference work for software developers trying to understand what clinicians need in clinical care software systems. My suggestion: read the report thoroughly, and skip the targeted recommendations.