The Butler Model

There’s an iconic image of the Butler, preferably British of course, who maintains a highly respectful manner, and is clearly focused on providing service.

A friend, Dr Scott Finley, started kicking around an idea a few years back. What if clinical decision support delivery was modeled after that butler?

Instead of starting with the premise of looking for opportunities to offer alerts and warnings, tied tightly to specific applications and points in workflow, a butler would function as a highly skilled and very quiet companion. He would be able to control if, when and how he interrupted the master, with a degree of sophistication. He would recognize ways to help, offer to follow up on tasks for the master, and recognize the master’s unique needs.

Earlier this week, Scott wrote:

I found an excellent butler-related quote from Anthony Hopkins regarding his performance in Remains of the Day. He may have gotten the idea from interviewing a real-life butler:

"You begin to feel like a butler, and things begin to happen. The tone of your voice drops until it is just above a whisper. You're tremendously polite, tremendously courteous. You listen and you become invisible. You make a room even emptier by being there. You don't fiddle about or elaborate or try to act."

I particularly like the idea of making the room emptier by being there. That’s how the darn computer should behave.

Computer as Actor

The one concept that may help people understand better is that of "computer as actor." In other words, the reason for the butler model at all is that the computer is implicitly serving as a player in the system. It advises, listens, criticizes, etc., but its role is generally unplanned, unmanaged, and (especially, as realized) inappropriate in the clinical context. So it acts like its creator, generally a young programmer. Does "someone" acting that way really belong in the exam room?

I think Scott has a really neat idea here. We know that the current model, full of reflexive, knee-jerk alerts and reminders, doesn't work. We know that most software doesn't recognize higher-order tasks very well. So it interrupts the user as soon as a concern arises, whether or not the concern is qualified, and whether or not it's considered relevant. Generally in the same loud tone. And always with an inadequate recognition of context.

What would it look like if the GPS in our cars behaved like a butler? Well, for one, if the GPS said turn left and it heard your spouse say, “no, turn right,” the butler-based GPS would surely quietly revise its recommendation and say “You’d best turn right, master!”

What do you think of the Butler model?

Note: The Butler photos leading this blog post come from the Disney movie, The Parent Trap - here are two more flavor savers:

It sounds like the way to go. But isn't the real problem that we currently don't have such a level of sophistication in our computer programming capabilities? This sounds like a 2010 (not far off now) vision. I can almost hear Hal.

"I don't think you want to prescribe that Joe." "I do Hal. It will be fine." "I don't think so Joe."

Joe, thanks for this thoughtful explication of the butler model. It sounds like folks may be being too literal. The model can be useful, I think, but the darn thing is still a computer and won't fix your drink for you. The key is realizing the machine is an actor with important limitations and responsibilities. If we program it to take the tone of "the boss" then we've screwed up. The tone of "the butler" is much closer to the correct social role. Ignoring the tone is an abdication of a significant responsibility: what have you let loose in the exam room?

I agree entirely that the user's (including patient's) experience of HIT needs to be as much as possible like the experience of having the perfect clerk, who knows where the information is, knows what you will need next, and gives it to you at precisely the right moment in actionable form.

In other words, once an organization has agreed on how it will address frequently occurring clinical scenarios, the care processes and every relevant HIT tool will be designed to anticipate clinicians' information needs and remind them of their previously agreed intentions.

This means that one indicator of the quality of our processes and tools will be the rarity with which warnings are needed at all. Every alert represents a residual inadequacy of our processes or tools. (Aiming to make HIT the perfect clerk can also remind us that HIT is not, and will not be, as intelligent as the butler, who in many cases is superior to his master.)

One key to creating decision support that behaves like the perfect clerk is the design of care processes that are patient-focused, evidence-based, and efficient. Once such processes are agreed on and clearly mapped, the design of HIT tools (and functions) that will support them seamlessly becomes much more manageable. I say more about this in a Health Affairs article due to appear March 10 (2009).

As far as actionable knowledge, it may be useful to talk about it slightly more operationally, as: (1) Standards (things you should do, or document the reason why you didn't), (2) Options (things it may be appropriate to do depending on the circumstance), and (3) Non-Options (things that contradict organizational policy).
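The three buckets above can be sketched as a simple classification. This is a minimal illustration with invented rule names, not any real decision-support engine:

```python
from enum import Enum

class KnowledgeTier(Enum):
    STANDARD = "do it, or document the reason why you didn't"
    OPTION = "may be appropriate, depending on the circumstance"
    NON_OPTION = "contradicts organizational policy"

# Hypothetical rule catalog illustrating the three tiers.
rules = {
    "beta-blocker after MI": KnowledgeTier.STANDARD,
    "extended anticoagulation": KnowledgeTier.OPTION,
    "aspirin for a febrile child": KnowledgeTier.NON_OPTION,
}

def requires_documentation(rule: str) -> bool:
    """A skipped Standard must carry a documented reason; the
    other two tiers carry no such obligation."""
    return rules[rule] is KnowledgeTier.STANDARD
```

The point of the enum is that each tier implies a different system behavior (document-if-skipped, offer quietly, never offer), rather than one generic alert.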

I like the way you've evolved the "should do, shouldn't do, and 'it is complicated'" framework into Standards, Options, and Non-Options. Words do matter, and those are succinct, clear, and respectful. My Butler would approve, ... probably, ... if I had one!

I like the persona of a perfect clerk. It should make it clear to a developer that getting requirements from several great clerks is essential. I suspect that's not in many developers' project plans.

I don't agree that the Butler needs to be super intelligent, just knowledgeable about the processes they're accountable for 'clerking.' I am assuming that that behavior could be 'computerized' through some combination of deterministic and probabilistic algorithms. Here is where the dramatic portrayal of a Butler (say, Mr. French from Family Affair) is distracting and defocusing. We're already running many of those algorithms today, with duplicate checking and drug checking (dose, allergy, route, drug-drug, etc.). The major new behavior of the Butler, as I understood Scott, was deliberate civility. As any human manager or executive knows, attitude and style are extremely important. (I'm using Peter Drucker's distinction between managers and executives, i.e., doing things right versus doing the right things. Either way, civility is more than a 'nice to have.')
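The deterministic side of that combination can be as plain as a table lookup. A toy sketch, with a made-up drug name and illustrative numbers rather than clinical data:

```python
# Toy dose-range table (illustrative numbers only, not clinical guidance).
DOSE_RANGE_MG = {
    "drug_x": (250, 1000),  # hypothetical min/max for a single dose
}

def dose_in_range(drug: str, dose_mg: float) -> bool:
    """Deterministic dose check: flag anything outside the
    table's acceptable range for that drug."""
    low, high = DOSE_RANGE_MG[drug]
    return low <= dose_mg <= high
```

The algorithm is trivial; what the Butler model changes is when and how its result reaches the clinician, not the check itself.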

We're all looking forward to your Health Affairs article.

As a side note, I've been working with or comparing notes with Jim for over a decade. Jim and I did a presentation for the Pennsylvania Medical Society in 1998, where Jim first introduced me to the Coumadin passive decision support model. I've described this in my HCI July 2008 article, "Homework First."

A new dimension has unfolded, one year after the original posting of this blog.

What if that butler was trained on, and aware of, the specific cognitive errors that each individual physician tended toward (e.g., confirmation bias, anchoring, or any of the other 28 common biases known to affect physicians)?

Thanks Anthony. (Your comment follows my response because I started writing my response before you posted.)

Truth is, people are turning off alerts today because they're equally 2001/HAL-like . . . strong-armed and dictatorial. The current state can be worse than creepy.

The computer needs to be considered an actor; occasionally the superior decision maker, but usually not. For instance, when the train is about to go off the tracks, or the water heater is about to explode, or the nuclear reactor is about to melt down, you need a decisive, non-negotiable decision maker. But in the vast majority of clinical decision support situations, you need a great Butler. As Scott points out, we don't have that today, and I agree. I probably mentioned this elsewhere, but Scott has deep architectural knowledge of commercial vendors, the VA system, and others. Objectively, he's an expert.

There are several factors that make better decision support practical, achievable, and observable when looking at the design of any clinical decision support system. Here, in two parts (Clarity and User Experience Design), is a simple framework to help you implement a more "Butleresque" experience with any vendor or homegrown product:

Clarity

This one comes from my friend, Dr. James Walker, CMIO at Geisinger. He's also edited a book on HCIT implementation.

Most actionable knowledge (the underlying artifacts being rules, flowsheets, workflows, etc.), can be framed in terms of (1) Things You Should Do, (2) Things You Shouldn't Do, and (3) "It's complicated."

Putting things into the correct one of those three buckets helps create clarity and priorities, and reduces unproductive conflict. Others, perhaps most famously Dr. Clem McDonald, have observed that guidelines often contain "weasel words" that obscure their interpretation. For example: "It's not uncommon that some subgroups of patients may benefit from beta blockers, at some point in their care." You get the idea.

User Experience Design

Not only is clarity important in framing decision support; so is the mechanism that delivers it. As Scott noted above, decision support is commonly delivered bluntly and harshly. When it's contextually inappropriate, it understandably provokes rage in busy clinicians. It's like getting advice and critique from an elementary school student. When that student has knowledge that you don't have, and that knowledge is important, there's definite utility, but . . .

The "Your Patient Has a Penicillin Allergy and You're About To Order Penicillin" alert is a classic example. In this scenario the clinician didn't know or remember that the patient had a life-threatening allergy, so decision support became extremely important.

At the same time, it's also important to stop the order process then and there. We could put a great big stop sign with a skull-and-crossbones in the middle of the screen, and demand that the clinician re-enter their user name and password (a really long, safe password), then move their mouse to each corner of the screen and double-click in the correct sequence. And to make sure we can log that they got the message, let's change which corner to visit first and last in each subsequent, similar situation, just to make sure we know the clinician is paying attention.

Obviously an extreme dramatization, I hope. But you can appreciate that this isn't too far from the experience a user with a busy workload may face in the real world. Taking this a step further, and more importantly, when you start to think like a Butler, you'll obviously come to a different design.

So, the user experience needs to tell the user, before they've selected Penicillin, that there is a reason not to go there. It also needs to be sophisticated enough to appreciate that the "master" may actually be a master. If that "allergy" is not relevant, based on information that the system doesn't have or has wrong, it's clinically more important that the system behave in a way that's Butleresque. This really requires that the User Experience Design include richer ways to provide decision support:

The New Butler Behaviors List

1. Task-blocking boxes are respected for allergies, incompatibility reactions, and other situations that can lead to instant death.

2. Pre-vetted choices in menus. Don't offer the Penicillin, only to slap the user's hands and scream, "Wrong!" This isn't a game. Of course it's perceived as disrespectful and insulting, because it is.

4. The ability to deliver information passively that is available to the clinician and/or patient, just-in-time and gently.

5. The ability to offer an intelligent dialogue to qualify or disqualify a context, prior to prescriptively barking at the user.
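Pre-vetting (list item two) amounts to filtering the order menu before it is drawn, rather than scolding after the fact. A minimal sketch, with an invented cross-reaction table and drug names chosen purely for illustration:

```python
# Hypothetical cross-reaction table (illustrative names, not clinical data).
CONTRAINDICATED_BY = {
    "penicillin": {"penicillin", "amoxicillin"},
}

def vetted_menu(formulary, allergies):
    """Butleresque pre-vetting: remove any choice that would only
    trigger a hard stop, before the menu is ever displayed."""
    blocked = set()
    for allergy in allergies:
        # Fall back to blocking the allergen itself if no table entry.
        blocked |= CONTRAINDICATED_BY.get(allergy, {allergy})
    return [drug for drug in formulary if drug not in blocked]

# The clinician never sees the offending drug, so there is
# nothing to slap their hands about later.
safe_choices = vetted_menu(
    ["penicillin", "azithromycin", "cephalexin"], {"penicillin"}
)
```

A real system would of course draw the allergy list and formulary from the patient record, and would pair this quiet filtering with a visible way for the "master" to override it when the recorded allergy is wrong.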

These are the baseline points from which we could efficiently and effectively move forward.

I know of available systems in productive use that exemplify each of the five basic design features. However, I know of no enterprise-class solution that delivers the Butleresque experience in a necessarily integrated fashion.

Anthony, let me wrap this up by returning to your question, "But isn't the real problem that we currently don't have such a level of sophistication in our computer programming capabilities?"

In the interest of clarity, no, that's not the real problem. The real problem is that we're no longer asking HCIT merely to operationalize lights-out behavior, like capturing a black-and-white order at location A and moving it to locations B, C, and D.

We are asking HCIT to be an actor in an increasingly complex process that includes knowledge clarity issues (again, thanks Jim Walker), ambiguity, and basic human issues (human factors, and human communication challenges including effective confrontation). If we agree on that, for the sake of argument, then the issue is not the sophistication of our computer programming capabilities. The real solution involves at least the five items on the New Butler Behaviors List.

Even more primitive and basic, as Scott opines, is to begin with a business model and personas that acknowledge the need for a Butler role. These will lead to using existing programming capabilities, such as applying cheap processing to pre-vet choices, as described in list item two. That is a better use of architecture, and a level of sophistication well within our capabilities.

I've received a few other personal emails about this post. The general nature of this feedback is The Butler Model is fun, but perhaps too whimsical. Thanks for that helpful input.

Have any of you readers out there heard of others articulating the need for a Butler persona?

A note for those who choose to post a comment: Some people write more slowly than the blog software permits! Therefore, it may time out without warning, and you can lose your text if you're not careful. Always copy your comment to your clipboard before hitting publish. Then, if the site times out, you can re-post your comment by pasting it into the comment box without having to re-write it from scratch. The other option is to write it first in a Word document, and then paste it into the comment box.

Anthony, I don't think a *good* butler would be creepy at all. I wouldn't assume any particular manner: obsequiousness, for example, would often offend. For many folks, a quiet confidence accompanied by a willingness to be corrected (and learn as much as possible from that correction) would probably be closer to the mark. Imagine the sophisticated version of a "Don't show me this again" checkbox that can learn the particular context for which the information was unwelcome. Even if we don't give the system a manner it will have one.

Joe, what I miss is the "Butlerian" actionable response: "Your coat, sir" or "Will you be needing the car, sir?" Both are polite queries that provoke a specific decision. If one chooses to take action, the guided query makes compliance or rejection simple and final. Far too often within the EMR, practitioners are faced with a multi-step process of clicks and windows to comply with alerts.

Anthony Hopkins is far better at acting the part than we are at programming it. I suspect the iconic butler has already become an anachronism. In The Razor's Edge, W. Somerset Maugham wrote, "American women expect to find in their husbands a perfection that English women only hope to find in their butlers." Maybe we too expect too much.