If one jumbo jet crashed in the US each day for a week, we’d expect the FAA to shut down the industry until the problem was figured out. But in our health care system, roughly 250 people die each day due to preventable error. A vice president at a health care quality company says that “If we could focus our efforts on just four key areas — failure to rescue, bed sores, postoperative sepsis, and postoperative pulmonary embolism — and reduce these incidents by just 20 percent, we could save 39,000 people from dying every year.” The aviation analogy has caught on in health care, as patient safety advocate Lucian Leape noted in his classic 1994 JAMA article, “Error in Medicine.” Leape observes that airlines have become far safer by adopting redundant system designs, standardized procedures, checklists, rigid and frequently reinforced certification and testing of pilots, and extensive reporting systems. Advocates like Leape and Peter Pronovost have pushed for the adoption of similar methods in health care for some time, and have scored some remarkable successes.

But the aviation model has its critics. The very thoughtful finance blogger Ashwin Parameswaran argues that, “by protecting system performance against single faults, redundancies allow the latent buildup of multiple faults.” While human expertise depends on an intuitive grasp, or mapping, of a situation, perhaps built up over decades of experience, technologized control systems privilege algorithms that are supposed to aggregate the best that has been thought and calculated. The technology is supposed to be the distilled essence of the insights of thousands, fixed in software. But the persons operating in the midst of it are denied the feedback that is a cornerstone of intuitive learning. Parameswaran offers several passages from James Reason’s book Human Error to document the resulting tension between our ability to accurately model systems and our intuitive understanding of them. Reason states:

[C]omplex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks.” In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.

A stark choice emerges. We can either double down on redundant, tech-driven systems, or we can try to restore smaller-scale scenarios where human judgment actually stands a chance of comprehending the situation. Either way, we will need to begin to recognize this regulatory apparatus as a “process of integrating human intelligence with artificial intelligence.” (For more on that front, the recent “We, Robot” conference at U. Miami is also of great interest.)

Another recent story emphasized the importance of filters in an era of information overload, and the need to develop better ways of processing complex information. Kerry Grens’s article “Data Diving” emphasizes that “what lies untapped beneath the surface of published clinical trial analyses could rock the world of independent review.”

[F]or the most part, [analysts] rely simply on publications in peer-reviewed journals. Such reviews are valuable to clinicians and health agencies for recommending treatment. But as several recent studies illustrate, they can be grossly limited and misleading. . . . [There is] an entire world of data that never sees the light of publication. “I have an evidence crisis,” [says Tom Jefferson of the Cochrane Collaboration]. “I’m not sure what to make of what I see in journals.” He offers an example: one publication of a Tamiflu trial was seven pages long. The corresponding clinical study report was 8,545 pages. . . .

Clinical study reports . . . are the most comprehensive descriptions of trials’ methodology and results . . . . They include details that might not make it into a published paper, such as the composition of the placebo used, the original protocol and any deviations from it, and descriptions of all the measures that were collected. But even clinical study reports include some level of synthesis. At the finest level of resolution are the raw, unabridged, patient-level data. Getting access to either set of results, outside of being trial sponsors or drug regulators, is a rarity. Robert Gibbons, the director of the Center for Health Statistics at the University of Chicago, had never seen a reanalysis of raw data by an independent team until a few years ago, when he himself was staring at the full results from Eli Lilly’s clinical trials of the blockbuster antidepressant Prozac.

As concerns about the reliability of publications continue to mount, there will be a growing imperative to open up all of the data.

Comments (1)

I’m stuck in the middle and need direction on an issue that legal services have not been able to either understand or directly help with, though I believe it is a simple yes/no situation.

1) EHR Software Vendor “ABC” is based in Colorado and Rhode Island
2) PCP Practice in California “XYZ” is a two-person office of one Family Med and one Internal Med PCP
3) EHR Implementation Vendor “Joe” is also based in California and works at the IPA where #2 is a member.
===========================
I) #1 & #2 have already initiated an implementation agreement, and #2 contacted #3 for help managing the project. I am #3.

II) #2 has requirements that #1 said it could fulfill, but #1 has NOT been timely about the process and has continued to bill full price for the implementation (which is at 80% right now; patients have been loaded and the system has been in use for over a year), plus #1 has improperly billed for setup activity that was not needed.

III) Therefore, #2 is tired of the runaround and has stopped payment to #1 until #1 can fulfill its commitment.

IV) #1 turned off the EHR, and now #2’s practice is in jeopardy regarding quality patient care and is “stuck.”

V) #3, Joe, needs to fulfill his commitment to #2 (which is now well past due, because #1 is so difficult to work with), and Joe needs clarification on the legality of #1 just “shutting off the system” and not letting us download the electronic patient records in some usable form: Excel, Word, PDF, MS Access, at least something usable.

VI) The EHR Software Vendor #1 says “too bad”: the practice is behind on payments, therefore the system stays off until they are current.
So… is it legal for #1 to refuse to provide #2 (and/or #3) a way to obtain the patient records and thus move to a more reputable and reliable EHR vendor?
I’ve asked various legal providers, and they are quite confused by the HIPAA laws regarding this and have wished me luck in my venture!