As media reports about shortages of ventilators and hospital beds show, the COVID-19 pandemic will most probably lead to rationing of care. In this opinion piece, Gregory P. Shea, Krzysztof “Kris” Laudanski and Cassie A. Solomon explore the likely impact of care rationing in the absence of the best possible information on decision quality, patients and care providers. They also consider the potential benefits of artificial intelligence (AI) in guiding decisions about how care can be rationed. Shea and Solomon are co-authors of Leading Successful Change, published by Wharton School Press. Laudanski is a faculty member at the University of Pennsylvania, focusing on anesthesiology and critical care.

Now how many steps behind are we? That is perhaps the most feared question for any leader in a crisis, and one that has proved an ongoing issue in the management of COVID-19. People in many quarters continue to labor mightily to catch up, and yet the question persists. Late to contain the virus and delayed in converting to mitigation, we have yet to embrace the next step — care rationing. Thinking through that step now could benefit us today, and could benefit anyone weighing artificial intelligence (AI), today or tomorrow.

Let us work some numbers on the back of an envelope. The estimated percentage of the population that the novel coronavirus is likely to infect has remained in the 40% to 70% range for several months. Let us be conservative and say 50%. That means the virus will infect some 165 million Americans. Of that total, data suggest that about 5% will need hospitalization, which adds up to about 8 million people. Data also suggest that about 2% of those infected will need an ICU bed and about 1% will need ventilator support. That means about 1.65 million people will require ventilators. The United States has about 200,000 ventilators, according to the Society of Critical Care Medicine.
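The back-of-the-envelope arithmetic is easy to reproduce. The sketch below simply encodes the article's rough estimates; the population figure and the rates are assumptions drawn from the text, not forecasts:

```python
# Napkin math from the article's rough estimates (assumptions, not forecasts).
US_POPULATION = 330_000_000      # approximate U.S. population
INFECTION_RATE = 0.50            # conservative end of the 40%-70% range
HOSPITALIZATION_RATE = 0.05      # share of the infected needing a hospital bed
VENTILATOR_RATE = 0.01           # share of the infected needing a ventilator
VENTILATORS_AVAILABLE = 200_000  # Society of Critical Care Medicine estimate

infected = US_POPULATION * INFECTION_RATE
hospitalized = infected * HOSPITALIZATION_RATE
need_ventilator = infected * VENTILATOR_RATE
shortfall = need_ventilator - VENTILATORS_AVAILABLE

print(f"Infected:        {infected:,.0f}")         # 165,000,000
print(f"Hospitalized:    {hospitalized:,.0f}")     # 8,250,000
print(f"Need ventilator: {need_ventilator:,.0f}")  # 1,650,000
print(f"Shortfall:       {shortfall:,.0f}")        # 1,450,000
```

Even if every one of the article's rates were off by a factor of two, the shortfall would remain on the order of hundreds of thousands of ventilators.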

Such a large mismatch means that only massive changes in the napkin math would matter. What’s more, these numbers mean that intensive care units, the places most likely to employ ventilators — which normally run close to capacity with the critically ill — could well find demand from newly arrived COVID-19 patients filling all beds — not just open beds, but all beds — every day. In other words, an ICU could disgorge all its patients at 8 a.m. and refill during the day by admitting only COVID-19 patients, leaving no room for a patient with a heart attack, stroke, acute sepsis or pulmonary embolism.

So far the virus seems to run its idiosyncratic course regardless of treatment, overpowering, at least for now, most interventions. Data from China suggest mortality rates for COVID-19 ventilator patients running as high as 86%. The numbers were not much different for patients receiving oxygen by other means (79%). Again, let’s be conservative and say the percentage is 80. Can we, given the likely pronounced and prolonged deficit in available ICU beds and ventilators, identify the 20% who will most likely benefit from being in an ICU breathing through a ventilator? Not yet. Do we therefore risk lives through the misallocation of resources? Absolutely. Can we accelerate our ability to decrease, perhaps daily, that risk? Probably. That brings us to the use of AI to try to catch up with this pandemic.


In recent days, several articles have covered the various ways AI might be deployed against the current pandemic: to forecast the spread of the virus, fight misinformation, scan existing drugs for any that can be repurposed, and speed the design of antiviral treatments and a vaccine. We see another critically important application of this technology — to augment physician decision making in the all-too-likely event of care rationing portrayed above.

Recent articles have also detailed the way that care for COVID-19 patients has had to be rationed in Italy, where the healthcare system has been overwhelmed by the need. Some three weeks ago, Italy had 2,502 cases of the virus. A week later, Italy had 10,149 cases — too many patients for each one to receive adequate care. The Italian College of Anesthesia, Resuscitation and Intensive Care (SIAARTI) published guidelines for the criteria that doctors should follow under these extraordinary circumstances. The document compares the choices Italian doctors make to the forms of wartime triage required in the field of “catastrophic medicine,” according to an opinion piece published in The New York Times.

Care cannot be provided to all patients who need it, so it becomes necessary to accept that “agonizing choices may be required to determine which patients get lifesaving treatments and which do not,” the article noted. Pause and consider the profundity of this statement, the courage to utter it and its jarring applicability to the U.S. (and elsewhere) today, especially since the U.S. now leads the world in confirmed coronavirus cases, according to The New York Times.

Critical Questions

Clinicians will face several questions as COVID-19 patients come looking for care. These questions qualify as only marginally medical when applied to the seriously ill. Supply and demand prompt them, not acuity of need. The supply and demand realities will occur at various points along a patient’s journey from the ER to the ICU. The questions include:

Who should be admitted to a hospital? Who should be turned away?

Who can be accommodated in the ICU? Who should be placed on ventilation support?

Who should be withdrawn from ventilation support to make a place for someone whose chances of survival are greater?

And then, depending on the answers to any of questions one to three,


Who should be provided only with palliative care?

Answering these questions will likely determine whether scarce resources are applied efficaciously or, restated, whether we squander them through ill-informed or even random distribution.

There’s something else at stake here. Taken together, these articles point both to the all-too-likely coming need to ration care and to the impact of that rationing on the providers doing it. We run the risk of damaging those providers for life even as we speak increasingly of our dependence on and gratitude to them. Let’s take a moment to convey that reality before offering a way both to lessen its likelihood and to enhance our ability to ration care.

View from an ER

Let’s begin with a fictitious but all-too-possible scene to lay out the way a reality of sickness and scarcity created by policy and system failure could play itself out in very personal and long-lasting fashion for care recipients and providers alike:

A bone-tired ER physician pauses amidst the near chaos to wash her hands for seemingly the thousandth time today … and to collect herself. At an epidemiological level, she knows that she staffs the front lines of a pandemic. At an individual level, she knows that she is performing battlefield triage. She chokes back a gasp. She did not sign up to make bed-allocation choices to the ICU based on her best estimate of likely survival rates. Where is the objective data? Who is reviewing it, converting it to information, and then updating care, let alone triage, protocols? Where is the protection against common decision-making biases? How is she supposed to function in these conditions, especially given her own exhaustion, anxiety and ever-narrowing cognitive abilities, propelled as she is by high-test caffeine and, perhaps soon, by the Ritalin stashed in her white coat?

The physical burnout does not faze her much. For better or worse, she had experience with that well before the pandemic. Endless preaching about work-life balance combined with resilience training had yielded some benefit. No, it is her anticipation of long shadows across the trail ahead that worries her — shadows born of repetitive, traumatic choices; shadows born of a mounting number of fate-making but only best-guessed rationing decisions; the substance of the memories, flashbacks and perhaps even PTSD that would reach out from those shadows, perhaps for the rest of her life.

Nothing theoretical here … these are her decisions. Did she do harm in holding that ICU bed, in not allotting it to someone who, perhaps in her ignorance, she believed would die regardless? How likely was it that this person would die regardless of care received? Should she factor the possibility of a miracle into her triage? Was a 95% likelihood good enough … or 85% or 75%? How should she factor in patient age, number of kids, race, gender, ethnicity or socioeconomic status? What about the person she denied a bed — a bed that she guessed, yes, guessed, would soon fill with a patient more likely to benefit from now-rationed ICU care? Was this choice the lesser of two evils, or were both options equally bad? Who will second-guess her, and to what effect?

Seemingly long ago in medical school, they had covered such scenarios, albeit in a somewhat otherworldly way. That was long before COVID-19. Today, yesterday, and for as far ahead as she can see, these questions are vividly and starkly hers. She owns them and they possess her. She knows that her answers will stay with her, perhaps for the remainder of her life and the lives of all whom they affect. She straightens her white lab coat as if she were straightening herself, smiles softly at a red-eyed nurse who wipes down her visor, and whispers words of support to a tech who mists her hazmat suit with disinfectant. She changes exam gloves as she enters the next ER bay, ‘Hi, I’m doctor….’


How likely is it that such a scene will unfold not once but regularly over the days ahead? How real and how deep is the struggle portrayed in the scene?

One of the co-authors of this article, Krzysztof (“Kris”) Laudanski, a critical care intensivist at the University of Pennsylvania, explains: “I decide to withdraw ventilation support in the ICU maybe once a week, always in consultation with my colleagues and, of course, with the family. I have time to think and to collaborate and to prepare. The family and I reach that decision together. I need time to guide them compassionately through the process of letting a loved one die in dignity and without haste. It takes time.”

Whatever a family’s values, “decisions like these mean we are allowing their loved one to die,” he continues. “But with COVID-19, we are looking at a situation where physicians will be asked to make this kind of decision in the ED and in the ICU at least hourly. We won’t have time for our usual careful and consensual process. The family may not even be available. The medical staff is not trained for this kind of decision making or to manage the price it will extract. The consequences for everyone will be devastating.”

What is to be done? In a pandemic, masses of data emerge rapidly: too much, too varied, and too fast for humans to process into information. AI can mine that data moment by moment for information, such as the impact of underlying medical conditions, age and frailty upon recovery, and generate a prognosis far more comprehensively and with greater precision than any exhausted, front-line physician. AI could also, potentially, sort through the effect of practice biases, such as what ventilation pressure physicians employ, a practice that varies, for example, by country. AI offers the prospect of improved and improving decision-making, not perfect decision-making, not at all.

AI and Human Judgment

Properly trained, an AI algorithm can augment physician judgment about when to offer or withdraw life-saving care. Human judgment will remain paramount, but it can be supported by the dispassionate, independent score-keeping capabilities of AI. AI is a logical extension of the risk-assessment tools used intermittently in medicine today. It can assess risk and illuminate a set of guidelines that supports clinicians as they decide who receives care and who does not, and it can become more accurate and “smart” as new data are added. AI cannot express compassion, but its potential impartiality may better allow us to apply ours. AI cannot hold our hand, but it may well direct us to whose hand to hold by telling us who can likely heal and who most likely cannot.
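To make the idea concrete, here is a minimal, purely illustrative sketch, not the authors' actual tool: a toy logistic-regression risk score, fit to synthetic patient records with made-up features (age, comorbidity count, frailty), that keeps updating as each new outcome arrives.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class PrognosisScore:
    """Toy online logistic-regression risk score.

    Features are illustrative only (age in decades, comorbidity count,
    frailty 0-1); weights start at zero and are nudged toward each new
    observed outcome, so the score keeps learning as data arrive.
    """

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        """Estimated probability of survival for feature vector x."""
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)

    def update(self, x, survived):
        """One stochastic-gradient step on a single observed outcome."""
        err = survived - self.predict(x)  # in (-1, 1)
        for i, xi in enumerate(x):
            self.w[i] += self.lr * err * xi
        self.b += self.lr * err

# Synthetic training data: younger, less frail patients survive more often.
random.seed(0)
model = PrognosisScore(n_features=3)
for _ in range(5000):
    age = random.uniform(2.0, 9.0)       # decades
    comorbidities = random.randint(0, 4)
    frailty = random.random()
    death_prob = sigmoid(0.12 * age + 0.15 * comorbidities + 0.8 * frailty - 1.5)
    survived = 1 if random.random() > death_prob else 0
    model.update([age, comorbidities, frailty], survived)

young = model.predict([3.0, 0, 0.1])  # 30-year-old, healthy
old = model.predict([8.5, 3, 0.9])    # 85-year-old, frail, 3 comorbidities
print(f"estimated survival — young, healthy: {young:.2f}; old, frail: {old:.2f}")
```

The point of the sketch is the shape of the loop — predict, observe the outcome, nudge the weights — which is what would let such a score improve daily as pandemic data accumulate.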

How quickly can we develop such AI tools, time being of the essence? Training an AI algorithm requires data — but not as much as one might think. We could draw on data from China (though we may not trust its applicability or veracity), and other countries are collecting data too. At Penn, Kris is developing an effective AI tool with a small data set, but training on a bigger data set would yield greater accuracy and less bias. The Veterans Administration database will soon hold enough patients to train this kind of AI algorithm; the National Health Service in England will undoubtedly soon have high-quality data too.


AI should not supplant the judgment of a human doctor. Even with an AI prognosis augmenting their capability, physicians will ultimately make the final choices. Humans will supervise the inputs that create the original learning algorithm, and they will check on what it is learning as it evolves. Humans will discern whether the AI decision tool is confusing artifact with finding. Ascribing sickness on February 1, 2020, to living in the city of Wuhan would be accurate but of precious little use.

Amidst a pandemic and shortages of medical resources, triage will occur. It is the cold, undeniably heartless consequence of supply, however stretched and pulled, meeting demand. AI can serve humans in ameliorating this hateful reality, but triage will occur. Care will be rationed. Only the question of how remains. Humans, meaning physicians, would still be the ones to tell a patient (and their family) that the patient will not, or will no longer, be afforded ventilation support or perhaps even hospitalization. But the physician could do so based on the most informed protocols possible, resting atop the most current data available, probed and analyzed in the most sophisticated manner possible. By looking backward at real data about who survives on ventilation support and who does not, we believe AI can be built on these facts and remain (relatively) free of the bias inherent in much human medical judgment.
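At its crudest, rationing informed by predicted benefit rather than arrival order reduces to ranking patients by an estimated probability of surviving with treatment and allocating the scarce units to the top of the list. The sketch below is hypothetical: the patient IDs and probabilities are invented, and a real protocol would weigh far more than a single score.

```python
def allocate_ventilators(patients, available):
    """Rank patients by predicted survival-with-ventilation and allocate
    the scarce units to those most likely to benefit.

    `patients` maps an anonymized ID to a model-estimated probability of
    survival if ventilated; the ranking is only as good as that model.
    """
    ranked = sorted(patients.items(), key=lambda kv: kv[1], reverse=True)
    treat = dict(ranked[:available])
    palliate = dict(ranked[available:])
    return treat, palliate

# Invented survival estimates for five patients and two free ventilators.
estimates = {"pt-01": 0.82, "pt-02": 0.15, "pt-03": 0.64,
             "pt-04": 0.08, "pt-05": 0.31}
treat, palliate = allocate_ventilators(estimates, available=2)
print("ventilate:", sorted(treat))      # ['pt-01', 'pt-03']
print("palliative:", sorted(palliate))  # ['pt-02', 'pt-04', 'pt-05']
```

Even in this toy form, the final step, communicating the decision, remains human, as the paragraph above insists.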


Employing an AI tool to aid physician decision-making, as a pandemic spits out not just the infected but also data, can mean both higher-quality decisions, however knee-buckling, and greater assurance for all involved regarding the quality of those unwanted decisions. Small solace? To be sure. But likely solace nonetheless while angst and pain bathe all involved — solace to the family that the decision was approached as skillfully as possible, to society that scarce, oh-so-dear resources were employed as effectively as possible, and to the physician that he or she can take greater surety in that judgment. The evolving algorithm should help afford increased emotional and psychological well-being for a struggling patient (and their family), a society already gnashing its collective teeth, and several generations of those who provide care and comfort to the sickest among us.

The idea of an algorithm helping humans both cognitively and emotionally to deal with a crisis may at this moment seem novel and perhaps outright unsettling. We humans like not just being in the loop but being the loop. With COVID-19, however, our loop just isn’t fast enough.

We “accept” AI in other facets of our lives, especially in business. It is time to put our pride aside and step into the future of the interface and interaction between humans and AI. We must move to develop this kind of clinical-support AI as soon as humanly possible. Otherwise, people will die as we misapply essential resources and scar our healthcare providers, particularly our physicians. Time’s a-wastin’.

Citing Knowledge@Wharton

“Triage in a Pandemic: Can AI Help Ration Access to Care?” Knowledge@Wharton. The Wharton School, University of Pennsylvania, 27 March 2020. <https://knowledge.wharton.upenn.edu/article/triage-in-a-pandemic-can-ai-help-ration-access-to-care/>

