Artificial intelligence is coming to medicine — don’t be afraid

Automation could replace one-third of U.S. jobs within 15 years. Oxford and Yale experts recently predicted that artificial intelligence could outperform humans in a variety of tasks by 2045, ranging from writing novels to performing surgery and driving vehicles. A little human rage would be a natural response to such unsettling news.

Artificial intelligence (AI) is bringing us to the precipice of an enormous societal shift. We are collectively worrying about what it will mean for people. As a doctor, I’m naturally drawn to thinking about AI’s impact on the practice of medicine. I’ve decided to welcome the coming revolution, believing that it offers a wonderful opportunity for increases in productivity that will transform health care to benefit everyone.

Groundbreaking AI models have bested humans in complex reasoning games, like the recent victory of Google’s AlphaGo AI over the human Go champ. What does that mean for medicine?


To date, most AI solutions have solved minor human issues — playing a game or helping order a box of detergent. The innovations need to matter more. The true breakthroughs and potential of AI lie in real advancements in human productivity. A McKinsey Global Institute report suggests that AI is helping us approach an unparalleled expansion in productivity that will yield five times the increase introduced by the steam engine and about 1.5 times the improvements we’ve seen from robotics and computers combined. We simply don’t have a mental model to comprehend the potential of AI.

Across all industries, an estimated 60 percent of jobs will have 30 percent of their activities automated; about 5 percent of jobs will be 100 percent automated.

What this means for health care is murky right now. Does that 5 percent include doctors? After all, medicine is a series of data points of a knowable nature with clear treatment pathways that could be automated. That premise, though, fantastically overstates the capabilities of AI and dangerously oversimplifies the complexity underpinning what physicians do. Realistically, AI will perform many discrete tasks better than humans can, which will in turn free physicians to focus on higher-order tasks.

If you break down the patient-physician interaction, its complexity is immediately obvious. Requirements include empathy, information management, application of expertise in a given context, negotiation with multiple stakeholders, and unpredictable physical response (think of surgery), often with a life on the line. These are not AI-applicable functions.

I mentioned AlphaGo AI beating human experts at the game. The feat was so impressive because of the high branching factor and complexity of the Go game tree: an estimated 250 choices per move, yielding roughly 10 to the 170th possible board positions. By comparison, chess has a branching factor of 35, with roughly 10 to the 47th possible positions. Medicine, with its effectively infinite number of “moves” and outcomes, is decades away from being safely managed by machines alone.
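To see how quickly branching compounds, here is a back-of-envelope sketch. The game depths (roughly 150 moves for Go, 80 for chess) are my own illustrative assumptions, not figures from the article, and this counts move sequences rather than the board positions cited above, so the totals come out even larger:

```python
import math

def log10_tree_size(branching_factor: float, depth: int) -> float:
    """log10 of the number of distinct move sequences of a given depth,
    assuming a constant branching factor (a crude simplification)."""
    return depth * math.log10(branching_factor)

# Assumed typical game lengths: ~150 moves for Go, ~80 for chess.
go_log = log10_tree_size(250, 150)
chess_log = log10_tree_size(35, 80)
print(f"Go:    ~10^{go_log:.0f} move sequences")
print(f"Chess: ~10^{chess_log:.0f} move sequences")
```

Even under these rough assumptions, the gap between the two games spans hundreds of orders of magnitude, which is why Go resisted machines for so long.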

We still need the human factor.

That said, more than 20 percent of a physician’s time is now spent entering data. Since doctors are increasingly overburdened with clerical tasks like electronic health record entry, prior authorizations, and claims management, they have less time to practice medicine, do research, master new technology, and improve their skills. We need a radical enhancement in productivity just to sustain our current health standards, much less move forward. Thoughtfully combining human expertise and automated functionality creates an “augmented” physician model that scales and advances the expertise of the doctor.

Physicians would rather practice at the top of their license and handle complex patient interactions than waste time entering data, faxing (yes, faxing!) service authorizations, or tapping away behind a computer. The clerical burdens pushed onto physicians and other care providers by fickle health care systems are both unsustainable and a waste of our best and brightest minds. It’s the equivalent of asking an airline pilot to manage the ticket counter, count the passengers, handle the standby and upgrade lists, and give the safety demonstrations — then fly the plane. AI can help with such support functions.

But to radically advance health care productivity, physicians must work alongside innovators to atomize the tasks of their work. Understanding where they can let go to unlock time is essential, as is collaborating with technologists to guide truly useful development.

Perhaps it makes sense to start with automated interpretation of basic labs, dose adjustments for given medications, speech-to-text tools that simplify transcription or document face-to-face interactions, or even automated wound closure, and then move on from there.

It will be important for physicians — and patients — to engage and help define the evolution of automation in medicine in order to protect patient care. And physicians must be open to how new roles for them can be created by rapidly advancing technology.

If it all sounds a bit dreamy, I offer an instructive footnote about experimentation with AlphaGo AI. The recent game summit proving AlphaGo’s prowess also demonstrated that human talent increases significantly when paired with AI. This hybrid model of humans and machines working together presents a scalable automation paradigm for medicine, one that creates new tasks and roles for essential medical and technology professionals, increasing the capabilities of the entire field as we move forward.

Physicians should embrace this opportunity rather than fear it. It’s time to rage with the machine.

Jack Stockert, M.D., is a managing director and leader of strategy and business development at Health2047, a Silicon Valley-based innovation company.
