AI is Improving Healthcare... But Will Benefits Be Widely Shared?

The study identified eight domains where AI is already having, or is projected to have, the greatest impact. In this series, #AskAboutAI, we’re working our way through these categories, attempting to identify questions that teachers and parents should be discussing with young people.

Why? We think AI and machine learning (the practical subset of algorithms getting smarter as they munch their way through terabytes of our data) will have a significant impact on the lives and livelihoods of young people--probably more than any other factor. Some of those impacts will be wonderful, others potentially dangerous.

The rise of AI also has significant implications for what grads should know and be able to do and the kinds of learning experiences most likely to produce those outcomes. Big changes to the employment landscape and learning--two good reasons why it’s time to ask about AI.

Last week we considered safety and security (particularly the ethics and economics of recognition). This week we consider how AI is promoting health and changing healthcare.

Good News: AI Will Extend Healthy Lives

Let’s start with the good news. Stanford’s AI100 study found that “Recent successes, such as mining social media to infer possible health risks, machine learning to predict patients at risk, and robotics to support surgery, have expanded a sense of possibility for AI in healthcare.”

Here’s the rub: “AI-based applications could improve health outcomes and quality of life for millions of people in the coming years—but only if they gain the trust of doctors, nurses and patients, and if policy, regulatory and commercial obstacles are removed.”

It’s not just computer scientists who have taken notice. According to a recent survey of 122 founders, executives and investors in health-tech companies by Silicon Valley Bank, big data and artificial intelligence will have the greatest impact on the industry in the year ahead.

Whole-genome sequencing, high-resolution imaging technologies, automation and miniaturization have triggered an explosion in data production that will soon reach exabyte proportions.

All this new data--including physiological, behavioral, molecular, clinical, environmental exposure, medical imaging, disease management, medication prescription history, nutrition and exercise parameters--will be used to track the health of individuals and populations in considerably more detail than ever before.

Over the next fifteen years, the number of elderly in the United States will grow by over 50%. Their quality of life will be improved by automated transportation, smart nudges (e.g., “take your meds”), smart devices, mobile monitoring, hearing and vision aids, and assistive devices.

Stanford’s inaugural AI100 report noted five current challenges:

Outdated regulations and incentive structures that slow research and deployment;

Poor human-computer interaction methods;

The difficulty and risk of implementing technologies in large, complex systems;

Integrating the human dimensions of care with automated reasoning processes; and

The lack of widely accepted methods and standards for privacy protection.

It’s clear that, as in other industries, many tasks in healthcare will be augmented and some will be fully automated. This should reduce errors and improve care, but it will also cause some dislocation.

Automation also raises tough new legal issues: who is responsible when a self-driven car crashes or an intelligent medical device fails?

Big Data Has Big Implications

A June paper authored by 57 global scholars said, “One of the biggest bottlenecks and challenges is the availability of healthcare professionals and clinical researchers that are able to use the latest information technologies developed in the big data analytics era.”

The paper called for more interdisciplinary studies combining biology and medicine, engineering, and the social sciences. Projects fail more often because of an underappreciation of the complexities of ethical, legal, and social factors than for technological reasons.

It also recommended launching big data pilots to inform healthcare, including open and citizen science initiatives.

Our deep dive into big data suggests that all schools, not just med schools, should adopt integrated project-based learning and encourage students to struggle with the issues of our time. Armed with design thinking and facility with data analytics, students should develop quality public products and informed opinions. It won’t be what they memorize that matters; it will be their confidence in addressing novel, complex situations--the equivalent of swimming lessons for the choppy seas of the automation economy. Among the questions worth asking:

Research: how to ensure that minority populations are included in AI-aided research?

Diagnostics and healthcare delivery: services and outcomes are already unequally distributed; how will we avoid amplifying inequities with AI?

Patient relationships: smart systems will shift some care and monitoring directly to patients; will patients not well equipped to manage their own data fall through the cracks?

Economics: what happens when your insurer wants access to your Fitbit and your AI health agent’s report on your status before setting the price of your premium?

Ethics: How will AI challenge core values such as confidentiality, dignity, continuity of care, avoiding conflicts of interest, and informed consent? How will we contest AI-based clinical decisions or AI health services outcomes?

These questions can’t be limited to health policy classes in medical schools. We should grapple with them in chambers of commerce, mayoral task forces, high school social studies classrooms and in statewide elections. These are not future considerations, they are issues of life and death for us and our children. It’s a good time to #AskAboutAI.



The opinions expressed in Vander Ark on Innovation are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.