The use of Artificial Intelligence (AI) is on the rise in the technology sector and has become a buzz-worthy topic in many corners of our digital world. The application of AI in the medical field holds great promise for improving patient health, but will doctors and patients feel comfortable using it? Young startups have begun leveraging this technology to improve health outcomes, but there is still much to do before we'll see AI used pervasively in the clinic.

Current Landscape

To date, the sweet spot in healthcare AI has been pairing algorithms with large volumes of patient data and medical images to train machines to detect abnormalities. This training approach is called “deep learning.” Similarly, algorithms are being used to sift through vast amounts of medical literature to inform treatment decisions where reading the same journals would be too onerous a task for a human.

Companies like MedyMatch and Viz are doing just that: applying deep learning through proprietary algorithms to help physicians make faster diagnoses of strokes in emergency treatment situations. Their algorithms ingest patient CT scans and produce output that aids in the diagnosis of a stroke. Progress in this particular area is especially significant because receiving appropriate treatment quickly has a major impact on patient outcomes.

The annual Radiological Society of North America (RSNA) conference was held in Chicago at the end of November, and the topic of the week was overwhelmingly the use of AI in radiology and medical imaging. I heard firsthand accounts that most scientific speaking sessions involving AI were standing room only, and researchers presented many promising applications: caring for stroke patients, finding and classifying the risk of lung nodules, and identifying imaging cases that need priority review by a radiologist.

While this approach to AI in the clinical setting holds promise, there has been a recent backlash in the marketplace after IBM's Watson failed to live up to its considerable hype. Watson was to play a central role in establishing an oncology clinical decision support system at the MD Anderson Cancer Center, but the well-publicized breakup of the partnership with IBM has given some in the industry pause about the great promise of AI in the healthcare setting.

Facing the Challenges

Companies developing AI and machine learning are forging ahead with the understanding that they face uncertainty as they navigate the FDA clearance or approval pathway needed to commercialize these quickly changing technologies. Many of these technologies fall under FDA's clinical decision support software classification. There is new guidance for that classification, but a significant gray area remains in understanding how FDA is going to regulate AI offerings.

FDA has recognized that the existing commercialization paradigm quickly becomes too burdensome to sustain innovation at such a rapid pace. The agency has created the Digital Health Innovation Action Plan to address these concerns and establish a new regulatory pathway for these emerging technologies. Under this action plan, FDA is partnering with some of the world’s most innovative companies (Apple, J&J, Roche, Samsung, Verily) to create a new and tailored approach to regulating digital health technologies like AI. The likely outcome is a new way for FDA to collaborate with industry and ensure that the focus is on clearing and approving the highest-risk technologies.

Clinicians and regulators may find it difficult to trust a deep learning algorithm that doesn't share any information about how it arrived at a given diagnosis. This “black box” makes it difficult to provide transparency to regulators as well as to the physicians relying on the output. Where this black box exists, it will become ever more important that FDA is comfortable with the technology behind it as well as with the company producing it.
Because many AI technologies leverage the cloud and work with Protected Health Information (PHI), they are open to security concerns. Obligations that manufacturers must address, like HIPAA regulations and cybersecurity measures, come into play. Both HIPAA compliance and cybersecurity come at a cost to manufacturers and require dedicated staff to attend to the myriad challenges.

Ultimately, what is driving the growth and adoption of AI is the desire to do things better than we can today. The success of AI relies on the understanding that humans are imperfect and that computers offer a way to try to remove bias in healthcare. As software-based AI technologies and our reliance on them deepen, I believe there is valid hope being placed in these technologies beyond the hype.