Artificial Intelligence Pinpoints Nine Different Abnormalities in Head Scans

New algorithms could help emergency clinics identify serious head trauma cases faster

A brain scan (left) showing an intraparenchymal hemorrhage in the left frontal region and a scan (right) showing a subarachnoid hemorrhage in the left parietal region. Both conditions were accurately detected by the Qure.ai tool. Image courtesy of Nature Medicine.

January 7, 2019 — The rise in the use of computed tomography (CT) scans in U.S. emergency rooms has been a well-documented trend [1] in recent years [2]. At the same time, the diagnosis of life-threatening conditions from these head scans has risen only slightly in emergency rooms. One problem ER doctors face is separating serious cases of head trauma from less serious injuries.

A new study suggests that deep-learning algorithms could help automate the triage process for some of these head trauma cases, specifically for patients with brain injury who require immediate attention. The study [3], which appeared recently in The Lancet, found that deep-learning algorithms were able to accurately identify as many as nine different critical abnormalities in CT head scans.

The study is the latest in a slew of new research that uses artificial intelligence (AI) to analyze medical images. Eric Topol, a physician and executive vice president at Scripps Research who wasn’t involved in the research, said that this study represents a step forward because most previous reports of AI in medical imaging gave a yes-or-no answer for one type of abnormality, like a brain lesion. But the algorithms in this study were trained to parse multiple kinds of brain trauma.

“It’s one of the best radiology–AI efforts to date, because it widens the deep-learning interpretation task to urgent referral of many different types of head CT findings,” Topol said.

In the new study, funded by the Mumbai-based company Qure.ai, which seeks to use AI for radiology, scientists employed by the company and their collaborators collected more than 313,000 anonymized head CT scans from 20 hospital and outpatient radiology centers in India. They then used these scans to develop and train their algorithms. Next, they randomly selected 21,000 scans from this sample, representing more than 9,000 patients, to validate the algorithms.

The system was able to identify skull fractures and five different types of intracranial hemorrhage. It was also able to detect mass effect and midline shift, both used as indicators of brain injury severity. “These are critical results that need to be communicated to the doctor really fast,” said Sasank Chilamkurthy, the lead author of the study.

The study authors asked three senior radiologists to independently analyze the CT scans. They found that the reviewers agreed with the algorithms’ diagnoses 86–99 percent of the time, depending on the type of brain abnormality.

Chilamkurthy said one of the challenges of developing these types of algorithms is that a large volume of scans is needed in order to train an AI model and validate the findings. “You have to have a huge sample size because the abnormalities in the dataset are usually of low prevalence,” he said.

Ideally, Chilamkurthy said, the system could diagnose patients with head trauma faster so that patients in critical condition could be treated as soon as possible. The authors say that their automated system could also be useful in remote locations where a radiologist is not immediately available.

Chilamkurthy said Qure.ai is pursuing regulatory clearance through the U.S. Food and Drug Administration (FDA) for its automated system. Earlier this year, the company won approval in Europe to market its AI-based chest X-ray product that can evaluate 15 different abnormalities.

Eric Oermann, a neurosurgeon at Mount Sinai in New York who recently published similar research [4] on using AI to analyze CT head scans, said the biggest challenge of applying AI to medical scans is generalizability. “Images come from significantly different distributions between hospitals, and deep learning models easily over-fit to these local generators,” Oermann said. “Getting models that work everywhere is a notable and open problem.”

A clinical trial would be needed to determine whether Qure.ai’s triage system could improve radiologist efficiency and patient care. The real test for these algorithms will come in a prospective clinical setting, Topol said.
