Stanford University researchers have developed a deep learning algorithm that analyzes chest X-rays and can diagnose pneumonia better than expert radiologists.

While chest X-rays are currently the best available method for diagnosing pneumonia, interpreting them is challenging because the appearance of pneumonia is often vague, overlaps with other diagnoses, and can mimic benign abnormalities.

However, the algorithm, called CheXNet, outperformed four Stanford radiologists at diagnosing pneumonia in both sensitivity and specificity.

“The motivation behind this work is to have a deep learning model to aid in the interpretation task that could overcome the intrinsic limitations of human perception and bias, and reduce errors,” says Matthew Lungren, MD, an assistant professor of radiology at Stanford University School of Medicine and co-author of a paper about CheXNet published earlier this month.

Developed by the Stanford Machine Learning Group, CheXNet is a 121-layer convolutional neural network trained on the largest publicly available chest X-ray dataset.

In late September, the National Institutes of Health released to the scientific community a de-identified dataset of more than 100,000 images labeled with as many as 14 possible pathologies. By releasing the dataset, which contains scans from more than 30,000 patients, many with advanced lung disease, NIH hoped that researchers could teach computers to read and process very large numbers of scans, helping clinicians make better diagnostic decisions.

“The project really started with the release of the NIH frontal-view chest X-ray dataset,” says Jeremy Irvin, a graduate student in the Stanford Machine Learning Group and co-lead author of the paper, who notes that after a little more than a month of development, their algorithm outperformed expert radiologists at diagnosing pneumonia.

According to Irvin, CheXNet now has the highest reported performance of any model evaluated on NIH’s chest X-ray dataset so far. As he and his co-authors state in their paper, the algorithm “can automatically detect pneumonia from chest X-rays at a level exceeding practicing radiologists” and can also diagnose up to 14 types of medical conditions.
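Because a single X-ray can show several of the 14 conditions at once, this is a multi-label problem: the model emits an independent probability per pathology rather than picking one diagnosis. A minimal numpy sketch of that output stage, using the 14 label names from the NIH dataset; the logit values and the 0.5 threshold here are illustrative assumptions, not figures from the paper:

```python
import numpy as np

# The 14 pathology labels in the NIH chest X-ray dataset.
PATHOLOGIES = [
    "Atelectasis", "Cardiomegaly", "Effusion", "Infiltration",
    "Mass", "Nodule", "Pneumonia", "Pneumothorax", "Consolidation",
    "Edema", "Emphysema", "Fibrosis", "Pleural_Thickening", "Hernia",
]

# Hypothetical raw scores for one X-ray, as a 14-unit multi-label
# output head would emit them (made up for illustration).
logits = np.array([-2.1, -3.0, -1.2, -0.5, -2.8, -2.5, 1.4,
                   -3.3, -0.9, -2.0, -3.1, -2.9, -2.6, -4.0])

# Sigmoid, not softmax: each label gets its own independent
# probability, so several conditions can be flagged on one image.
probs = 1.0 / (1.0 + np.exp(-logits))

# Flag every pathology whose probability clears the threshold.
flagged = [name for name, p in zip(PATHOLOGIES, probs) if p > 0.5]
print(flagged)  # with these made-up logits, only "Pneumonia" clears 0.5
```

The sigmoid-per-label choice is what lets one forward pass report on all 14 conditions at once instead of forcing them to compete for a single answer.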

In addition to the algorithm, Stanford researchers have developed a computer-based tool that produces what Lungren describes as a sort of “heat map” of the chest X-rays, with color-coded areas indicating where pneumonia is most likely to appear in the images. He contends that the tool could reduce the number of missed pneumonia cases and significantly accelerate radiologist workflow by showing radiologists where to look first, leading to faster diagnoses.
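Heat maps like the one Lungren describes are commonly produced with class activation mapping: the network’s final convolutional feature maps are combined using the output weights for the pneumonia class, giving a coarse grid of per-region scores that is normalized and overlaid on the X-ray. A rough numpy sketch of that weighted-sum step, with made-up feature-map sizes and random stand-in weights (real networks use something like 1024 maps over a 7×7 grid):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the final conv layer's output: C feature maps over a
# coarse H x W spatial grid.
C, H, W = 4, 7, 7
feature_maps = rng.random((C, H, W))

# Stand-in for the classifier weights tying each feature map to the
# pneumonia output unit.
pneumonia_weights = rng.random(C)

# Class activation map: weighted sum of the feature maps over channels.
cam = np.tensordot(pneumonia_weights, feature_maps, axes=1)  # shape (H, W)

# Normalize to [0, 1] so it can be rendered as a color overlay
# (in practice it is also upsampled to the X-ray's resolution).
cam = (cam - cam.min()) / (cam.max() - cam.min())

# The brightest cell marks the region that contributed most to the
# pneumonia score, i.e. where the radiologist should look first.
hotspot = np.unravel_index(cam.argmax(), cam.shape)
print(cam.shape, hotspot)
```

Because the map is just a reweighting of activations the network already computed, it adds almost no cost on top of the diagnosis itself.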

Lungren says the goal ultimately is to implement the algorithm in the clinical environment to see how it impacts performance.

“This was kind of a planting of a flag in the ground to show the ability of the model to target one of 14 pathologies that were in the dataset,” he concludes. “More broadly, we believe that a deep learning model for this purpose could improve healthcare delivery across a wide range of settings.”

“We plan to continue building and improving upon medical algorithms that can automatically detect abnormalities and we hope to make high-quality, anonymized medical datasets publicly available for others to work on similar problems,” adds Irvin.