Artificial intelligence (AI) algorithms perform significantly better when they include a radiologist’s opinion, according to a new study published in the Journal of the American College of Radiology.

“A radiologist-augmented approach has been conspicuously lacking in major machine learning research in literature,” wrote author Adarsh Ghosh, MD, of the Department of Radiodiagnosis and Imaging at the All India Institute of Medical Sciences in Delhi, India. “The algorithms that have been evaluated usually work independent of the radiologist’s opinion—with an objective to replace the radiologist. Although this may help decrease the workload of radiologists and improve patient workflow, a question that remains unattended is whether we can better ourselves.”

The authors used Breast Imaging Reporting and Data System (BI-RADS) data from the University of California, Irvine Machine Learning Repository (data set 1) and the Digital Database for Screening Mammography repository (data set 2). Two sets of models were trained: M1 and M2. M1 used lesion shape, margin, density, and patient age from data set 1 and image texture parameters from data set 2. M2 used the same image parameters as M1, but also included the BI-RADS classification provided by radiologists.
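The distinction between the two model families comes down to their input features. A minimal sketch (not the authors' code; the field names and values are illustrative assumptions) of how M2's inputs extend M1's:

```python
# Illustrative sketch of the two feature sets described in the study.
# Field names and example values are assumptions for illustration only.

M1_FEATURES = ["shape", "margin", "density", "age"]      # BI-RADS descriptors + age
M2_FEATURES = M1_FEATURES + ["birads_category"]          # adds the radiologist's BI-RADS category

def feature_vector(case: dict, features: list) -> list:
    """Extract a model's input vector from one case record."""
    return [case[f] for f in features]

# Hypothetical case record
case = {"shape": 2, "margin": 3, "density": 1, "age": 54, "birads_category": 4}
print(feature_vector(case, M1_FEATURES))  # [2, 3, 1, 54]
print(feature_vector(case, M2_FEATURES))  # [2, 3, 1, 54, 4]
```

The only difference between the two pipelines is the appended radiologist-assigned category, which is what makes M2 a "radiologist-augmented" model.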

Overall, the models that used the radiologists’ BI-RADS classification (M2) outperformed the models that did not (M1): the area under the curve (AUC) of the M1 models ranged from 0.85 to 0.88, while the AUC of the M2 models ranged from 0.90 to 0.92.
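AUC, the metric used to compare M1 and M2, is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal pure-Python sketch (the labels and scores below are made-up illustrative values, not study data):

```python
# Rank-based (Mann-Whitney) AUC computation, with ties counting half.

def auc(labels, scores):
    """Probability a random positive scores above a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 1, 0, 0]                # 1 = malignant, 0 = benign (hypothetical)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]    # hypothetical model outputs
print(round(auc(labels, scores), 2))       # 0.89
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, so the gap between the reported ranges for M1 and M2 represents a meaningful improvement in discriminating lesions.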

“The models using only the BI-RADS descriptors of lesion margin, density, and shape along with the patients’ age were less accurate than models using both the BI-RADS category along with the BI-RADS descriptors,” Ghosh wrote. “Though the parameters used in these models are very simplistic and the data size is small, the results have successfully demonstrated that a radiologist-augmented workflow is feasible in AI, allowing better management of patients and disease classification.”