AI May Not Be Better Than Experts at Reading Medical Scans

FRIDAY, March 27, 2020 (HealthDay News) -- A new study casts doubt on claims that artificial intelligence (AI) equals or surpasses the ability of human experts to interpret medical images.

Many previous studies were of poor quality and may have exaggerated the benefits of AI, which could pose a risk to the safety of millions of patients, the study authors said.

The investigators reviewed two randomized clinical trials and 81 non-randomized studies published over the past decade, each comparing the performance of a deep learning algorithm in interpreting medical images with that of expert clinicians.

Deep learning is a branch of AI that has shown particular promise in medical imaging, the study authors noted.

Of the non-randomized studies, only nine were prospective (they tracked and collected information about patients over time), and just six involved actual ("real-world") clinical settings, the researchers said.

More than two-thirds (58 of 81) of non-randomized studies had a high risk of bias due to study design problems, and adherence to recognized reporting standards was often poor.

Three-quarters of the studies stated that the performance of AI was comparable to, or better than, that of expert clinicians, yet only 31 (38%) said further prospective studies or trials were needed, according to the review published March 25 in the BMJ.

The findings suggest that "many arguably exaggerated claims exist about equivalence with (or superiority over) clinicians, which presents a potential risk for patient safety and population health at the societal level," according to study author Myura Nagendran, an academic clinical fellow at Imperial College London, and colleagues.

Overly positive language "leaves [these] studies susceptible to being misinterpreted by the media and the public," the researchers warned in a journal news release.

"Maximizing patient safety will be best served by ensuring that we develop a high quality and transparently reported evidence base moving forward," the team concluded.