Deep Learning Tech Could Be Contributing to Medical Misdiagnosis

David Paul

13 May 2020, 1:55pm

Tools using deep learning algorithms designed to enhance medical image reconstruction could be introducing errors and producing flawed results.

Deep learning and artificial intelligence (AI) technology used in image reconstruction could be contributing to the misdiagnosis of patients, according to a paper produced by the University of Cambridge and Simon Fraser University.

The researchers carried out a series of tests on medical image reconstruction algorithms based on AI and deep learning, and found that these techniques produce a range of errors in the final images.

The errors were visible across different types of artificial neural networks, suggesting that the problem would not be easy to fix.

AI has the potential to improve the quality of MRI scans and other types of medical imaging, helping to solve the problem of obtaining the highest-quality image in the shortest amount of time. However, the new research raises questions about relying on AI-based image reconstruction techniques to make diagnoses, which could ultimately harm patients.

Research lead Dr Anders Hansen, from Cambridge’s Department of Applied Mathematics and Theoretical Physics, commented: “There’s been a lot of enthusiasm about AI in medical imaging, and it may well have the potential to revolutionise modern medicine; however, there are potential pitfalls that must not be ignored.

“We’ve found that AI techniques are highly unstable in medical imaging, so that small changes in the input may result in big changes in the output.”

AI algorithms can ‘learn’ to reconstruct images based on training from previous data, and through this training, the aim is to optimise the quality of the reconstruction.
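As a toy illustration of this idea (not the paper's actual setup), the sketch below "learns" a reconstruction map from training pairs. A hypothetical forward model halves the signal during measurement, and plain gradient descent learns the inverse gain that best reconstructs the training data:

```python
# Minimal sketch of learning a reconstruction map from training data.
# Hypothetical forward model: the measurement is half the true signal.
train_x = [1.0, 2.0, 3.0, 4.0]            # ground-truth signals
train_y = [0.5 * x for x in train_x]      # simulated measurements

w = 0.0                                   # single reconstruction parameter
lr = 0.05                                 # learning rate
for _ in range(500):
    # gradient of the mean squared reconstruction error (w*y - x)^2 wrt w
    grad = sum(2 * (w * y - x) * y for x, y in zip(train_x, train_y)) / len(train_x)
    w -= lr * grad

print(w)  # converges to roughly 2.0, the inverse of the forward model
```

Training drives the parameter toward the inverse of the forward model, which is exactly the sense in which the reconstruction quality is "optimised" on the training data.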

However, the data indicates that while AI can accurately reconstruct what an image contains, even a minor change to that image can produce a flawed result, such as a tumour being blurred or removed altogether.
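The kind of instability described here is familiar from classical inverse problems: inverting a nearly singular forward model amplifies tiny perturbations in the data. The toy example below (an illustrative sketch, not the paper's experiment) reconstructs a two-sample "signal" by inverting an ill-conditioned 2x2 blur matrix, and shows a tiny measurement change producing an output change thousands of times larger:

```python
# Illustrative sketch: reconstruction as an ill-conditioned inverse problem.
def solve_2x2(a, b, c, d, y1, y2):
    """Solve [[a, b], [c, d]] @ x = [y1, y2] by Cramer's rule."""
    det = a * d - b * c
    return ((d * y1 - b * y2) / det, (a * y2 - c * y1) / det)

# Forward model: a nearly singular "blur" matrix (determinant 1e-4).
a, b, c, d = 1.0, 1.0, 1.0, 1.0001

x_true = (1.0, 2.0)                        # ground-truth signal
y = (a * x_true[0] + b * x_true[1],        # clean measurement
     c * x_true[0] + d * x_true[1])

x_clean = solve_2x2(a, b, c, d, *y)        # near-perfect reconstruction

input_change = 1e-4                        # tiny perturbation of the data
y_noisy = (y[0] + input_change, y[1])
x_noisy = solve_2x2(a, b, c, d, *y_noisy)  # wildly different reconstruction

output_change = max(abs(x_noisy[0] - x_clean[0]),
                    abs(x_noisy[1] - x_clean[1]))
print(input_change, output_change)
```

Here the output change is roughly 10,000 times the input change, a simple analogue of "small changes in the input may result in big changes in the output."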

“We developed the test to verify our thesis that deep learning techniques would be universally unstable in medical imaging,” Hansen explained.

“The reasoning for our prediction was that there is a limit to how good a reconstruction can be given restricted scan time. In some sense, modern AI techniques break this barrier, and as a result become unstable. We’ve shown mathematically that there is a price to pay for these instabilities, or to put it simply: there is still no such thing as a free lunch,” he added.

The researchers say that they are now focusing on “providing the fundamental limits to what can be done” with artificial intelligence in this field.

Ben Adcock, co-author of the paper and an associate professor in the Department of Mathematics at Simon Fraser University in Canada, told The Register: “There is a tremendous level of activity right now on developing deep learning algorithms for medical image reconstruction.

“But these algorithms are poorly understood mathematically – in particular, we have no guarantees on whether or not they are robust. Hence, it’s vital to have procedures that can detect potential instabilities, so that unstable algorithms do not percolate into clinical applications.”