The future of breast density
Tracy Accardi, vice president, global research and development, breast and skeletal health solutions at Hologic, on how artificial intelligence could be a game-changer for density interpretation

The future of AI in radiology

These waves of promise and excitement invariably make their way into medicine, but in the past they have been tempered by medicine's realities once evidence of real-world performance is sought. Now there is a palpable sense that this time around things are different, that we are on the cusp of a revolution rather than a mere incremental evolution of previous technologies. The reason, of course, is deep learning. Broadly speaking, deep learning is not a single technological breakthrough, but rather a collection of accumulated mathematical principles, data structures and optimization algorithms which, when applied to the right data, produce results on certain tasks that far outperform previous methods. While it has seen broad application across almost all data types, visual data is where it has had its greatest tangible successes. Radiology is, therefore, one of its most obvious applications.


One of the attractions of deep learning is that it typically requires less intensive data preparation. There is a perception that one can simply feed the neural network the raw pixels of, say, any chest X-ray. In practice, it is not quite this magical. Good data science and engineering practices are still paramount in building such systems. One such practice is ensuring the input data is of sufficient quality and quantity. Almost all practical applications of machine learning today are supervised, meaning accurate labels for your ultimate objective are required to train your models. Obtaining these labels is not only a laborious process, but an expensive one, given the human expertise involved.
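To make the supervised-learning point concrete, here is a minimal, purely illustrative sketch: a logistic regression trained by gradient descent on toy two-dimensional data standing in for image features. The data, labels, and all parameter values are hypothetical; the point is simply that the training loop is driven entirely by human-provided labels, exactly as with a deep network on chest X-rays.

```python
import numpy as np

# Toy labeled dataset: two clusters standing in for "pathology" vs "normal".
# Everything here is synthetic and purely illustrative.
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=2.0, size=(50, 2))   # hypothetical "pathology" cases
X_neg = rng.normal(loc=-2.0, size=(50, 2))  # hypothetical "normal" cases
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(50), np.zeros(50)])  # the human-provided labels

# Logistic regression via gradient descent: the simplest supervised
# learner. Deep networks follow the same label-driven optimization loop,
# just with many more parameters.
w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log loss
    b -= lr * np.mean(p - y)

accuracy = np.mean(((X @ w + b) > 0) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Without the label vector `y`, no such loss gradient exists, which is why label acquisition is the costly, rate-limiting step the text describes.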

We are only in the very early phases of applying deep learning to medical imaging, though the pace at which abstracts and papers on the topic are being published is rapidly picking up. We are seeing applications of all types, from classification of normal versus pathology to higher-level tasks such as localization, segmentation and quantification. Most current applications are relatively simple and restricted to single-task problems. An article published by Lakhani and Sundaram in Radiology earlier this year demonstrated a 96 percent accuracy rate in classifying tuberculosis on 150 plain chest X-rays in a holdout test set. The authors took off-the-shelf neural networks developed for general image recognition, trained them on this new task and obtained excellent results. One can imagine hundreds of such algorithms being trained today in this straightforward manner. This is before we even think about building up the complexity with higher-order reasoning, multi-modal models such as images plus text or images plus genomics, or composition of neural networks in a modular fashion. There are so many potential applications we can already create using simple off-the-shelf neural networks, so what are the bottlenecks?
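The off-the-shelf approach described above is a form of transfer learning: reuse a network's learned feature extractor and train only a new task-specific head. The sketch below is a conceptual stand-in, not the Lakhani and Sundaram pipeline; a frozen random projection plays the role of a pretrained network's convolutional layers, and all data, dimensions and the injected class signal are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def feature_extractor(images, W_frozen):
    """Frozen 'pretrained' layer: project flattened pixels to features.
    In real transfer learning this would be an ImageNet-trained backbone;
    here a fixed random projection with ReLU stands in for it."""
    return np.maximum(images @ W_frozen, 0.0)

# Toy 8x8 "X-rays" as flattened 64-pixel vectors; class 1 images are
# uniformly brighter -- a purely illustrative, learnable signal.
n = 200
images = rng.normal(size=(n, 64))
labels = (rng.random(n) > 0.5).astype(float)
images[labels == 1] += 1.0

W_frozen = rng.normal(size=(64, 16)) / 8.0  # NOT updated during training
feats = feature_extractor(images, W_frozen)

# Train only the new classification head (logistic regression on the
# frozen features) -- the cheap step that repurposes an existing network.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= 0.1 * (feats.T @ (p - labels)) / n
    b -= 0.1 * np.mean(p - labels)

acc = np.mean(((feats @ w + b) > 0) == labels)
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the small head is trained, this recipe needs far less labeled data and compute than training a network from scratch, which is why "hundreds of such algorithms" are plausible today.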