Inside the heart of deep learning in healthcare

Nowadays, doctors face a serious problem: complex data. As the quality and quantity of this data increase day by day, the memory and storage required to hold patients' details grow as well; healthcare data was estimated to grow more than 50-fold this decade, to 25,000 petabytes worldwide by 2020. Medical professionals and data scientists are therefore trying to improve patient outcomes by using this data to its maximum potential, and this is where deep learning in healthcare comes into the picture.

What is deep learning?

It is a technology inspired by the workings of the human brain. Networks of artificial neurons analyze large datasets to discover underlying patterns automatically, without human intervention. Deep learning systems examine millions of images to learn to identify disease on their own. Unlike conventional computer-aided diagnostics, deep learning networks can screen for many diseases at once.
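At its core, this pattern discovery is a network adjusting numeric weights until its predictions match labeled examples. The toy sketch below, in plain Python, trains a single artificial neuron to separate "abnormal" from "normal" inputs by gradient descent; the two-number features and labels are invented for illustration and bear no relation to real medical data.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: each "image" is reduced to two numeric features
# (say, mean brightness and edge density); label 1 = abnormal.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5

# Gradient descent on the logistic loss: the neuron adjusts its
# weights so its predictions match the labels, with no
# hand-written rules about what "abnormal" looks like.
for _ in range(1000):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(round(predict([0.85, 0.85]), 2))  # close to 1 (abnormal)
print(round(predict([0.15, 0.15]), 2))  # close to 0 (normal)
```

Real systems stack millions of such neurons into deep networks, but the learning loop — predict, measure error, nudge weights — is the same idea.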

What are its applications and goals in medicine?

Deep learning is used to develop state-of-the-art clinical decision support products that distill actionable insights from billions of clinical cases. The purpose in each instance is to arrive at an optimal treatment decision based on many forms of clinical information.

Deep learning in healthcare has a broad range of applications including medical imaging and diagnostics:

Tumor detection

Tracking tumor development

Blood flow quantification and visualization

Medical interpretation

Diabetic retinopathy

Deep learning improves on the traditional machine learning used in early healthcare applications by extracting and representing features from images more efficiently. Many companies are moving toward this technology.

IBM WATSON & GOOGLE

IBM introduced one of the most promising near-term applications of automated image processing: detecting melanoma, a dangerous class of skin cancer that is highly curable if diagnosed early and treated appropriately. To identify the tumor, the DL algorithm learns the essential features of the disease from a collection of medical images and then makes detections based on that learning. John Smith worked as the senior manager for this intelligent information system at IBM Research.

One thing deep learning algorithms need is a lot of data, and the current influx of data is one of the main reasons machine and deep learning have come back into the picture over the last half decade. The scarcity of medical image data in the wider field is one hurdle that still needs to be overcome. IBM was conscious of this issue when it acquired Merge Healthcare, a company that helps hospitals store and examine medical images, for $1 billion in 2015. IBM has announced plans to train Watson on Merge's collection of 30 billion images to help doctors with medical diagnosis.

IBM Watson is also being applied to fight different types of cancer with DL in the near future.

As part of the effort in the "war on cancer," Google DeepMind has partnered with the UK's National Health Service (NHS) to help doctors manage head and neck cancers more quickly with DL technologies. This research is being carried out in coordination with University College London Hospital. Google also uses deep learning to help pathologists detect cancer.

In 2011, IBM Watson triumphed against two of Jeopardy's greatest champions. In 2016, AlphaGo, a computer program developed by Google DeepMind to play the board game Go, won against Lee Se-dol, widely considered the strongest human Go player in the world.

While games function as major laboratories for testing DL technologies, IBM Watson and Google DeepMind have both carried such solutions over into the healthcare and medical imaging realms. It seems likely that as the technology develops further, many companies and startups will join the bigger players in using ML/DL to tackle medical imaging problems. Big vendors like GE Healthcare and Siemens have already made significant investments, and a recent analysis by Blackford shows 20+ startups are also building machine intelligence into medical imaging solutions.

While the potential benefits are considerable, so are the initial efforts and costs, which is a reason for large enterprises, hospitals, and research labs to come together in solving important medical imaging problems. IBM Watson, for instance, is partnering with more than 15 hospitals and companies using imaging technology to study how cognitive computing can perform in the real world, a service Watson Health is expected to launch in 2017.

GE has also announced a three-year partnership with UC San Francisco to develop a set of algorithms that help its radiologists differentiate between a normal result and one that needs further attention. This work is in addition to another GE partnership, with Boston Children's Hospital, to create smart imaging technology for identifying pediatric brain disorders.

There are, and will continue to be, discussions about the disruption of radiology and what it means for the future roles of medical practitioners; nevertheless, the potential benefits of applying deep learning in healthcare to detect and combat illnesses and cancer seem likely to outweigh the foreseeable costs.

Practical problems solved by deep learning in healthcare sector

This technology has made medical imaging much easier to use, helping doctors detect patients' problems.

Skin cancer, the most commonly diagnosed cancer, with over 5 million cases diagnosed each year in the US and an $8 billion annual cost to the US healthcare system, can now be detected more easily.

Researchers used the technology to create an algorithm capable of identifying relevant characteristics of lung tumors with a higher accuracy rate than radiologists.

Candidate areas with proliferative activity in extracted tissues, frequently found at the edges of a tissue abnormality, are identified. The DL algorithm generates tumor probability heat maps, in which overlapping tissue patches are scored for tumor probability. Such images provide informative data on tumor characteristics such as shape, density, area, and location, making it easier to track tumor changes.
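The heat-map idea can be sketched in a few lines: score every overlapping patch of a slide with a classifier, then average the scores each pixel received across all patches that cover it. The 6x6 "slide" and the mean-intensity patch scorer below are hypothetical stand-ins for a real whole-slide image and a trained network.

```python
def patch_score(patch):
    # Stand-in for a trained network: here, just the mean intensity.
    return sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))

def heat_map(slide, k=3):
    """Score every k x k patch and average overlapping scores per pixel."""
    h, w = len(slide), len(slide[0])
    total = [[0.0] * w for _ in range(h)]
    count = [[0] * w for _ in range(h)]
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patch = [row[j:j + k] for row in slide[i:i + k]]
            s = patch_score(patch)
            for di in range(k):
                for dj in range(k):
                    total[i + di][j + dj] += s
                    count[i + di][j + dj] += 1
    return [[total[i][j] / count[i][j] for j in range(w)] for i in range(h)]

# Toy slide: a bright (suspicious) region in the lower-right corner.
slide = [[0.1] * 6 for _ in range(6)]
for i in range(3, 6):
    for j in range(3, 6):
        slide[i][j] = 0.9

hm = heat_map(slide)
print(round(hm[5][5], 2), round(hm[0][0], 2))  # prints "0.9 0.1"
```

Because each pixel's value blends every patch covering it, the resulting map is smooth enough to read off the abnormality's shape, area, and location, which is what makes tracking changes between scans feasible.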

Researchers at the Fraunhofer Institute for Medical Image Computing (MEVIS) unveiled a new tool in 2013 that employs DL to show changes in tumor images, enabling physicians to adjust the course of cancer therapy. "The software can, for example, determine how the volume of a tumor changes over time and supports the detection of new tumors," said Mark Schenk of Fraunhofer MEVIS. The tool also has the potential to enable automatic progress monitoring.

Arterys' DL software has made it possible for cardiac evaluations on GE MR systems to be completed in a fraction of the time of routine cardiac MR scans.

Diabetic retinopathy (DR) is considered the most severe ocular complication of diabetes and is one of the fastest-growing and leading causes of blindness throughout the world, with about 415 million diabetic patients at risk worldwide. Reports from the US Census Bureau and the National Health Interview Survey project that the number of Americans aged 40 or older with DR will triple from 5.5 million in 2005 to 16 million in 2050. As with many debilitating illnesses, DR can be managed effectively if detected early. A study published in 2016 by a group of Google researchers in the Journal of the American Medical Association (JAMA) revealed that their DL algorithm, trained on a large fundus image dataset, was able to detect DR with more than 90 percent accuracy.
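For context on what a figure like "more than 90 percent accuracy" involves: screening models are usually reported with sensitivity (diseased eyes caught) and specificity (healthy eyes correctly cleared) alongside raw accuracy. The sketch below computes all three from a confusion matrix; the counts are invented for illustration and are not figures from the JAMA study.

```python
def screening_metrics(tp, fp, fn, tn):
    """Summarize a screening model from its confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of diseased eyes caught
        "specificity": tn / (tn + fp),   # fraction of healthy eyes cleared
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical screening run: 100 diseased eyes, 400 healthy eyes.
m = screening_metrics(tp=88, fp=30, fn=12, tn=370)
print({k: round(v, 3) for k, v in m.items()})
# prints {'sensitivity': 0.88, 'specificity': 0.925, 'accuracy': 0.916}
```

Note how the split matters: because healthy eyes dominate the sample, overall accuracy can look high even when a meaningful share of disease is missed, which is why screening studies report sensitivity separately.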

Future & market

It is clear that Google and IBM Watson have data-rich opportunities in healthcare through deep learning. One of the most promising future applications of DL would be combating most types of cancer. From recent Quora and Reddit threads, you'll find that people appear concerned about the possibility of radiology being disrupted by deep learning. Many experts, however, express optimism about the prospects for DL-based solutions in medical imaging.

Dr. Bradley Erickson of the Mayo Clinic in Rochester, Minnesota, expects that computers will do most diagnostic imaging in the next 15 to 20 years. But he believes that instead of taking radiologists' jobs, DL will expand their roles in predicting disease and guiding surgery.

"I'm concerned that some people may dig in their heels and say, 'I'm just not going to let this happen.' I would say that noncooperation is also counterproductive, and I hope that there's a lot of physician engagement in this revolution that's going on in deep learning so that we implement it most optimally," Erickson said. Dr. Nick Bryan, a retired professor of radiology at Penn Medicine, seems to agree with Erickson, predicting that within ten years no medical imaging exam will be reviewed by a radiologist until it has been pre-analyzed by a machine.

They also plan to extend their reach in the market by partnering with academic research institutions, healthcare providers, and pharmaceutical companies to develop their deep learning solutions.

The rich algorithms produced by deep learning offer higher accuracy and deeper insights for every patient. Who knows: in the future, every diagnosis may first be reviewed and analyzed by a machine.