Deep Learning Defined: How Is It Different from Machine Learning?

Deep learning, which falls under the umbrella of artificial intelligence and is an advancement of machine learning, is capable of remarkable things.

Like machine learning, deep learning relies on a steady flow of data that a neural network analyzes in order to make decisions about new data. Deep learning, however, is significantly more sophisticated.

Machine learning uses principles of artificial intelligence and merely mimics human thought; it is already being applied to real-world problems with promising results.

Deep learning, on the other hand, is a newer field of research that applies an even smaller set of principles within a more complex neural network, yielding results that can blur the line between artificial and human thought.

But what really sets deep learning apart from machine learning is the network’s ability to analyze new data and continuously learn. For example, if a deep learning system analyzing images fails to identify a characteristic it has been asked to find, it can analyze where it went wrong. If a key differentiator explains why the system reached the wrong conclusion, the system will learn to recognize that differentiator and incorporate it into its decision-making from then on. In other words, it can learn from its mistakes, much like a human being.
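In practice, this "learning from mistakes" is implemented with a loss function and gradient descent: the network measures how wrong each prediction was and nudges its weights in the direction that reduces that error. The following is a minimal sketch of the idea using a single artificial neuron on made-up data; the data, dimensions, and learning rate are illustrative assumptions, not any particular system described above.

```python
import numpy as np

# Toy illustration of "learning from mistakes": a single neuron
# repeatedly measures its prediction error and adjusts its weights
# to reduce it (gradient descent).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features (made up)
true_w = np.array([1.5, -2.0, 0.5])      # hidden rule the labels follow
y = (X @ true_w > 0).astype(float)       # binary labels

w = np.zeros(3)                          # the model starts knowing nothing
lr = 0.5                                 # learning rate (assumed)

def predict(X, w):
    """Sigmoid probability that each sample belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(X @ w)))

for step in range(200):
    p = predict(X, w)
    error = p - y                        # how wrong each prediction was
    grad = X.T @ error / len(y)          # direction that increases the error
    w -= lr * grad                       # step the other way: learn from mistakes

accuracy = np.mean((predict(X, w) > 0.5) == y)
```

After a few hundred updates the learned weights track the hidden rule and the predictions become accurate, even though no one programmed the rule in directly.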

Deep Learning Successes

Last year, a team from Stanford University introduced a deep-learning algorithm that diagnosed “potentially cancerous skin lesions as accurately as a board-certified dermatologist,” according to an article on the MIT Technology Review website.

Researchers from not-for-profit healthcare system Sutter Health and the Georgia Institute of Technology teamed up to create a deep learning method that can predict heart failure as much as nine months sooner than doctors can.

Deep learning structures algorithms into an artificial neural network that mimics how the human brain works. The network “is designed to continually analyze data with a logic structure similar to how a human would draw conclusions,” according to an article from Zendesk.
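The layered structure that article describes can be sketched as a stack of transformations: each layer reshapes its input and passes the result to the next, with the final layer producing a decision. Below is a minimal forward pass through a two-layer network in NumPy; the layer sizes and random weights are purely illustrative, not a real trained model.

```python
import numpy as np

# Minimal sketch of a feed-forward neural network: each layer applies
# a linear transformation plus a nonlinearity and hands its output to
# the next layer, loosely analogous to successive stages of reasoning.
rng = np.random.default_rng(1)

def relu(x):
    # Nonlinearity: keep positive signals, zero out the rest.
    return np.maximum(x, 0.0)

x = rng.normal(size=(4,))                        # a 4-feature input (assumed)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer: 4 -> 8 units
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)    # output layer: 8 -> 2 classes

hidden = relu(W1 @ x + b1)                       # layer 1: extract features
logits = W2 @ hidden + b2                        # layer 2: combine into a score
probs = np.exp(logits) / np.exp(logits).sum()    # softmax: scores -> probabilities
```

Deep networks simply stack many such layers; training (not shown here) adjusts the weight matrices `W1` and `W2` so the output probabilities match the labeled data.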

Big data and powerful graphics processing units (GPUs) are used to feed information to the deep learning network. The Stanford project involved almost 130,000 images of skin conditions being integrated with the team’s software.

In fact, healthcare is particularly well suited to benefit from deep learning since medical images such as X-rays and MRIs “are a nearly perfect match for the strengths of deep-learning software,” according to the MIT article.

Not Everyone Is Excited

What’s fascinating about deep learning is also what troubles some observers about it: Deep learning algorithms learn patterns, rules and parameters on their own, rather than having them fed into the system by a programmer. However, once the algorithm begins making decisions – for example, which of the images in the Stanford project were of potentially cancerous lesions – it leaves no map of how its decisions were made.

Some don’t see this as a problem. Healthcare legal scholar Nicholas Price points out that the precise mechanisms of some frequently prescribed drugs, such as lithium, aren’t fully understood either.

Researchers in Europe, though, are bracing for the possibility of restrictions, regulations and perhaps even legal battles. The European Union’s General Data Protection Regulation, which goes into effect this year, allows users to demand and receive an explanation of how decisions affecting them are made by automated or artificial intelligence systems.

The new regulations could have a chilling effect on research in EU countries, with some critics claiming GDPR will make deep learning “illegal.”

In the U.S., though, deep learning’s potential to transform healthcare is greeted with far more enthusiasm than trepidation.

Dr. Christopher Bouton, CEO of Vyasa Analytics, expresses that enthusiasm when he talks about expanding machine learning’s ability to recognize objects into a deep-learning ability to recognize concepts.

“It’s the same idea, completely analogous,” Bouton said at a symposium last year. “But the application of it means that we can turn the way we handle data completely on its head.”