I recently gave a talk titled “Introduction to Deep Neural Networks.” The goal was to give an audience of engineers the information they need to understand what types of problems can be solved using a DNN, and what tools and libraries they can use to implement one.

The term deep neural network has more than one meaning. In its narrowest sense, a DNN is the same as a simple neural network except that it has two or more hidden layers. More broadly, DNN can refer to any of several more exotic forms of neural networks that also have multiple hidden layers.
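To make the narrow definition concrete, here is a minimal sketch of the feed-forward pass of such a network. The layer sizes (4 inputs, two hidden layers of 5 nodes, 3 outputs) and the tanh/softmax activations are illustrative choices of mine, not something from the talk; the point is only that the single structural difference from a simple NN is the second hidden layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # small random weights and zero biases (untrained, for illustration)
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

W1, b1 = init_layer(4, 5)   # input -> hidden layer 1
W2, b2 = init_layer(5, 5)   # hidden 1 -> hidden 2 (this layer is what makes it "deep")
W3, b3 = init_layer(5, 3)   # hidden 2 -> output

def forward(x):
    h1 = np.tanh(x @ W1 + b1)    # first hidden layer
    h2 = np.tanh(h1 @ W2 + b2)   # second hidden layer
    z = h2 @ W3 + b3             # output logits
    e = np.exp(z - z.max())      # softmax for classification
    return e / e.sum()

probs = forward(np.array([1.0, 2.0, 3.0, 4.0]))
print(probs)   # three pseudo-probabilities that sum to 1
```

Training such a network (back-propagating through both hidden layers) is where the real difficulty lies, which is why in practice you reach for a library rather than coding from scratch.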

I described ordinary DNNs for classification and numeric prediction, convolutional NNs for image recognition, simple recurrent NNs (mostly of historical interest), and long short-term memory (LSTM) networks for natural language processing. I also speculated a bit about generative adversarial networks and quantum computing.

In terms of tools and libraries, I explained that there are many alternatives for non-deep NNs, but for DNNs the only Microsoft approach I was aware of was the CNTK library (well, other than coding from scratch, which is very difficult).

Moral of the story: Maybe eight years ago, knowledge of simple neural networks wasn’t needed in many developer situations. Now that knowledge is almost essential. And knowledge of DNNs is quickly becoming a critically important skill for many developer scenarios.