Extreme Heterogeneity in Deep Learning Architectures

With Jeff Anderson, Armin Mehrabian, Jiaxin Peng, Tarek El-Ghazawi

The success of voice-activated electronics can be attributed to advances in Machine Learning (ML), and more specifically to the development of Convolutional Neural Networks (CNNs) and Deep Learning. This chapter reviews Deep Neural Networks and recent advances in ML, and summarizes hardware architectures that are likely to be useful for NN implementations in embedded systems. Different types of NNs, referred to as NN models, such as CNNs, Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) networks, have proven effective for specific classification tasks, and each has its own set of operations with diverse computational and communication requirements. RNNs and LSTMs, while similar to CNNs from the standpoint of network architecture, introduce a time dependency: a neuron's output is stored and then fed back into the neuron during subsequent calculations. Pooling and normalization are among the most common operations found in CNNs.
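The recurrent feedback and the pooling operation mentioned above can be sketched in a few lines of plain Python. This is a hypothetical minimal illustration, not any particular framework's implementation: `rnn_step` models a single recurrent neuron whose previous output is fed back in, and `max_pool_1d` models a simple one-dimensional max-pooling pass.

```python
import math

def rnn_step(x, h_prev, w_x, w_h, b):
    # One recurrent neuron: the previous output h_prev is stored and
    # fed back into the computation at the next time step.
    return math.tanh(w_x * x + w_h * h_prev + b)

def max_pool_1d(values, window=2):
    # Non-overlapping max pooling, a common down-sampling step in CNNs.
    return [max(values[i:i + window]) for i in range(0, len(values), window)]

# The hidden state h carries the time dependency across the sequence.
h = 0.0
for x in [1.0, 0.5, -0.25]:
    h = rnn_step(x, h, w_x=0.8, w_h=0.5, b=0.0)

pooled = max_pool_1d([1, 3, 2, 5])  # -> [3, 5]
```

The feedback term `w_h * h_prev` is what distinguishes the recurrent cell from a feed-forward neuron: the same weights are reused at every time step, but the state accumulates information from earlier inputs.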