Approaches of Deep Learning : Part 3


In our previous articles, Approaches of Deep Learning : Part 1 and Approaches of Deep Learning : Part 2, we explained the essential basics of artificial intelligence, machine learning, and deep learning.
Those fundamentals are important for understanding the rest of this topic. In this Approaches of Deep Learning : Part 3, we will focus on benefits and use cases, artificial neural networks, and an overview of deep learning use cases.

Approaches of Deep Learning : Current Approaches

The goal of supervised learning is to learn from classified examples. Inputs x1, x2, x3, …, xn as well as the expected outputs y1, y2, y3, …, yn from a subject area are made available to the learner, and associations are to be established between them. It is assumed that the objects come from a common environment (universe), such as plants, animals, or farms. After the learner has seen a number of classified examples, it should be able to independently formulate hypotheses about newly submitted examples and approach the goal concept as closely as possible. Supervised learning is therefore well suited to improving handwriting recognition: while machine-made block letters only require the font and the uppercase and lowercase letters to be learned, human handwriting follows the same principles but is highly individual in design.

For example, after seeing a series of classified example images such as “dog,” “cat,” and “mouse,” the learner should be able to classify unclassified examples (universe = animals) as accurately as possible. Training proceeds in rounds: all training patterns are processed one after the other, and after each pattern the weighting is adjusted according to the selected learning method. Once all training patterns have been run, the algorithm checks whether this round produced the best error rate so far. If so, the current weighting is saved as the best set; if not, the round counter is simply increased by one. These steps are repeated until one hundred rounds are reached.
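As a rough sketch, the round-based procedure above might look like the following Python loop. The `train_step` function here is a hypothetical stand-in that adjusts the weights and reports an error rate; it is not an actual learning method from the text:

```python
import random

def train_step(weights, patterns):
    # Hypothetical training pass: nudge the weights for each pattern
    # and return an error rate over the whole series (stand-in logic).
    weights = [w + random.uniform(-0.01, 0.01) for w in weights]
    error_rate = random.random()
    return weights, error_rate

def train(patterns, n_weights=4, rounds=100):
    random.seed(0)  # reproducible illustration
    weights = [random.uniform(-1, 1) for _ in range(n_weights)]
    best_error, best_weights = float("inf"), list(weights)
    for _ in range(rounds):            # stop after one hundred rounds
        weights, error = train_step(weights, patterns)
        if error < best_error:         # best error rate so far?
            best_error = error         # save this weighting as the best set
            best_weights = list(weights)
    return best_weights, best_error
```

The key point is only the bookkeeping: after every full pass over the patterns, the weighting is kept only if it beat the best error rate seen so far.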

Recurrent neural network

Recurrent networks (feedback neural networks) are characterized by feedback connections from neurons of one layer to neurons of the same or a previous layer. The special properties of recurrent neural networks only emerge with a large number of neurons. A neural network with feedback is applied, for example, when a prognosis for the future is to be determined.

Direct feedback

In direct feedback, connections exist from the output to the input of the same neuron; that is, the output of the unit becomes an input of the same unit. These connections often cause neurons to reach the limit states of their activations, because they can reinforce or inhibit themselves.

Indirect feedback

In indirect feedback, the output of a neuron is returned as the input of a neuron in a previous layer of the neural network. This type of feedback is necessary when the network is to direct attention to particular areas of input neurons or to certain input features. Networks with feedback within the same layer are often used for tasks in which only one neuron in a group of neurons should become active. Each neuron then receives inhibitory connections to the other neurons and often a direct feedback connection from itself; the strongest neuron then inhibits the others.
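The winner-take-all behavior described above can be sketched as follows; the `inhibition` value is an illustrative assumption, not a figure from the text:

```python
def winner_take_all(activations, inhibition=0.5):
    """Lateral-inhibition sketch: the strongest neuron keeps its
    activation; every other neuron is pushed toward zero."""
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [a if i == winner else max(0.0, a - inhibition)
            for i, a in enumerate(activations)]
```

For example, `winner_take_all([0.2, 0.9, 0.4])` leaves only the neuron at index 1 active.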

Lateral feedback

In lateral feedback, the information of a unit is fed back to neurons located in the same layer. An example of such lateral feedback is the horizontal cells in the human eye.

Complete connection

In fully networked layers (Hopfield networks), each neuron is connected to every other neuron in the neural network, except itself.

Attractor Networks

Attractor networks are, among other things, a field of application for recurrent networks. An attractor network receives an input and operates in cycles until a stable output or state has emerged. The network possesses a certain number of these stable states (attractors). Over several cycles, the output moves in the direction of one of them; which attractor is reached at the “end” depends on whose “catchment area” the input falls into.
The farther the input is from a “catchment area,” the more cycles it takes to reach an attractor. If the input falls exactly on the boundary between two “catchment areas” and no unique assignment can be made, the assignment can be resolved, for example, by a random choice.
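A minimal sketch of such attractor dynamics, assuming a Hopfield-style network with a single stored pattern (the Hebbian weight construction is a standard choice, not a detail given in the text):

```python
def sign(x):
    return 1 if x >= 0 else -1

def hopfield_step(state, weights):
    # One synchronous update cycle of a Hopfield-style attractor network.
    return [sign(sum(w * s for w, s in zip(row, state))) for row in weights]

def run_to_attractor(state, weights, max_cycles=50):
    """Cycle until the state stops changing, i.e. a stable attractor."""
    for _ in range(max_cycles):
        nxt = hopfield_step(state, weights)
        if nxt == state:
            return state
        state = nxt
    return state

# Store the pattern [1, -1, 1] via a Hebbian outer product (zero diagonal).
pattern = [1, -1, 1]
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(3)]
     for i in range(3)]
```

An input such as `[1, 1, 1]` lies in the catchment area of the stored pattern and settles onto it after a cycle.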

The Convolutional Neural Network (CNN) is a deep learning approach used especially for image recognition and speech analysis. It is a multi-layered neural network; each layer contains independent neurons, and each neuron of a layer receives signals from the previous layer.

Layers

A convolutional network consists of two alternating types of layers. The CNN consists of nodes arranged in layers; because these layers are repeated several times, such networks are called “deep convolutional neural networks” (DCNN). The layers are connected by weighted connections, or edges, so each connection has a source and a destination node. Each trainable layer (a hidden layer or an output layer) has at least one connection bundle.


A CNN must meet the following requirements:

There must be exactly one output layer, at least one input layer, and zero or more hidden layers.

Each layer has a fixed number of nodes that are conceptually arranged in a rectangular array of arbitrary dimensions.

Input layers are not assigned any trained parameters; they represent the entry point of instance data into the network.

Connections must be acyclic; in other words, they must not form a chain of connections leading back to the original source node.
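The core operation of a convolutional layer can be sketched as a small valid convolution in plain Python; the image and kernel values below are made-up illustrative data:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN
    libraries): slide the kernel over the image and sum the products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            row.append(sum(image[y + j][x + i] * kernel[j][i]
                           for j in range(kh) for i in range(kw)))
        out.append(row)
    return out

# A 3x3 vertical-edge kernel over a 4x4 image yields a 2x2 feature map.
image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
```

Each node of the resulting feature map corresponds to one position of the kernel, which is exactly the weighted connection bundle mentioned above.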

Backpropagation

Backpropagation, also called error feedback, is a method for training artificial neural networks. The input values for the neurons are already known. After the input values have passed through the neural network, the actual output is compared with the desired output; the difference between the two is regarded as the delta error. The error values are passed back from the output to the input and the weights are recalculated, so that the actual output approaches the target output. This is repeated until the actual output corresponds to the desired output or only a very small delta remains.

Forward Propagation

In forward propagation, all weights of an artificial neuron are initially assigned at random, with values between -1 and +1. The input values used in this example are binary numbers, 0 or 1. The inputs are first combined by the sum function, and the output is then produced by the activation function, in this case the sigmoid function. In this process, the input and output values are propagated to the next neuron.
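A minimal sketch of this forward pass for a single neuron, assuming the sigmoid activation named above and random weights in [-1, +1]:

```python
import math
import random

def sigmoid(x):
    # The activation function named in the text.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights):
    """Forward propagation for one neuron: weighted sum, then sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(total)

random.seed(1)  # reproducible illustration
weights = [random.uniform(-1, 1) for _ in range(3)]  # random weights in [-1, +1]
output = forward([1, 0, 1], weights)                 # binary input values
```

The resulting `output` always lies between 0 and 1 and would be passed on as an input to the next neuron.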

Backward Propagation

In backward propagation, the actual learning process begins. The actual output of the forward propagation is compared with the target output. Weights that made the actual output too large are slightly reduced, and weights that made it too small are slightly increased.
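This adjustment can be sketched as a gradient step for a single sigmoid neuron; the learning rate and the use of the sigmoid derivative are standard assumptions, not details given in the text:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backward_step(inputs, weights, target, lr=0.5):
    """One backward-propagation step for a single sigmoid neuron:
    compare the actual output with the target and nudge each weight."""
    actual = sigmoid(sum(i * w for i, w in zip(inputs, weights)))
    delta = (target - actual) * actual * (1 - actual)  # error * sigmoid'
    # Weights are increased when the output was too small and
    # decreased when it was too large, proportional to the input.
    new_weights = [w + lr * delta * i for w, i in zip(weights, inputs)]
    return new_weights, actual
```

Calling `backward_step` repeatedly shrinks the delta error, which is exactly the loop described under backpropagation above.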

Current Approaches : Unsupervised Learning

In contrast to supervised learning, unsupervised learning does not provide already classified outputs (y1, y2, y3, …, yn). Instead, it is up to the algorithm itself to identify patterns and classify them. All input data is searched for patterns and classifications, or the existing patterns and classifications are expanded. Example: in the mid-1980s, the US Army commissioned a system to analyze aerial photography. The aim was to determine whether or not military vehicles could be seen in a shot. The training data consisted of unclassified recordings made on a military training ground, once without vehicles and once with vehicles. In many tests the system worked flawlessly, but in one test it failed completely. The reason was that the shots without vehicles had been taken on a cloudy day and the other shots on a sunny day: the system had learned to distinguish between a cloudy day and a sunny day.

Deep Autoencoder

Autoencoder network

An autoencoder is a particular form of neural network that contains three or more layers. The input layer and output layer are standard; between these two lies the hidden part, which can consist of one or more layers. With multiple hidden layers, the network is called a deep autoencoder. The main task of an autoencoder is to reconstruct compressed images or to denoise them, for example restoring a pixelated image in which the content is not immediately recognizable.

The autoencoder consists of two parts, the encoder and the decoder. The encoder is active during both the training phase and the subsequent application phase; the decoder is only active during the training phase. For compression, there is a bottleneck in the neural network that compresses the input values enormously. The output layer always orients itself toward the input layer. For deeper networks, the process just mentioned is not quite so trivial: each hidden layer is trained until its values match those of the input layer, then passes its values to the next hidden layer, which in turn learns until its values match the previous layer.
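The bottleneck idea can be sketched with untrained, randomly initialized weights; this only illustrates the shapes involved, since in practice encoder and decoder are trained so that the reconstruction approaches the input:

```python
import random

def matvec(matrix, vector):
    # Plain matrix-vector product over nested lists.
    return [sum(w * v for w, v in zip(row, vector)) for row in matrix]

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

random.seed(2)  # reproducible illustration

# Bottleneck: 8 inputs -> 3 hidden units -> 8 outputs.
encoder = rand_matrix(3, 8)   # compresses the input enormously
decoder = rand_matrix(8, 3)   # reconstructs toward the input layer

x = [random.random() for _ in range(8)]
code = matvec(encoder, x)               # compressed representation (3 values)
reconstruction = matvec(decoder, code)  # same size as the input (8 values)
```

The encoder output `code` is what remains of the input after the bottleneck; the decoder maps it back to the original dimensionality.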

Survey Propagation

The survey propagation algorithm is based on ideas from statistical physics.
It performs a stochastic message-exchange process: edges send “warnings” to nodes indicating which colors the nodes should or should not select, so that the two endpoints of each edge receive different colors. The algorithm attempts to compute the probability distribution of the warnings, then fixes the colors of those nodes that receive particularly “clear” warnings, and repeats until all nodes are colored. Although this approach is remarkably successful and efficient in experiments, an exact mathematical analysis has so far been elusive.

Deep Belief Network

The Deep Belief Network (DBN) consists of several layers and belongs to the family of artificial neural networks. Each layer consists of a Restricted Boltzmann Machine, whose goal is to reproduce its input vector with the highest possible probability. The DBN is used to pre-train a Deep Neural Network (DNN): what the DBN has learned serves as initial knowledge in the DNN, which discriminative algorithms such as backpropagation then refine. This refinement is particularly important when little data is available.

BayesNP

If a machine is to learn from experience, it can use conditional probabilities to better predict future actions.
A Bayesian network consists of nodes for event variables whose connections (edges) are weighted by conditional probabilities. From this, the probabilities of events under the condition of other events can be calculated. Bayesian networks thus allow effective prediction and decision models.
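At its core, such a conditional-probability calculation reduces to Bayes' rule; the weather numbers below are purely illustrative:

```python
def bayes(prior, likelihood, evidence):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / evidence

# Hypothetical numbers: P(rain) = 0.3, P(clouds|rain) = 0.9, P(clouds) = 0.45.
p_rain_given_clouds = bayes(0.3, 0.9, 0.45)  # P(rain|clouds) = 0.6
```

A Bayesian network chains many such local calculations along its edges to reason about whole sets of events.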

Hierarchical temporal memory

Hierarchical Temporal Memory (HTM) is a machine learning technology that seeks to capture the structural and algorithmic properties of the neocortex. The neocortex is the part of the mammalian brain responsible for hearing, seeing, touch, movement, speech, and planning. Not all of these capabilities can yet be implemented by neural algorithms. The neocortex consists of a remarkably uniform pattern of neural interconnections, and the biological evidence suggests that it implements a common set of algorithms to perform many different cognitive functions. HTM provides the theoretical framework for understanding the neocortex and its many abilities. Programming HTMs works differently than programming normal computers: HTMs are trained by means of sensors, and their capabilities are determined by the system with which they are in contact. As the name suggests, HTMs are essentially memory-based systems. HTM networks are trained with large amounts of time-varying data and rely on storing a large number of patterns and sequences. Never before have machine learning and artificial intelligence technologies had such a rapid impact on the economy. It is very impressive.