ANNs are an architectural structure – a network – composed of a plethora of interconnected units called artificial neurons. Each unit is characterised by inputs and outputs and performs a very simple local calculation. Every connection between two neurons carries a "weight" value, and the network's information, "knowledge" and effectiveness are stored in these weight values. Each neuron's output is determined by its type, by its connections to other neurons and possibly by some external inputs. Some networks may show a degree of effectiveness upon construction, but in general they can achieve their goals only after training (Haykin, 1999; Bishop, 1995; Ham et al., 2001; Perlovsky, 2001).

An Artificial Neural Network can have one or many layers of neurons. Networks with only one layer, called the output layer, are known as "Single Layer Networks" (the inputs are not considered a layer because no calculations are carried out within them). Other networks have one or more layers interposed between the inputs and the output layer; these are called Hidden Layers, and they give ANNs the ability to solve complex problems. Regarding the connectivity among the neurons, each one can only be connected to those on the previous and next layer (if these layers exist, of course), and the flow of information is almost always restricted to the direction from the inputs towards the outputs (feedback does exist, but it is accomplished in more advanced architectures). An ANN has full connectivity when all possible connections are in place, and it is called feedforward when there is no feedback between layers.

The overall performance of an ANN depends on the neurons' characteristics, the training method and (naturally) the data on which the training takes place. Finally, because artificial neurons do not operate serially and their numbers can be great, ANNs are a characteristic example of massively parallel computation. Their characteristics are summed up in the following table:

ANN characteristics

General                Example
Input size             30-point "window"
Number of layers       4 in total
Layer sizes            30×20×10×1
Weight connectivity    Full & feedforward
Output size            One
Training algorithm     Levenberg–Marquardt
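As a sketch of how such a network computes its output, here is a minimal forward pass for the fully connected feedforward architecture from the table (layer sizes 30×20×10×1). The tanh-sigmoid hidden activations and linear output are assumptions, and the weights are random rather than trained with Levenberg–Marquardt:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases, activation):
    """One fully connected layer: a_j = f(sum_i w_ji * p_i + b_j)."""
    return [activation(sum(w * p for w, p in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    """Random (untrained) weights and biases for one layer."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

sizes = [30, 20, 10, 1]                       # layer sizes from the table
params = [make_layer(a, b) for a, b in zip(sizes, sizes[1:])]

def forward(window):
    """Propagate a 30-point input window through the network."""
    a = window
    for i, (w, b) in enumerate(params):
        f = math.tanh if i < len(params) - 1 else (lambda x: x)  # linear output layer
        a = layer(a, w, b, f)
    return a[0]                               # single output value

out = forward([0.1] * 30)
```

With full connectivity, every unit of one layer feeds every unit of the next, and information moves strictly from inputs to output.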

Below are useful pictures regarding ERPs; in most cases these are samples of my research. More pictures can be found in the ERP section.

Fully connected ANN with a hidden layer

The Artificial Neuron

Neuron's Definition Functions

Activation Functions

Time Delay Neural Network (TDNN)

TDNN Step 1

TDNN Step 2

TDNN Step 3

TDNN Step 101

Crossvalidation technique

Representation of prediction method using ANNs

Training graph

Prediction with ANN - Control

Prediction with ANN - OCD

One can observe the neuron's inputs (p), the weights (w), the adder (Σ), the activation function (f) and the output (a)

Functions that describe neuron k. For more details, see the previous image
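The neuron's two defining functions can be sketched minimally as follows, assuming the usual summing junction followed by an activation function (the symbols match the figure: inputs p, weights w, adder Σ, activation f, output a; the bias b is an assumed extra term):

```python
import math

def neuron(p, w, b, f=math.tanh):
    """Artificial neuron: adder n = sum_i w_i * p_i + b, output a = f(n)."""
    n = sum(wi * pi for wi, pi in zip(w, p)) + b   # the adder (Σ)
    return f(n)                                     # the activation function (f)

a = neuron(p=[1.0, 0.5], w=[0.2, -0.4], b=0.1, f=math.tanh)
```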

A: Linear

B: Hard Limit

Γ: Log-Sigmoid

Δ: Hyperbolic Tangent Sigmoid
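The four activation functions labelled in the figure can be written out directly (standard definitions; the hard limit's behaviour exactly at n = 0 is a convention):

```python
import math

def linear(n):               # A: Linear
    return n

def hard_limit(n):           # B: Hard Limit (step function)
    return 1.0 if n >= 0 else 0.0

def log_sigmoid(n):          # Γ: Log-Sigmoid, output in (0, 1)
    return 1.0 / (1.0 + math.exp(-n))

def tan_sigmoid(n):          # Δ: Hyperbolic Tangent Sigmoid, output in (-1, 1)
    return math.tanh(n)
```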

Elementary TDNN with m inputs and k delays for each input.
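The delay line of an elementary TDNN can be sketched as a sliding window over the time series: for k delays, each input pattern consists of a sample together with its k delayed copies (a hypothetical helper for a single input channel):

```python
def delay_windows(series, k):
    """Build TDNN input patterns: each pattern holds one sample plus
    its k delayed copies, i.e. a window of length k + 1."""
    return [series[t:t + k + 1] for t in range(len(series) - k)]

windows = delay_windows([1, 2, 3, 4, 5], k=2)
```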

An example of a Time Delay Neural Network in time-series prediction.

The network is able to predict the future values of a time series when appropriately trained.

After each prediction the inputs are shifted and the prediction is fed back to the network.

See Steps 2, 3 & 101 for more details

An example of a Time Delay Neural Network in time-series prediction.

The network is able to predict the future values of a time series when appropriately trained.

After each prediction the inputs are shifted and the prediction is fed back to the network.

See Steps 1, 3 & 101 for more details

An example of a Time Delay Neural Network in time-series prediction.

The network is able to predict the future values of a time series when appropriately trained.

After each prediction the inputs are shifted and the prediction is fed back to the network.

See Steps 1, 2 & 101 for more details

An example of a Time Delay Neural Network in time-series prediction.

The network is able to predict the future values of a time series when appropriately trained.

After each prediction the inputs are shifted and the prediction is fed back to the network.

See Steps 1, 2 & 3 for more details
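The shift-and-re-feed loop shown in the steps above can be sketched as follows; `net` stands in for any trained one-step predictor (hypothetical), and the window length of 3 is arbitrary:

```python
def predict_ahead(net, window, steps):
    """Iterated prediction: after each prediction the input window is
    shifted by one and the prediction is fed back as the newest input."""
    window = list(window)
    preds = []
    for _ in range(steps):
        y = net(window)            # one-step prediction from the current window
        preds.append(y)
        window = window[1:] + [y]  # shift left, re-feed the prediction
    return preds

# toy 'network': predicts the mean of its window
mean_net = lambda w: sum(w) / len(w)
future = predict_ahead(mean_net, [1.0, 2.0, 3.0], steps=3)
```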

This technique is widely used in almost all fields of science. It is employed...

...whenever the data size is relatively small

...as a means of validating results

1st column:

Building appropriate matrices for each data group.

2nd column:

Building and training networks with the cross-validation technique.

i.e. for 30 subjects there would be 30 different networks.

3rd column:

Part of each subject's data is fed to the appropriate network of each group.

The subject is classified to a group according to the Mean Square Error.
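The leave-one-out scheme described in the columns above can be sketched as follows: for N subjects, N networks, each trained with one subject held out and then tested on that subject. Here `train` and `mse` are hypothetical stand-ins for the actual training and error routines:

```python
def leave_one_out(subjects, train, mse):
    """Leave-one-out cross-validation: for each subject, train a network
    on the remaining subjects and measure the error on the held-out one."""
    errors = []
    for i, held_out in enumerate(subjects):
        training_set = subjects[:i] + subjects[i + 1:]
        net = train(training_set)        # one network per held-out subject
        errors.append(mse(net, held_out))
    return errors

# toy stand-ins: the 'network' is just the training-set mean,
# and the error is the squared distance of the held-out value from it
toy_train = lambda data: sum(data) / len(data)
toy_mse = lambda net, x: (x - net) ** 2
errs = leave_one_out([1.0, 2.0, 3.0], toy_train, toy_mse)
```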

Graphical representation of training.

The blue line shows the output error of the network when fed with the training data, in relation to the number of training epochs.

The green line shows the output error of the network when fed with the validation data (data used to avoid overtraining the network on a specific data set).
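The validation curve's role can be sketched as a simple early-stopping rule, assuming training stops once the validation error has failed to improve for a few epochs (the `patience` parameter and the error values are illustrative):

```python
def early_stopping(val_errors, patience=3):
    """Return the epoch with the lowest validation error, scanning until
    the error has failed to improve for `patience` consecutive epochs."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break                 # validation error keeps rising: stop
    return best_epoch

stop = early_stopping([0.9, 0.5, 0.3, 0.35, 0.4, 0.45])
```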

Predicting the ERP signal of an OCD subject using the Controls NN.

Ideally the Mean Square Error will be large, so that the subject will be classified correctly.

(Wait a few seconds – the picture is an animated GIF)

Predicting the ERP signal of an OCD subject using the OCD NN.

Ideally the Mean Square Error will be smaller than the MSE resulting from the Controls NN, so that the subject will be classified correctly.