ANNiML

Presentation

The code is mainly inspired by the following book
[Bishop96]:
"Neural Networks for Pattern Recognition", Christopher M. Bishop,
Oxford University Press, 1996. ISBN 0198538642.

ANNiML can be used for regression (interpolation of an
unknown function) or classification (assigning an input vector to a
class). The network is trained on patterns provided as input to the
program.

A neural network can be seen as a function F of an input vector x.
It computes an output y = F(x, w_0, w_1, w_2, ..., w_p), where the w_i
are the weights assigned to the connections between the units.
The biases of the network are treated as additional connections
from bias units with a constant output value of 1.
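
As an illustration (this is the standard formula for a feed-forward network
with one hidden layer, see [Bishop96], not something specific to ANNiML),
the k-th output can be written

y_k = g_out( sum_j w_kj * g( sum_i w_ji * x_i + w_j0 ) + w_k0 )

where g and g_out are the activation functions of the hidden and output
units, and the biases w_j0 and w_k0 are the weights of the connections
coming from the bias units.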

The weights must be tuned by training the neural network on a set of
patterns (x,t), where t is the target vector, so as to minimize an
error depending on the difference between the output y and the target t.
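
For regression, a typical choice (see [Bishop96]) is the sum-of-squares error
over the N training patterns:

E(w) = 1/2 * sum_n || y(x_n, w) - t_n ||^2        (n = 1..N)

The error function actually minimized is chosen by the user; see the
Functions section below.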

To do that, we need to

compute the gradient of the error (the partial derivatives of the error with
respect to the weights), if this gradient is used by the minimization method,

find the weight vector w* which minimizes the error on the training set.

Currently, the error gradient is computed by backpropagation of the error
through the network, and the following optimization methods can be used
(the gradient descent update rules are recalled after this list):

simple gradient descent with constant step eta,

gradient descent with step eta and momentum mu,

BFGS quasi-Newton minimization.
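
For reference, the corresponding update rules are the standard ones:

simple gradient descent:   w(tau+1) = w(tau) - eta * dE/dw

with momentum:             delta_w(tau) = - eta * dE/dw + mu * delta_w(tau-1)
                           w(tau+1) = w(tau) + delta_w(tau)

where dE/dw is the error gradient computed by backpropagation. BFGS instead
builds an approximation of the inverse Hessian from successive gradients and
uses it to choose the search direction (see [Bishop96]).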

An important issue with neural networks is avoiding overfitting,
where the network fits the training data perfectly but is
unable to generalize to fresh inputs. One should be careful to have a
sufficiently large training set (much larger than the number of
weights), and to choose the most adequate training method. When using
gradient methods, the one with a momentum term should do better
at avoiding overfitting. BFGS with weight decay (or another
kind of regularization) would be better than plain BFGS, but
weight decay has not been implemented yet.
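
Weight decay, in its usual form (see [Bishop96]), would consist in minimizing
a penalized error

E~(w) = E(w) + (nu/2) * sum_i w_i^2

where the coefficient nu > 0 controls the amount of regularization; again,
this is not implemented yet.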

The training methods iteratively search the weight space, and
stop either when the error is small enough (below an absolute tolerance),
when it cannot be improved any further (relative tolerance), or when a maximum
number of iterations is reached.
For minimization methods with a constant step (gradient descent), it may be
better not to use the relative tolerance (use reltol=0), as the
step may accidentally lead to a point where the relative improvement of the
objective function is less than reltol, stopping the training too early.
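
In other words, writing E(tau) for the error at iteration tau, the training
stops as soon as one of the following holds (this is a formalization of the
criteria described above; the exact tests in the code may differ slightly):

E(tau) < abstol
( E(tau-1) - E(tau) ) / E(tau-1) < reltol
tau >= maximum number of iterations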

The stopping criterion is also important for avoiding overfitting:
the training should stop early enough that the network does not overfit
the training data at the expense of generalization to fresh input data.

As only local optimization methods are used, the results depend strongly
on the initial weights, which are chosen randomly before training starts.
Several runs with different initial weights should be performed.

Availability

ANNiML is not in the public domain yet.
The source code is under CVS.

Installation

Retrieve the MATH and ANNiML modules from CVS:

> cvs checkout MATH
> cvs checkout ANNiML

Compile the MATH libraries:

> cd MATH
> make

Go to the ANNiML directory and compile:

> cd ../ANNiML
> make

Usage

anniml [layers] -train <fpatterns> [other options]

or

anniml -run <input_vector> [other options]

or

anniml -test <fpatterns> [other options]

or

anniml -predict <finputs> [other options]

Options

Main options

-train <fpatterns> train the network on a patterns file.

-test <fpatterns> test the trained network on a patterns file.

-run <input_vector> run the trained network on a new input vector passed as argument.

-predict <finputs> make predictions on a new input data file.
If the -p option is not used, results are saved by default in a new file
with a .pred extension.

Printing options

Other options

-rand seed for the random number generator (default 0).

-help display this list of options.

--help display this list of options.

Patterns

Each line of the patterns file must contain first an input vector
x, and then a target vector t.
Be careful to use a network topology that is consistent with your
patterns file (the dimension of x must equal the number of input
units, and the dimension of the target vector t must equal the number
of output units).
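
For example, for a hypothetical network with 2 input units and 1 output unit,
a line of the patterns file would contain 3 numbers, for instance

0.25  1.30  0.87

i.e. x = (0.25, 1.30) and t = (0.87). The exact file layout is not spelled out
on this page; see the files in the examples/ directory for the actual format.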

There is an option (-c) that allows you to select the columns you actually
want to use from your patterns file.

When performing classification, the dimension of your target vector
must be equal to the number of classes. For example, if you have
three classes, then

t= (1,0,0) will mean that the vector x belongs to class 0,

t= (0,1,0) will mean that the vector x belongs to class 1,

t= (0,0,1) will mean that the vector x belongs to class 2.

In this case, the network's output will be a vector y = (y_0, y_1, y_2),
where y_i can be seen as the probability that x belongs to class i.
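
For example, an output y = (0.7, 0.2, 0.1) (values chosen for illustration
only) would be read as: x belongs to class 0 with probability 0.7, to class 1
with probability 0.2, and to class 2 with probability 0.1.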

As already said at the beginning of this page, you must have a
sufficiently large number of patterns (compared to the number of
weights in the network) if you want to avoid overfitting (see
[Bishop96]).

It is recommended to use only part of your data to train the neural
network, and to test the trained network on the rest of the data.
Overfitting can be observed when the network fits the training data very
well but performs badly on the test data.

Ideally, cross-validation would be even better: only part
of the training data is actually used to train the network, and the rest
(the validation set) is used to evaluate its performance and to stop
training before overfitting arises.
Several different splits into training and validation sets should be tried,
as well as several random initial values for the weights, before selecting
the network with the best average performance.
Cross-validation is not implemented yet.

Be careful to normalize the inputs of the neural network before
training it.
There is a -norm option to do that: it subtracts the average value and
divides each input by the standard deviation.
However, it operates only on the patterns file given as argument on the
command line, so your training set and your test set may not be normalized
in the same way.
It is better to pre-process your initial data set and normalize the
inputs before splitting it into a training set and a test set.
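
In formula form, normalization replaces each input component x_i by

x_i' = (x_i - mean_i) / std_i

where mean_i and std_i are the mean and standard deviation of component i
computed over the data being normalized (over the whole data set if you
pre-process it yourself, or only over the patterns file given on the command
line if you use -norm, which is precisely why the training and test sets may
end up normalized differently).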

A few examples of patterns files can be found in the examples/ directory.
See module Ann_patterns for the functions that read, write, and
normalize patterns.

Functions

When using anniml, the user must choose the error function minimized during
training, as well as the transfer (activation) functions of the different units.

For the hidden units, one possible choice is the logistic activation function;
another possible choice is the hyperbolic tangent (tanh).
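
For reference, these two activation functions are

logistic:  g(a) = 1 / (1 + exp(-a))                         with values in (0,1)
tanh:      g(a) = (exp(a) - exp(-a)) / (exp(a) + exp(-a))   with values in (-1,1)

where a is the weighted sum of the unit's inputs.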

In order to perform classification with the network, you should use
cross-entropy as the error function, and the softmax function for
the output layer. You can use either logistic or tanh activation functions
for the hidden layers.
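
For reference, if a_k denotes the weighted input of output unit k, the softmax
function is

y_k = exp(a_k) / sum_j exp(a_j)

so that the outputs y_k are positive and sum to 1, and the cross-entropy error
over the training patterns is

E(w) = - sum_n sum_k t_nk * ln( y_k(x_n, w) )

with t_nk the k-th component of the target vector of pattern n.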

Examples

Non-linear regression

As an illustration, here is a regression on noisy data that was produced
using the following function:
y(x) = 0.5 + 0.4 sin(2*pi*x)
to which Gaussian noise (standard deviation 0.05) was added.

Train a network with one hidden layer (1 input unit, 5 hidden units,
and 1 unit in the output layer) on the patterns file 'sinus_train.pat',
using simple gradient descent with step 0.6, at most 20000 iterations,
and the default absolute tolerance 1E-7.
The resulting weights are saved in 'examples/sinus_train.wts'.
The network's topology (fully connected) is saved in
'examples/sinus.net'.
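
The exact command line for this example is not reproduced on this page.
Based on the Usage section above, and assuming that the [layers] argument
lists the number of units per layer and that the step, iteration limit,
tolerance and output files are set by options not documented here, the
invocation would have roughly the following shape (everything between square
brackets is a hypothetical placeholder):

> anniml 1 5 1 -train examples/sinus_train.pat [step, iteration and output options]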

Example of classification

Train the network for classification, with a cross-entropy error function and
a softmax transfer function for the output units. For a change, we choose the
hyperbolic tangent activation function for the hidden units.