More About

Gradient Descent with Adaptive Learning Rate Backpropagation

With standard steepest descent, the learning rate is held constant throughout training.
The performance of the algorithm is very sensitive to the proper setting of the learning rate.
If the learning rate is set too high, the algorithm can oscillate and become unstable. If the
learning rate is too small, the algorithm takes too long to converge. It is not practical to
determine the optimal setting for the learning rate before training, and, in fact, the optimal learning rate changes during the training process, as the algorithm moves across
the performance surface.

You can improve the performance of the steepest descent algorithm if you allow the
learning rate to change during the training process. An adaptive learning rate attempts to keep
the learning step size as large as possible while keeping learning stable. The learning rate is
made responsive to the complexity of the local error surface.

An adaptive learning rate requires some changes in the training procedure used by
traingd. First, the initial network output and error are calculated. At
each epoch, new weights and biases are calculated using the current learning rate. New outputs
and errors are then calculated.

As with momentum, if the new error exceeds the old error by more than a predefined ratio,
max_perf_inc (typically 1.04), the new weights and biases are discarded. In
addition, the learning rate is decreased (typically by multiplying by lr_dec
= 0.7). Otherwise, the new weights and biases are kept. If the new error is less than the old
error, the learning rate is increased (typically by multiplying by lr_inc =
1.05).
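
The following MATLAB-style sketch illustrates one epoch of this accept/reject logic. It is
illustrative only: X, dperf_dX, perf_old, and computePerf are hypothetical names, not
toolbox identifiers.

lr_inc = 1.05; lr_dec = 0.7; max_perf_inc = 1.04;  % typical values

X_new = X - lr*dperf_dX;              % tentative steepest-descent step
perf_new = computePerf(X_new);        % error after the tentative update (hypothetical helper)

if perf_new > max_perf_inc*perf_old   % error grew by more than the allowed ratio
    lr = lr*lr_dec;                   % discard the step and decrease the learning rate
else
    X = X_new;                        % keep the new weights and biases
    if perf_new < perf_old
        lr = lr*lr_inc;               % error decreased: increase the learning rate
    end
    perf_old = perf_new;
end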

This procedure increases the learning rate, but only to the extent that the network can
learn without large error increases. Thus, a near-optimal learning rate is obtained for the
local terrain. When a larger learning rate could result in stable learning, the learning rate
is increased. When the learning rate is too high to guarantee a decrease in error, it is
decreased until stable learning resumes.

Try the Neural Network Design demonstration nnd12vl [HDB96] for an illustration of the performance of the variable learning rate algorithm.

Backpropagation training with an adaptive learning rate is implemented with the function
traingda, which is called just like traingd, except for
the additional training parameters max_perf_inc, lr_dec,
and lr_inc. Here is how it is called to train the previous two-layer
network:
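
What follows is a minimal sketch rather than a verbatim transcript: it assumes the two-layer
network (three tansig hidden neurons, one purelin output) and the sample inputs p and targets
t used with traingd earlier, and the parameter values shown are only typical settings.

p = [-1 -1 2 2; 0 5 0 5];
t = [-1 -1 1 1];
net = feedforwardnet(3,'traingda');   % two-layer network with 3 hidden neurons
net.trainParam.lr = 0.05;             % initial learning rate
net.trainParam.lr_inc = 1.05;         % learning rate increase factor
net.trainParam.lr_dec = 0.7;          % learning rate decrease factor
net.trainParam.max_perf_inc = 1.04;   % maximum allowed error-increase ratio
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
[net,tr] = train(net,p,t);
a = net(p)                            % simulate the trained network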

Algorithms

traingda can train any network as long as its weight, net input, and
transfer functions have derivative functions.

Backpropagation is used to calculate derivatives of performance dperf
with respect to the weight and bias variables X. Each variable is adjusted
according to gradient descent:

dX = lr*dperf/dX

At each epoch, if performance decreases toward the goal, then the learning rate is
increased by the factor lr_inc. If performance increases by more than the
factor max_perf_inc, the learning rate is adjusted by the factor
lr_dec and the weight change that increased the performance is discarded.

Training stops when any of these conditions occurs (the sketch after the list shows the training parameters that control them):

The maximum number of epochs (repetitions) is reached.

The maximum amount of time is exceeded.

Performance is minimized to the goal.

The performance gradient falls below min_grad.

Validation performance has increased more than max_fail times since
the last time it decreased (when using validation).
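
Each condition corresponds to a trainParam field that you can set before calling
train. A brief sketch follows; the values shown are typical defaults, and the
exact defaults may vary by toolbox version (inspect net.trainParam to check).

net = feedforwardnet(3,'traingda');
net.trainParam.epochs   = 1000;   % maximum number of epochs
net.trainParam.time     = Inf;    % maximum training time in seconds
net.trainParam.goal     = 0;      % performance goal
net.trainParam.min_grad = 1e-5;   % minimum performance gradient
net.trainParam.max_fail = 6;      % maximum validation failures
[net,tr] = train(net,p,t);        % tr.stop records why training stopped (recent versions)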