Predictions are made monthly. The net input
is the difference between
the current month's data and the previous month's data.
The goal is to predict the sign of the corresponding DAX difference in the following month.
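A minimal sketch of this input/target construction (the concrete indicator series feeding the net are not specified in this section, so the arrays below are hypothetical placeholders):

import numpy as np

# Hypothetical monthly series; both arrays are placeholders.
n_months = 330                             # placeholder length
indicators = np.random.rand(n_months, 5)   # 5 input series, one row per month
dax = np.random.rand(n_months)             # monthly DAX level

# Net input: current month's data minus last month's data.
x = indicators[1:] - indicators[:-1]       # shape (n_months - 1, 5)

# Evaluation target: sign of the *next* month's DAX difference.
dax_diff = dax[1:] - dax[:-1]              # shape (n_months - 1,)
y_sign = np.sign(dax_diff[1:])             # diff one month ahead of x[t]
x = x[:-1]                                 # the last input has no next-month target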

There are 228 training examples and 100 test examples.

The target is the percentage of DAX change, scaled to the interval
[-1,1] (outliers are ignored).
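One plausible reading of this target construction, as a sketch only (the outlier criterion and the exact scaling are not given here, so the cutoff below is a placeholder and ``ignored'' is interpreted as dropping the affected examples):

import numpy as np

def scaled_targets(dax, bound=0.15):
    # Monthly percentage change of the DAX.
    pct = (dax[1:] - dax[:-1]) / dax[:-1]
    # Drop outliers: examples whose absolute change exceeds the
    # (placeholder) cutoff are ignored.
    keep = np.abs(pct) <= bound
    # Linear scaling of the remaining changes into [-1, 1].
    return pct[keep] / bound, keep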

The performance of WD and FMS is also tested
on networks ``spoiled''
by conventional backprop (``WDR'' and ``FMSR''; the ``R'' stands
for retraining).
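The retraining protocol can be sketched as follows; backprop_step and regularized_step are hypothetical stand-ins for a single training update of the respective method:

def retrain(net, data, backprop_step, regularized_step, n_spoil, n_retrain):
    # Phase 1: "spoil" the network with conventional backprop.
    for _ in range(n_spoil):
        backprop_step(net, data)
    # Phase 2: continue training with the regularized method
    # (FMS yields FMSR, weight decay yields WDR).
    for _ in range(n_retrain):
        regularized_step(net, data)
    return net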

Results are shown in
Table 3.
The average performance of our method
exceeds that of weight decay, OBS,
and conventional backprop.

Table 3 also shows
that our approach is superior
at retraining
``spoiled'' networks
(note that OBS is by nature a retraining method).
FMS led to the best improvements
in generalization performance.

Parameters:
Learning rate: 0.01.
Architecture: (5-8-1).
Training time: 20,000,000 example presentations (the 228 training examples are presented repeatedly).
Method-specific parameters:
FMS: [...]; [...]; if [...] then [...] is set to 0.001.
WD: as with FMS, but [...].
FMSR: as with FMS, but [...]; number of retraining examples: 5,000,000.
WDR: as with FMSR, but [...].
OBS: [...].
See section 5.6 for parameters common to all experiments.
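For concreteness, here is a minimal sketch of the stated configuration: a (5-8-1) feedforward net trained by gradient descent with learning rate 0.01, including a standard weight-decay penalty (lambda times the sum of squared weights). The decay strength is among the parameters lost above, so WD_LAMBDA is a placeholder, and the squared-error loss and tanh hidden units are assumptions:

import numpy as np

rng = np.random.default_rng(0)

# (5-8-1) architecture from the parameter list.
W1 = rng.normal(scale=0.1, size=(5, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1))
b2 = np.zeros(1)

LR = 0.01          # learning rate from the parameter list
WD_LAMBDA = 1e-4   # placeholder: the actual decay strength is not recoverable

def step(x, y):
    # One gradient step on 0.5 * (out - y)^2 plus weight decay.
    global W1, b1, W2, b2
    h = np.tanh(x @ W1 + b1)            # hidden layer (assumed tanh)
    out = h @ W2 + b2                   # linear output unit
    err = out - y
    dW2 = np.outer(h, err)              # backprop through the output layer
    db2 = err
    dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    dW1 = np.outer(x, dh)
    db1 = dh
    # Descent step; the gradient of lambda * sum(w^2) is 2 * lambda * w.
    W2 -= LR * (dW2 + 2 * WD_LAMBDA * W2)
    b2 -= LR * db2
    W1 -= LR * (dW1 + 2 * WD_LAMBDA * W1)
    b1 -= LR * db1
    return float(0.5 * err @ err)

A single call step(x, y) corresponds to one example presentation; the 20,000,000 presentations above would then be repeated calls cycling over the 228 training examples.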