Most conventional supervised learning
algorithms for multi-layer neural networks
are not local in space and time.
Backprop, for instance, requires a global control
mechanism that first propagates activation signals
forward through all successive layers, then waits until
the error signals have come back, and only then changes the weights.
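To make this non-locality concrete, here is a minimal NumPy sketch of that global two-phase control flow (the network sizes, data, and learning rate are invented for the example; this is a generic backprop illustration, not the notation of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny two-layer net: input -> hidden -> output.
W1 = rng.standard_normal((3, 4))   # input-to-hidden weights
W2 = rng.standard_normal((4, 2))   # hidden-to-output weights

x = rng.standard_normal(3)         # one input pattern
t = np.array([0.0, 1.0])           # its target
lr = 0.1                           # learning rate (arbitrary)

# Phase 1: a global controller propagates activations forward
# through ALL successive layers.
h = sigmoid(x @ W1)                # hidden activations
y = sigmoid(h @ W2)                # output activations

# Phase 2: it then waits until the error signals come back
# through the same layers, in reverse order.
delta_out = (y - t) * y * (1.0 - y)              # output-layer error
delta_hid = (delta_out @ W2.T) * h * (1.0 - h)   # hidden-layer error

# Phase 3: only now are the weights changed; each update uses
# information computed far away in the net.
W2 -= lr * np.outer(h, delta_out)
W1 -= lr * np.outer(x, delta_hid)
```

Every weight update in the final phase depends on quantities computed in distant layers, which is exactly what a local rule avoids.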
Many suspect, however, that the brain does
use an entirely local algorithm.
One advantage of truly local algorithms
is that their parallel implementation
is trivial.
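For contrast, a purely local rule, plain Hebbian learning for instance (shown here only as a generic illustration, not as the method of [5]), updates each weight from quantities available at that synapse alone, so all updates are independent and can run in parallel:

```python
import numpy as np

rng = np.random.default_rng(1)

pre = rng.standard_normal(4)       # pre-synaptic activities
post = rng.standard_normal(2)      # post-synaptic activities
W = rng.standard_normal((4, 2))    # one weight per synapse
lr = 0.01                          # learning rate (arbitrary)

# Element (i, j) of the update depends only on pre[i] and post[j],
# i.e. on signals available at synapse (i, j) itself, so all
# weight changes can be computed simultaneously.
W += lr * np.outer(pre, post)
```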
The method described below
is designed to be entirely local in space and time while
still handling hidden units and
non-linearities [5].
See [4] for an alternative local approach.