Artificial neural networks are most commonly trained with the
back-propagation algorithm, where the gradient for learning is provided by
back-propagating the error, layer by layer, from the output layer to the hidden
layers. A recently discovered method called feedback-alignment shows that the
weights used for propagating the error backward do not have to be symmetric with
the weights used for propagating the activation forward. In fact, random
feedback weights work equally well, because the network learns how to make the
feedback useful. In this work, the feedback alignment principle is used for
training hidden layers more independently from the rest of the network, and
from a zero initial condition. The error is propagated through fixed random
feedback connections directly from the output layer to each hidden layer. This
simple method is able to achieve zero training error even in convolutional
networks and very deep networks, completely without error back-propagation. The
method is a step towards biologically plausible machine learning because the
error signal is almost local, and no symmetric or reciprocal weights are
required. Experiments show that the test performance on MNIST and CIFAR is
almost as good as that obtained with back-propagation for fully connected
networks. If combined with dropout, the method achieves 1.45% error on the
permutation invariant MNIST task.
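
To make the described mechanism concrete, the following is a minimal sketch of training with fixed random feedback connections from the output layer directly to each hidden layer, in the spirit of the method described above. The network sizes, toy data, activation choice, and learning rate are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch: error fed back to each hidden layer through its own
# fixed random feedback matrix, instead of the transposed forward weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed for illustration): 64 samples, 20 inputs, 5 classes.
X = rng.standard_normal((64, 20))
y = np.eye(5)[rng.integers(0, 5, size=64)]

def tanh(x):
    return np.tanh(x)

def tanh_prime(x):
    return 1.0 - np.tanh(x) ** 2

def softmax(x):
    z = np.exp(x - x.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

# Two hidden layers of 100 units and an output layer of 5 units.
sizes = [20, 100, 100, 5]
W = [rng.standard_normal((sizes[i], sizes[i + 1])) * 0.05 for i in range(3)]
b = [np.zeros(sizes[i + 1]) for i in range(3)]

# Fixed random feedback matrices: one per hidden layer, mapping the
# output error (dim 5) directly back to that layer's units.
B = [rng.standard_normal((sizes[-1], sizes[i + 1])) * 0.05 for i in range(2)]

lr = 0.05
for epoch in range(200):
    # Forward pass.
    a1 = X @ W[0] + b[0];  h1 = tanh(a1)
    a2 = h1 @ W[1] + b[1]; h2 = tanh(a2)
    a3 = h2 @ W[2] + b[2]; out = softmax(a3)

    # Output error (softmax + cross-entropy gradient), averaged over the batch.
    e = (out - y) / X.shape[0]

    # Error reaches every hidden layer directly through its fixed random
    # feedback matrix; no layer-by-layer back-propagation is used.
    d2 = (e @ B[1]) * tanh_prime(a2)
    d1 = (e @ B[0]) * tanh_prime(a1)

    # Weight updates use the usual local outer products.
    W[2] -= lr * h2.T @ e;  b[2] -= lr * e.sum(axis=0)
    W[1] -= lr * h1.T @ d2; b[1] -= lr * d2.sum(axis=0)
    W[0] -= lr * X.T  @ d1; b[0] -= lr * d1.sum(axis=0)

loss = -np.mean(np.sum(y * np.log(out + 1e-9), axis=1))
print(f"final cross-entropy on toy data: {loss:.4f}")
```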