Greedy Deep Transform Learning

Abstract:

We introduce deep transform learning – a new tool for deep learning. Deeper representations are learnt by stacking one transform after another. Learning proceeds greedily: the first layer learns a transform and features from the input training samples, and each subsequent layer uses the features (after activation) from the previous layer as its training input. Experiments compare our method against other deep representation learning tools – deep dictionary learning, the stacked denoising autoencoder, the deep belief network and PCANet (a variant of the convolutional neural network). Results show that the proposed technique outperforms all of these methods on the benchmark datasets considered (MNIST, CIFAR-10 and SVHN).
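The greedy, layer-wise procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the soft-thresholding sparsity step, the ReLU activation between layers, the square-transform closed-form update (in the style of standard transform learning), and all parameter values (`lam`, `mu`, `thresh`, number of iterations and layers) are illustrative assumptions.

```python
import numpy as np

def transform_learn(X, lam=0.1, mu=0.1, n_iter=20, thresh=0.05, seed=0):
    """Learn a square transform T and sparse features Z with T X ~ Z.

    Alternates a soft-thresholding sparse-coding step with a
    closed-form transform update (assumed form, following the
    transform learning literature).
    """
    d = X.shape[0]
    rng = np.random.default_rng(seed)
    T = rng.standard_normal((d, d))
    # Cholesky factor of X X^T + mu I is constant across iterations.
    L = np.linalg.cholesky(X @ X.T + mu * np.eye(d))
    Linv = np.linalg.inv(L)
    Z = T @ X
    for _ in range(n_iter):
        # Sparse coding step: soft-threshold the transformed samples.
        Z = T @ X
        Z = np.sign(Z) * np.maximum(np.abs(Z) - thresh, 0.0)
        # Closed-form transform update via SVD of L^{-1} X Z^T.
        U, S, Vt = np.linalg.svd(Linv @ X @ Z.T, full_matrices=False)
        T = 0.5 * Vt.T @ np.diag(S + np.sqrt(S**2 + 2.0 * lam)) @ U.T @ Linv
    return T, Z

def greedy_deep_transform(X, n_layers=2, **kwargs):
    """Greedy stacking: each layer trains on the previous layer's
    activated features, as described in the abstract."""
    transforms, feats = [], X
    for layer in range(n_layers):
        T, Z = transform_learn(feats, seed=layer, **kwargs)
        transforms.append(T)
        feats = np.maximum(Z, 0.0)  # ReLU activation before the next layer
    return transforms, feats

# Toy run on random data (16-dimensional samples, 200 of them).
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 200))
transforms, feats = greedy_deep_transform(X, n_layers=2)
```

Because each layer is solved independently of the layers above it, training requires no end-to-end backpropagation; this greedy scheme is what the abstract contrasts with jointly trained networks.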