Deep Belief Networks

Deep Belief Networks (DBNs) are deep neural networks trained in a greedy, layer-wise fashion. They were introduced by Hinton et al. (2006). See this review paper for a detailed introduction to such models.

Each layer of the network models the distribution of its input, using unsupervised training of a Restricted Boltzmann Machine (RBM).
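To make the per-layer training concrete, here is a minimal sketch of one-step contrastive divergence (CD-1) for a binary RBM, written in plain NumPy. This is an illustration of the algorithm only, not PLearn code; the function name `train_rbm` and all hyper-parameter defaults are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=10):
    """Train a binary RBM with one-step contrastive divergence (CD-1).

    `data` is an (n_samples, n_visible) array of binary inputs.
    Returns the weight matrix and the visible/hidden bias vectors.
    """
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible biases
    b_h = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        for v0 in data:
            # Positive phase: sample hidden units given the data.
            p_h0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(n_hidden) < p_h0).astype(float)
            # Negative phase: one reconstruction step down and back up.
            p_v1 = sigmoid(h0 @ W.T + b_v)
            p_h1 = sigmoid(p_v1 @ W + b_h)
            # CD-1 approximation to the log-likelihood gradient.
            W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
            b_v += lr * (v0 - p_v1)
            b_h += lr * (p_h0 - p_h1)
    return W, b_v, b_h
```

Each stochastic update pushes the model's reconstruction statistics toward the data statistics; after training, the hidden-unit probabilities serve as the input to the next layer.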

For more detailed equations, and extension to Gaussian and truncated-exponential units, please see DBNEquations.

Implementation

The procedure for training a whole DBN is detailed in DBNPseudoCode.
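The greedy layer-wise scheme can be sketched as follows: train the first RBM on the data, then feed each layer's mean hidden activations upward as the training input for the next RBM. This is a self-contained NumPy illustration, not the PLearn implementation; it repeats a compact CD-1 trainer so that it runs on its own, and the names `train_rbm` and `greedy_pretrain` are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=5):
    # Minimal CD-1 training of a binary RBM (illustration only).
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            p_h0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(n_hidden) < p_h0).astype(float)
            p_v1 = sigmoid(h0 @ W.T + b_v)
            p_h1 = sigmoid(p_v1 @ W + b_h)
            W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
            b_v += lr * (v0 - p_v1)
            b_h += lr * (p_h0 - p_h1)
    return W, b_v, b_h

def greedy_pretrain(data, layer_sizes):
    """Train one RBM per layer, each on the previous layer's activations."""
    params, x = [], data
    for n_hidden in layer_sizes:
        W, b_v, b_h = train_rbm(x, n_hidden)
        params.append((W, b_v, b_h))
        x = sigmoid(x @ W + b_h)  # deterministic up-pass feeds the next layer
    return params
```

After pretraining, the stacked weights typically initialize a feed-forward network that is fine-tuned with a supervised criterion.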

It was implemented using the PLearn machine learning library. The class for a DBN is DeepBeliefNetwork; it relies on several other classes deriving from RBMLayer (RBMBinomialLayer, RBMGaussianLayer, RBMTruncExpLayer, and RBMMixedLayer) and RBMConnection (RBMMatrixConnection, RBMMixedConnection).

You will need to set the path to your data and the hyper-parameters (and possibly the network architecture), either by editing the args class at the beginning of the script or by passing the appropriate command-line arguments (see the scripts for more details).
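The pattern of defaults that command-line arguments can override might look like the following stdlib `argparse` sketch. The option names, defaults, and data path here are hypothetical; the actual scripts' args class and flags may differ.

```python
import argparse

def build_parser():
    # Hypothetical option names and defaults, for illustration only.
    p = argparse.ArgumentParser(description="Train a DBN (illustrative)")
    p.add_argument("--data", default="path/to/train_data",
                   help="path to the training data")
    p.add_argument("--hidden-sizes", type=int, nargs="+", default=[500, 500, 2000],
                   help="sizes of the hidden layers (network architecture)")
    p.add_argument("--learning-rate", type=float, default=0.01,
                   help="learning rate for layer-wise training")
    p.add_argument("--epochs", type=int, default=50,
                   help="training epochs per layer")
    return p

# Any flag not given on the command line keeps its default.
args = build_parser().parse_args(["--hidden-sizes", "300", "300",
                                  "--learning-rate", "0.05"])
```

Keeping all hyper-parameters in one parser (or one args class) makes experiments reproducible from the command line alone.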