Of course, the same framework can be applied to other common datasets, including those from Kaggle competitions. Just out of curiosity, I was looking for a free MNIST dataset, and fortunately I found that Kaggle provides one, as shown below.

I know a Convolutional NN (ConvNet or CNN) works better than a Deep Belief Net for this kind of 2D image classification task... there are some well-known and well-established libraries such as Caffe, CUDA-ConvNet, Torch7, etc., but they would take a little more effort for (lazy) me to set up. So here I ran a quick trial on the MNIST dataset with h2o.deeplearning to check its performance.

MNIST dataset from Kaggle

Our first mission here is just to try h2o.deeplearning briefly, so let's split the data into training and test sets. The MNIST dataset has 10 classes in its dependent variable, and we should split it in a stratified way so that all 10 classes stay balanced between the two sets.
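The stratified split described above can be sketched as follows. This is a minimal standard-library example, not the exact preprocessing I used: the function name `stratified_split`, the `label_of` accessor, and the toy data are all illustrative assumptions; in practice you would load the Kaggle CSV, split it this way, and write the two parts out as "prac_train.csv" and "prac_test.csv".

```python
import random
from collections import defaultdict

def stratified_split(rows, label_of, test_frac=0.2, seed=42):
    """Split rows into train/test sets while keeping each class's
    share roughly equal in both sets (stratified sampling)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for row in rows:
        by_class[label_of(row)].append(row)
    train, test = [], []
    for members in by_class.values():
        rng.shuffle(members)
        n_test = round(len(members) * test_frac)
        test.extend(members[:n_test])
        train.extend(members[n_test:])
    return train, test

# Toy demonstration: 10 digit classes of 100 rows each, like MNIST labels.
rows = [(digit, i) for digit in range(10) for i in range(100)]
train, test = stratified_split(rows, label_of=lambda r: r[0], test_frac=0.2)
# Each class contributes 80 rows to train and 20 to test.
```

Shuffling within each class before slicing keeps the split random while guaranteeing every digit is represented in the same proportion in both files.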

Now we have a customized dataset in the files "prac_train.csv" and "prac_test.csv". By the way, if you'd rather not prepare the dataset yourself, I have uploaded both files to my GitHub repository, so you can get them from there.

But I suspect there must be a more efficient set of parameters for h2o.deeplearning... although Candel's settings may already be the best. Does anybody know a better configuration for h2o.deeplearning? Please help me!!!
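Since I don't know the best setting either, one pragmatic way to look for one is a small grid search over candidate parameters. Below is a minimal sketch of enumerating such a grid: the dictionary keys mirror h2o.deeplearning arguments (hidden, epochs, activation), but the specific candidate values are assumptions for illustration, not recommendations.

```python
from itertools import product

# Candidate values are illustrative assumptions; the keys mirror
# h2o.deeplearning arguments (hidden layer sizes, epochs, activation).
grid = {
    "hidden": [[200, 200], [512, 512, 512], [1024, 1024]],
    "epochs": [10, 50],
    "activation": ["Rectifier", "RectifierWithDropout"],
}

def expand_grid(grid):
    """Enumerate every combination of the candidate parameter values."""
    keys = list(grid)
    return [dict(zip(keys, values))
            for values in product(*(grid[k] for k in keys))]

candidates = expand_grid(grid)
# 3 * 2 * 2 = 12 parameter settings; each would be passed to
# h2o.deeplearning and scored on the held-out test set.
print(len(candidates))  # → 12
```

Each resulting dictionary would then be fed to one h2o.deeplearning run, keeping whichever setting scores best on "prac_test.csv".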