Research projects

Deep learning

Since most artificial intelligence systems fall far
short of the human brain's ability to solve problems in
vision, speech recognition and natural language
understanding, much research has tried to draw
inspiration from the brain to design machine learning
solutions to such tasks. One obvious property of
the brain is its deep, layered connectivity, particularly
apparent in the visual
cortex. Yet, until the mid-2000s, attempts to train
artificial neural networks with several hidden layers had
mostly failed: they would generally not perform better
than shallow (single-hidden-layer) neural networks.

In
2006, Geoffrey
Hinton, Simon Osindero
and Yee Whye
Teh designed the
deep
belief network, a probabilistic neural network, along
with an efficient greedy procedure for successfully
pre-training (i.e. initializing) it. This procedure relies
on the learning algorithm of the restricted Boltzmann
machine (RBM) to train the hidden layers one at a time, in
an unsupervised fashion.
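To make the layer-wise idea concrete, here is a minimal sketch of a single contrastive-divergence (CD-1) update for an RBM, the building block that the greedy procedure stacks layer by layer. All names, dimensions and hyperparameters are illustrative assumptions, not code from the original paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.1, rng=None):
    """One CD-1 step on a batch of binary visible vectors v0.

    W: (n_visible, n_hidden) weights; b: visible biases; c: hidden biases.
    """
    rng = rng or np.random.default_rng(0)
    # Positive phase: hidden probabilities and a sample, given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: reconstruct the visibles, then recompute hidden probs.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Approximate log-likelihood gradient (data term minus model term).
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

In the greedy procedure, one RBM is trained on the data this way, then its hidden-unit probabilities serve as the "data" for the next RBM, one layer at a time, before any supervised fine-tuning.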

Ever since, I've been interested in studying the original
pre-training procedure further and in deriving new deep
learning algorithms. I give here a quick overview of
some of the work I've been doing.