A Deep Learning Pipeline for Image Understanding and
Acoustic Modeling

Tuesday, January 7th at 2:30pm
NYU, 715/719 Broadway, Room 1221

Abstract

One of the biggest challenges in artificial intelligence is making sense
of the real world through sensory signals such as audio or video.
Noisy inputs, varying object viewpoints, deformations, and
lighting conditions make this a high-dimensional problem
that cannot be solved efficiently without learning from data.

This thesis explores a general way of learning from high-dimensional data
(video, images, audio, text, financial data, etc.) called deep learning.
It thrives on the increasingly large amounts of available data to learn robust
and invariant internal features in a hierarchical manner, directly from
the raw signals.
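
To make the idea concrete, here is a minimal sketch (in PyTorch, an assumed
framework; the names and layer sizes are illustrative, not the thesis's actual
architecture) of a ConvNet that learns a feature hierarchy directly from raw
pixels: early layers pick up local patterns, pooling adds local invariance,
and deeper layers compose increasingly abstract features.

    import torch
    import torch.nn as nn

    class TinyConvNet(nn.Module):
        """Illustrative only: raw pixels in, class scores out,
        with no hand-engineered features in between."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, padding=2),   # low-level features
                nn.ReLU(),
                nn.MaxPool2d(2),                              # local invariance
                nn.Conv2d(16, 32, kernel_size=5, padding=2),  # mid-level features
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            h = self.features(x)                  # hierarchical feature extraction
            return self.classifier(h.flatten(1))

    # Raw 32x32 RGB input in, class scores out.
    logits = TinyConvNet()(torch.randn(1, 3, 32, 32))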

We propose a unified pipeline for feature learning, recognition, localization,
and detection using Convolutional Networks (ConvNets) that obtains
state-of-the-art accuracy on a number of pattern recognition tasks,
including acoustic modeling for speech recognition and object recognition
in computer vision. ConvNets are particularly well suited to learning from
continuous signals, in terms of both accuracy and efficiency.

Additionally, a novel and general deep learning approach to detection
is proposed and successfully demonstrated on the most challenging
vision datasets; we then generalize it to other modalities such as speech.
This approach enables accurate localization and detection of objects in images
and of phones in speech signals by learning to predict boundaries from internal
representations. We extend the reach of deep learning from classification
to detection tasks in an integrated fashion by learning multiple tasks
with a single deep model. This work is among the first to outperform human
vision and establishes a new state of the art on several computer vision
and speech recognition benchmarks.
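
As a rough illustration of this multi-task idea (a minimal sketch under
assumed names and sizes, not the model defended in the thesis), a single
ConvNet trunk can feed two heads, one scoring classes and one regressing
boundaries, so that recognition and localization are learned jointly from
shared internal representations:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DetectNet(nn.Module):
        """Illustrative only: shared trunk, two task-specific heads."""
        def __init__(self, num_classes=20):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.class_head = nn.Linear(64, num_classes)  # what is it?
            self.box_head = nn.Linear(64, 4)              # where is it? (x, y, w, h)

        def forward(self, x):
            h = self.trunk(x).flatten(1)                  # shared representation
            return self.class_head(h), self.box_head(h)

    model = DetectNet()
    scores, boxes = model(torch.randn(2, 3, 64, 64))
    # One joint loss: a classification term plus a boundary-regression term.
    loss = F.cross_entropy(scores, torch.tensor([3, 7])) \
         + F.smooth_l1_loss(boxes, torch.zeros(2, 4))
    loss.backward()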