Regularization by the L1-norm has attracted a lot of interest in recent
years in statistics, machine learning and signal processing. In the
context of least-squares linear regression, the problem is usually
referred to as the Lasso [1] or basis pursuit [2]. Much of the early
effort has been dedicated to algorithms to solve the optimization
problem efficiently, either through first-order methods [3, 4], or
through homotopy methods that lead to the entire regularization path
(i.e., the set of solutions for all values of the regularization
parameter) at the cost of a single matrix inversion [5, 6]. A
well-known property of the regularization by the L1-norm is the
sparsity of the solutions, i.e., it leads to loading vectors with many
zeros, and thus performs model selection on top of regularization.
Recent work (e.g., [7, 8]) has looked precisely at the model
consistency of the Lasso, i.e., if we know that the data were generated
from a sparse loading vector, does the Lasso actually recover the
sparsity pattern when the number of observations grows? Moreover, how
many irrelevant variables can we tolerate while still being able to
correctly infer the relevant ones?
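To make the sparsity property concrete, the following is a minimal sketch of a first-order (proximal-gradient, ISTA-style) solver for the Lasso on synthetic data; the data, variable names, and regularization value are illustrative assumptions, not taken from the tutorial.

```python
import numpy as np

# Hypothetical sparse regression problem: only the first k loadings are nonzero.
rng = np.random.default_rng(0)
n, p, k = 100, 20, 3                      # samples, features, true nonzeros
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:k] = [2.0, -1.5, 1.0]             # sparse ground-truth loading vector
y = X @ w_true + 0.1 * rng.standard_normal(n)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - Xw||^2 + lam * ||w||_1 by iterative soft-thresholding."""
    n = X.shape[0]
    L = np.linalg.norm(X, 2) ** 2 / n     # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n      # gradient of the least-squares term
        w = soft_threshold(w - grad / L, lam / L)
    return w

w_hat = ista_lasso(X, y, lam=0.1)
print(np.flatnonzero(w_hat))              # indices of the selected variables
```

On such well-conditioned data the estimated loading vector is itself sparse, so the method performs variable selection as a by-product of regularization, which is the phenomenon the consistency results [7, 8] analyze.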

The objective of the tutorial is to give a unified overview of the
recent contributions of sparse convex methods to machine learning, both
in terms of theory and algorithms. The course will be divided into three
parts: in the first part, the focus will be on the regular L1-norm and
variable selection, introducing key algorithms [3, 4, 5, 6] and key
theoretical results [7, 8, 9]. Then, several more structured machine
learning problems will be discussed, on vectors (second part) and
matrices (third part), such as multi-task learning [10, 11], sparse
principal component analysis [12], multiple
kernel learning [13, 14], structured sparsity [15, 16] and sparse
coding [17]. Throughout the tutorial, applications to data from various
domains (computer vision, image processing, bioinformatics, speech
processing, recommender systems) will be considered.

Relationship with NIPS 2009 tutorial by F. Bach
The proposed tutorial will be based on the tutorial given at NIPS in
2009. Since there is more time (3 hours instead of 2 hours) and an
additional presenter, more details will be given, with more focus on
applications in data mining from different domains. Slides of the NIPS
tutorial may be found at http://www.di.ens.fr/~fbach/nips2009tutorial/.

Target audience
Sparse methods have generated a lot of new work recently and the goal
of the tutorial is to present these new advances to researchers and
graduate students with a general knowledge of machine learning. In
particular, we do not assume strong prior knowledge in convex
optimization or statistics.

Presenters

• Francis Bach is
a researcher in the Willow INRIA project-team, in the Computer Science
Department of the Ecole Normale Supérieure, Paris, France. He
graduated from the Ecole Polytechnique, Palaiseau, France, in 1997, and
earned his PhD in 2005 from the Computer Science division at the
University of California, Berkeley. His research interests include
machine learning, statistics, optimization, graphical models, kernel
methods, sparse methods and statistical signal processing. He has been
awarded a starting investigator grant from the European Research
Council in 2009. He has given tutorials at several conferences (ECCV
2008, ICCV 2009, NIPS 2009, CVPR 2010).

• Guillaume Obozinski is a
researcher in the Willow INRIA project-team, a research group
affiliated with the Computer Science department of the Ecole Normale
Supérieure, Paris, France. A former student of the Ecole Normale
Supérieure de Cachan, he earned his PhD in 2009 from the Statistics
department of the University of California at Berkeley. His research
interests include machine learning, statistics, optimization and their
applications to computer vision and computational biology.