In practice, convex optimization tasks often appear in a form that depends on an additional parameter, such as a regularization parameter in many machine learning applications. Our continuity approach yields solutions with an approximation guarantee over the entire range of this (regularization) parameter. The result is a piecewise constant solution path, often called the regularization path in the machine learning community. Applications include support vector machines, multiple kernel learning, nuclear norm regularized problems, matrix completion, and various other parameterized optimization problems.
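The thesis and papers give the precise construction; as a toy illustration of the underlying idea (keep the current solution fixed while its suboptimality for the new parameter value stays below a tolerance ε, and re-optimize only when it exceeds ε, which yields a piecewise constant path), here is a hedged sketch on a one-dimensional lasso-type problem where the exact minimizer is given by soft-thresholding. The function names and the simple gap-tracking rule are illustrative simplifications of my own, not the algorithm from the paper.

```python
import numpy as np

def soft_threshold(a, lam):
    # Exact minimizer of f(x) = 0.5*(x - a)**2 + lam*|x|.
    return np.sign(a) * max(abs(a) - lam, 0.0)

def objective(x, a, lam):
    return 0.5 * (x - a) ** 2 + lam * abs(x)

def approx_path(a, lambdas, eps):
    """Piecewise constant eps-approximate solution path over a grid of
    regularization values: the current x is reused as long as its
    suboptimality stays below eps, so the path has few breakpoints."""
    path = []
    x = soft_threshold(a, lambdas[0])
    for lam in lambdas:
        # Suboptimality of the cached x at the current parameter value.
        gap = objective(x, a, lam) - objective(soft_threshold(a, lam), a, lam)
        if gap > eps:
            x = soft_threshold(a, lam)  # re-optimize only when needed
        path.append(x)
    return path

lambdas = np.linspace(0.0, 2.0, 201)
path = approx_path(2.0, lambdas, eps=0.01)
breakpoints = sum(1 for i in range(1, len(path)) if path[i] != path[i - 1])
print(breakpoints)  # far fewer pieces than the 201 grid points
```

Even on a fine grid of 201 parameter values, the ε-approximate path consists of only a handful of constant pieces, which is the point of the piecewise constant guarantee.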

An extended discussion of the approach is given in my thesis, where the applications to vector and matrix problems are explained in Chapters 6 and 7, respectively.

Update [November 2011]: A journal version of the ESA paper for the vector case has been accepted at the ACM Transactions on Algorithms and is scheduled to appear soon.

Update [March 2011]: Below are the slides of my seminar talk on this topic, generalizing the approach to matrix problems and semidefinite optimization (e.g. regularization paths for matrix factorizations).