The goal of this thesis is to extend the analysis and the scope of first-order methods for smooth convex optimization. We consider three challenging difficulties: inexact first-order information, lack of smoothness, and the presence of linear constraints.
We show that, when used with inexact information, the Gradient Method (GM) is slow but robust, whereas the Fast Gradient Method (FGM) is fast but sensitive to errors. This trade-off between speed and sensitivity to errors is unavoidable: the faster a first-order method is, the worse its robustness must be. Between these two extremes, we develop a novel scheme, the Intermediate Gradient Method (IGM), which seeks an optimal compromise between speed and robustness and significantly accelerates the generation of accurate solutions. We also show to what extent strong convexity and stochastic first-order information can reduce the sensitivity of first-order methods to errors.
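The speed/robustness contrast can be sketched numerically. The toy problem, the noise model, and all constants below are illustrative choices, not taken from the thesis: a quadratic objective is minimized by GM (step 1/L) and by Nesterov's FGM, each receiving gradients corrupted by a perturbation of fixed norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy smooth convex problem: f(x) = 0.5 * x^T A x, with minimum f* = 0 at x* = 0.
n = 20
A = np.diag(np.linspace(1.0, 10.0, n))
L = 10.0                                # Lipschitz constant of the gradient
f = lambda x: 0.5 * x @ A @ x

def grad(x, noise=0.0):
    """Inexact oracle: true gradient plus a perturbation of norm `noise`."""
    g = A @ x
    if noise > 0.0:
        e = rng.standard_normal(n)
        g = g + noise * e / np.linalg.norm(e)
    return g

def gm(x0, iters, noise=0.0):
    """Gradient Method: O(1/k) rate, but oracle errors do not accumulate."""
    x = x0.copy()
    for _ in range(iters):
        x = x - grad(x, noise) / L
    return x

def fgm(x0, iters, noise=0.0):
    """Fast Gradient Method: O(1/k^2) rate, but oracle errors accumulate."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y, noise) / L
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

x0 = np.ones(n)
# Compare the final accuracies reached with an exact and with a noisy oracle.
print(f(gm(x0, 300)), f(fgm(x0, 300)))
print(f(gm(x0, 300, noise=0.1)), f(fgm(x0, 300, noise=0.1)))
```

With an exact oracle FGM is the faster of the two; with a noisy oracle GM settles at an error level governed by the noise, while the accumulated error of FGM can be observed to grow with the iteration count.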
When the objective function is not as smooth as desired, we show that first-order methods initially developed for smooth problems can still be applied. This result breaks down the wall between smooth and nonsmooth optimization. In particular, FGM can be seen as a universal optimal first-order method.
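A minimal sketch of this idea, on the nonsmooth function f(x) = |x| with invented constants: a subgradient of an M-Lipschitz convex function can be treated as an inexact gradient of a smooth function with effective constant L = M^2/(2*delta), so a smooth method run with step 1/L reaches accuracy of order delta.

```python
import numpy as np

M = 1.0                       # Lipschitz constant of f(x) = |x|
delta = 0.005                 # target oracle accuracy (illustrative choice)
L = M * M / (2.0 * delta)     # "effective" smoothness constant, here L = 100

# Plain Gradient Method applied to the nonsmooth f, feeding it a subgradient
# (np.sign(x) is a subgradient of |x|) as if it were an inexact gradient.
x = 1.0
for _ in range(200):
    x = x - np.sign(x) / L

print(abs(x))                 # final error of order delta
```

The step size comes from the smooth machinery, yet the iterate ends up within roughly delta of the nonsmooth minimizer, which is the sense in which the smooth/nonsmooth wall disappears.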
When linear constraints prevent the use of usual first-order methods, we propose a new approach, the double smoothing technique. We dualize the linear constraints, transform the dual function into a smooth strongly convex function and apply FGM. This technique efficiently generates nearly optimal and feasible primal solutions with accuracy guarantees.
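The mechanics of the dual approach can be sketched on a toy equality-constrained problem; the instance, the smoothing parameter kappa, and the iteration count below are all invented for the example. Since the primal objective chosen here is already strongly convex, the dual is already smooth; the added term kappa/2*||lam||^2 supplies the strong convexity, and FGM is run on the resulting dual while the primal point is recovered from the Lagrangian minimizer.

```python
import numpy as np

# Toy instance of: minimize 0.5*||x||^2  subject to  A x = b.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])

kappa = 1e-2                                  # strong-convexity smoothing of the dual
L_dual = np.linalg.norm(A, 2) ** 2 + kappa    # Lipschitz constant of the dual gradient

def x_of(lam):
    # Minimizer over x of the Lagrangian 0.5*||x||^2 + lam @ (A @ x - b).
    return -A.T @ lam

def grad_psi(lam):
    # Gradient of psi(lam) = -dual(lam) + kappa/2*||lam||^2, which FGM minimizes.
    return -(A @ x_of(lam) - b) + kappa * lam

# FGM on the smoothed dual, starting from lam = 0.
lam = y = np.zeros(2)
t = 1.0
for _ in range(1000):
    lam_next = y - grad_psi(y) / L_dual
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = lam_next + ((t - 1.0) / t_next) * (lam_next - lam)
    lam, t = lam_next, t_next

x = x_of(lam)
print(np.linalg.norm(A @ x - b))   # near-feasibility of the recovered primal point
print(0.5 * x @ x)                 # near-optimal primal objective value
```

For this instance the exact solution is the projection x* = (1/3, 1/3, 2/3) with objective 1/3, and the recovered primal point is both nearly feasible and nearly optimal, up to a bias controlled by kappa.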