Peter Bartlett, Elad Hazan and Alexander Rakhlin

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of a lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate regret rates between $\sqrt{T}$ and $\log T$. Furthermore, we show that the algorithm is strongly optimal. Finally, we extend our results to general norms.
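The core idea of adapting the step size to the curvature observed so far can be illustrated with a short sketch. The following is our own minimal illustration, not the paper's exact algorithm: it assumes each round supplies a gradient oracle and a curvature lower bound $H_t$, sets the step size to the inverse of the accumulated curvature when curvature has accrued, and otherwise falls back to a $1/\sqrt{t}$ schedule (the fallback schedule and all names here are our assumptions).

```python
import numpy as np

def adaptive_ogd(grad_fns, curvatures, x0, radius=1.0):
    """Illustrative curvature-adaptive online gradient descent (sketch).

    At round t we receive the gradient of the loss f_t and a lower
    bound H_t on its second derivative.  The step size is
    1 / (H_1 + ... + H_t) once curvature has accumulated, which yields
    logarithmic regret for strongly convex losses; with no curvature
    (H_t = 0, e.g. linear losses) it falls back to a 1/sqrt(t) schedule,
    recovering the sqrt(T)-type guarantee.
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    cum_curv = 0.0
    for t, (grad, H) in enumerate(zip(grad_fns, curvatures), start=1):
        cum_curv += H
        eta = 1.0 / cum_curv if cum_curv > 0 else radius / np.sqrt(t)
        x = x - eta * grad(x)
        # Project back onto the Euclidean ball of the given radius.
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)
        iterates.append(x.copy())
    return iterates

# Example: strongly convex losses f_t(x) = (x - 0.25)^2, so H_t = 2.
grads = [lambda x: 2.0 * (x - 0.25) for _ in range(50)]
curvs = [2.0] * 50
its = adaptive_ogd(grads, curvs, np.array([1.0]))
```

With constant curvature the step sizes decay as $1/(2t)$, so the iterates home in on the common minimizer; with all-zero curvatures the same code runs as standard projected online gradient descent.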

BibTeX citation:

@techreport{Bartlett:EECS-2007-82,
Author = {Bartlett, Peter and Hazan, Elad and Rakhlin, Alexander},
Title = {Adaptive Online Gradient Descent},
Institution = {EECS Department, University of California, Berkeley},
Year = {2007},
Month = {Jun},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-82.html},
Number = {UCB/EECS-2007-82},
Abstract = {We study the rates of growth of the regret in online convex
optimization. First, we show that a simple extension of the
algorithm of Hazan et al. eliminates the need
for a priori knowledge of the lower bound on the second
derivatives of the observed functions. We then provide
an algorithm, Adaptive Online Gradient Descent, which
interpolates between the results of Zinkevich
for linear functions and of Hazan et al. for strongly convex
functions, achieving intermediate rates between $\sqrt{T}$
and $\log T$. Furthermore, we show strong optimality of
the algorithm. Finally, we provide an extension of our results to general norms.}
}