We study two rates of convergence of AdaBoost: the first with respect to an arbitrary solution, and the second with respect to the optimal solution.
In the first case, we achieve a rate that depends only on the approximation parameter and some notion of complexity of the reference solution.
Further, we show that the dependence on both parameters is polynomial, thereby affirmatively answering a conjecture posed by Schapire at COLT 2010.
In the second case, we show a rate of convergence that has optimal dependence on the approximation parameter.
The two results require different techniques, and do not follow from each other.
Unlike previous work, our rates hold without any assumptions, such as weak learnability, finiteness of the optimal solution, or compactness of the underlying space.
We also study the constants in our bounds and show that, in certain worst-case situations, their values may be very large.
Finally, we construct lower-bound instances showing that our rates are nearly tight.
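
For reference, below is a minimal sketch of the AdaBoost updates whose convergence rates are being studied. This is the standard exponential-loss formulation; the decision-stump weak learner, the number of rounds, and all variable names are illustrative choices, not details taken from the paper.

```python
# Minimal AdaBoost sketch (standard exponential-loss formulation).
# The decision-stump weak learner and all names here are illustrative.
import numpy as np

def stump_predict(X, feature, threshold, polarity):
    """Weak hypothesis: a decision stump returning +/-1."""
    return polarity * np.where(X[:, feature] <= threshold, 1.0, -1.0)

def best_stump(X, y, w):
    """Pick the stump with the smallest weighted error under weights w."""
    n, d = X.shape
    best, best_err = None, np.inf
    for feature in range(d):
        for threshold in np.unique(X[:, feature]):
            for polarity in (+1.0, -1.0):
                pred = stump_predict(X, feature, threshold, polarity)
                err = np.sum(w * (pred != y))
                if err < best_err:
                    best_err, best = err, (feature, threshold, polarity)
    return best, best_err

def adaboost(X, y, T=100):
    """Run T rounds of AdaBoost; labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                     # distribution over examples
    ensemble = []                               # list of (alpha, stump)
    for _ in range(T):
        stump, err = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of the weak hypothesis
        pred = stump_predict(X, *stump)
        w *= np.exp(-alpha * y * pred)          # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all weak hypotheses."""
    score = sum(alpha * stump_predict(X, *stump) for alpha, stump in ensemble)
    return np.sign(score)
```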

We develop a broad and general framework within which we identify the correct weak learning conditions for multiclass boosting, and design boosting algorithms that are the most effective, in a game-theoretic sense, under these conditions.

We prove that the improvement of collapsed mean-field inference (also known as collapsed variational inference) over ordinary mean-field inference for LDA decays inversely with the length of the documents. Our work suggests using collapsed inference for short texts, and the more efficient ordinary mean-field inference for longer documents.
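
To make the comparison concrete, here is a sketch of the two per-token updates being contrasted, written in the standard LDA notation. The collapsed update shown is the common zero-order (CVB0-style) approximation; the hyperparameters alpha and beta and all variable names are illustrative and not taken from the paper.

```python
# Sketch of the two per-token LDA updates being compared (illustrative only).
import numpy as np
from scipy.special import digamma

def mean_field_update(gamma_d, lam, word):
    """Ordinary mean-field: token-topic responsibilities computed from the
    variational Dirichlet parameters gamma_d (document) and lam (topic-word)."""
    log_phi = digamma(gamma_d) + digamma(lam[:, word]) - digamma(lam.sum(axis=1))
    phi = np.exp(log_phi - log_phi.max())
    return phi / phi.sum()

def collapsed_update(n_dk, n_kw, n_k, word, alpha, beta, V):
    """Collapsed (CVB0-style) update: topic proportions and topics are
    integrated out; the counts exclude the token currently being updated."""
    phi = (n_dk + alpha) * (n_kw[:, word] + beta) / (n_k + V * beta)
    return phi / phi.sum()
```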

Complex experts predict more accurately, but are also harder to learn from. Learning from binary experts has been studied thoroughly; we provide a master strategy achieving tight, optimal regret bounds against the more powerful class of continuous/random experts.
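
For orientation, the following is the classical exponential-weights (Hedge) master strategy for prediction with expert advice, shown only as the standard baseline in this setting; the learning rate and loss range are illustrative, and this is not the strategy proposed in the work summarized above.

```python
# Classical exponential-weights (Hedge) baseline for learning from experts.
import numpy as np

def hedge(expert_losses, eta=0.5):
    """expert_losses: (T, N) array of losses in [0, 1] for N experts over T rounds.
    Returns the master's expected loss per round and the final expert weights."""
    T, N = expert_losses.shape
    log_w = np.zeros(N)                              # uniform initial weights (log space)
    master_losses = []
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                                 # distribution over experts
        master_losses.append(p @ expert_losses[t])   # expected loss of the mixture
        log_w -= eta * expert_losses[t]              # multiplicative-weights update
    w = np.exp(log_w - log_w.max())
    return np.array(master_losses), w / w.sum()
```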