Computer Science > Learning

Title: A Bayesian Perspective on Generalization and Stochastic Gradient Descent

Abstract: We consider two questions at the heart of machine learning: how can we
predict if a minimum will generalize to the test set, and why does stochastic
gradient descent find minima that generalize well? Our work responds to Zhang
et al. (2016), who showed that deep neural networks can easily memorize randomly
labeled training data, despite generalizing well on real labels of the same
inputs. We show that the same phenomenon occurs in small linear models. These
observations are explained by the Bayesian evidence, which penalizes sharp
minima but is invariant to model parameterization. We also demonstrate that,
when one holds the learning rate fixed, there is an optimum batch size which
maximizes the test set accuracy. We propose that the noise introduced by small
mini-batches drives the parameters towards minima whose evidence is large.
Interpreting stochastic gradient descent as a stochastic differential equation,
we identify the "noise scale" $g = \epsilon (N/B - 1) \approx \epsilon N/B$,
where $\epsilon$ is the learning rate, $N$ the training set size, and $B$
the batch size. Consequently, the optimum batch size is proportional to both the
learning rate and the size of the training set, $B_{opt} \propto \epsilon N$.
We verify these predictions empirically.
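
As a rough illustration of the quantities above, here is a minimal Python sketch (not from the paper) that computes the noise scale $g = \epsilon (N/B - 1)$, checks the approximation $\epsilon N/B$, and shows how an empirically identified optimal noise scale would translate into the predicted scaling $B_{opt} \propto \epsilon N$. The function name noise_scale and the numerical values are illustrative assumptions.

    # Illustrative sketch; the constants below are assumptions, not results
    # from the paper, which only states the proportionality B_opt ∝ eps * N.

    def noise_scale(learning_rate, train_set_size, batch_size):
        """Noise scale g = eps * (N/B - 1) of SGD viewed as an SDE."""
        return learning_rate * (train_set_size / batch_size - 1)

    eps, N, B = 0.1, 50_000, 128
    g_exact = noise_scale(eps, N, B)   # exact: eps * (N/B - 1)
    g_approx = eps * N / B             # approximation, accurate when B << N
    print(f"g = {g_exact:.2f}, approximation eps*N/B = {g_approx:.2f}")

    # Suppose the value above had been identified empirically as the optimal
    # noise scale g_opt (a hypothetical choice for this sketch). Then the
    # predicted optimal batch size B_opt = eps * N / g_opt grows linearly
    # with both the learning rate and the training set size.
    g_opt = g_exact
    for eps_new in (0.1, 0.2, 0.4):
        print(f"eps = {eps_new}: predicted B_opt ~ {eps_new * N / g_opt:.0f}")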