Condensed Matter > Disordered Systems and Neural Networks

Title:
Why does deep and cheap learning work so well?

Abstract: We show how the success of deep learning could depend not only on mathematics
but also on physics: although well-known mathematical theorems guarantee that
neural networks can approximate arbitrary functions well, the class of
functions of practical interest can frequently be approximated through "cheap
learning" with exponentially fewer parameters than generic ones. We explore how
properties frequently encountered in physics such as symmetry, locality,
compositionality, and polynomial log-probability translate into exceptionally
simple neural networks. We further argue that when the statistical process
generating the data is of a certain hierarchical form prevalent in physics and
machine learning, a deep neural network can be more efficient than a shallow
one. We formalize these claims using information theory and discuss the
relation to the renormalization group. We prove various "no-flattening
theorems" showing when efficient linear deep networks cannot be accurately
approximated by shallow ones without efficiency loss; for example, we show that
$n$ variables cannot be multiplied using fewer than $2^n$ neurons in a single
hidden layer.
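
As an illustrative sketch (not text from the abstract itself), the $n=2$ case of the multiplication construction can be seen by Taylor-expanding a smooth activation $\sigma$ with $\sigma''(0) \neq 0$, using $2^2 = 4$ hidden neurons:

$$ xy \;=\; \lim_{\epsilon \to 0} \frac{\sigma\big(\epsilon(x+y)\big) + \sigma\big(-\epsilon(x+y)\big) - \sigma\big(\epsilon(x-y)\big) - \sigma\big(-\epsilon(x-y)\big)}{4\,\epsilon^{2}\,\sigma''(0)}, $$

since each $\pm$ pair sums to $2\sigma(0) + \sigma''(0)\epsilon^{2}(x \pm y)^{2} + O(\epsilon^{4})$ and $(x+y)^{2} - (x-y)^{2} = 4xy$. Generalizing to $n$ inputs uses $2^n$ such neurons, matching the lower bound stated above.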