Despite their undeniable success in practical applications, deep neural networks still lack a thorough mathematical understanding. Important questions about neural networks, for which we can hope to find mathematical bounds, include:

What classes of functions can be efficiently approximated?

How does this depend on the number of layers (i.e., the depth)?

How many samples are needed?

How many training steps are needed for specific tasks?

How sparsely connected can the network be?

Can we assign meaning to the filters learned in different layers?

Are there provably better training algorithms than standard gradient descent?

For which problem classes will neural networks not give good results?

These questions are relevant in practice if we do not want to rely on blind trust and heuristics, and if we want to close the gap between practical success and theoretical understanding. This blog serves the following purposes:

Give an easily accessible overview of the current state of mathematical understanding through our literature database and 'new-paper' announcements on our blog.

Advertise conferences and job opportunities related to the mathematics of deep learning on our blog.

Collect ideas that could lead to further research.

Highlight new publications that offer particular insight through featured articles on our blog.

If you are a researcher in this area, you are very welcome to contribute to this effort. For ways to do so, please see the contact page.