Recent advances in Deep Learning algorithms have triggered a race to build ever larger neural networks to process an ever-growing amount of training data. The successful application of larger models to many problems turns the focus to the computational feasibility of neural network training. Working on terabytes of data and moving from single-host multi-GPU setups to distributed compute clusters, Deep Learning is currently entering the HPC domain. This day will give an overview of current developments in Deep Learning, how it can profit from established HPC methods, and what new challenges it will bring to HPC.