Fundamentals of Machine Learning

PD Dr. Ullrich Köthe, WS 2017/18

This lecture belongs to the Master in Physics program (specialisation Computational Physics, code "MVSpec") and the Master of Applied Informatics program (code "IFML"), but is also open to students of Scientific Computing and anyone interested.

Summary:

Machine learning is one of the most promising approaches to address difficult decision and regression problems under uncertainty. The general idea is very simple: instead of modeling a solution explicitly, a domain expert provides example data that demonstrate the desired behavior on representative problem instances. A suitable machine learning algorithm is then trained on these examples to reproduce the expert's solutions as well as possible and to generalize them to new, unseen data. The last two decades have seen tremendous progress towards ever more powerful algorithms, and the course will cover the fundamental ideas from this field.

Three ways to derive PCA:
(i) minimize the squared difference between the kernel matrices in the original and reduced feature space,
(ii) find uncorrelated features that best explain the data variability,
(iii) minimize the reconstruction error incurred when mapping from the reduced space back to the original space.
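The equivalence of derivations (ii) and (iii) can be checked numerically: the eigenvectors of the data covariance (the uncorrelated, maximal-variance directions) also give the linear projection with the smallest reconstruction error. A minimal NumPy sketch on synthetic toy data (the data and dimensions are illustrative assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: 200 samples in 5 dimensions with correlated features
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
Xc = X - X.mean(axis=0)                  # center the data

# Derivation (ii): uncorrelated directions of maximal variance
# = eigenvectors of the covariance matrix, sorted by eigenvalue.
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:2]]                # top-2 principal directions

Z = Xc @ W                               # reduced features
# off-diagonal covariance of Z is ~0: the features are uncorrelated
Z_cov = Z.T @ Z / (len(Z) - 1)

# Derivation (iii): reconstruct from the reduced space and measure the error.
X_rec = Z @ W.T
pca_err = np.sum((Xc - X_rec) ** 2)

# Any other orthogonal 2-D projection reconstructs worse (Eckart-Young):
Q, _ = np.linalg.qr(rng.normal(size=(5, 2)))
rand_err = np.sum((Xc - Xc @ Q @ Q.T) ** 2)
assert pca_err <= rand_err
```

The assertion holds because the principal subspace is, by construction, the rank-2 projection minimizing the squared reconstruction error, which is exactly the link between derivations (ii) and (iii).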