II. Neural networks

II.1. The birth of neural networks: the Perceptron and Adaline models. Basic statistical predictive models: linear regression and logistic regression.
This session is an intuitive introduction to the problems of data analysis and to the general field of machine learning, and it defines the formal framework to be used throughout the course. The main concepts of the neural network domain will be introduced via simple algorithms developed in the sixties: the Perceptron and Adaline. These easily understandable models will allow us to introduce the notions of neural network, adaptive algorithm, and generalization. The second part of the session will be dedicated to the basic predictive models used in statistics for regression and classification: linear and logistic regression.
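To give a flavour of these early algorithms, here is a minimal sketch of the Perceptron learning rule (threshold unit, error-driven weight updates), trained on the linearly separable AND function. The function names and hyperparameters are illustrative choices, not part of the course material.

```python
# Sketch of the Perceptron learning rule: a threshold unit whose
# weights are nudged whenever it misclassifies a training example.

def predict(w, b, x):
    """Threshold activation: output 1 if w.x + b > 0, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

def train_perceptron(data, lr=0.1, epochs=20):
    """data: list of (inputs, target) pairs with targets in {0, 1}."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in data:
            err = t - predict(w, b, x)  # -1, 0, or +1
            # Update only on mistakes, in the direction of the input.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the Perceptron rule converges on it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

Adaline differs only in that the update uses the raw linear output rather than the thresholded one, which makes it a least-squares method.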

II.2. Optimization basics: gradient and stochastic gradient methods.
Optimization methods are at the core of the learning algorithms for neural networks. We will introduce the general ideas behind gradient methods and then focus on stochastic optimization via stochastic gradient methods. We will also present several heuristics currently used for training large deep architectures.
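The core recipe of stochastic gradient descent can be sketched in a few lines: sample one training example, compute the gradient of the loss on that example alone, and take a small step against it. Below is an illustrative one-dimensional linear regression fit; the data and hyperparameters are invented for the example.

```python
import random

# Stochastic gradient descent on a 1-D linear regression:
# minimize the squared error 0.5 * (w*x + b - y)^2 one sample at a time.

def sgd_linear_fit(data, lr=0.05, steps=2000, seed=0):
    """data: list of (x, y) pairs; returns (w, b) fitting y ~ w*x + b."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = rng.choice(data)   # sample one training example
        err = (w * x + b) - y     # residual on that example
        w -= lr * err * x         # gradient of the loss w.r.t. w
        b -= lr * err             # gradient of the loss w.r.t. b
    return w, b

# Noise-free data drawn from y = 2x + 1; SGD should recover w ~ 2, b ~ 1.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
w, b = sgd_linear_fit(data)
```

The heuristics mentioned above (momentum, adaptive step sizes, learning-rate schedules) all modify how the step `lr * err * x` is scaled or accumulated over time.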

III.2. Dealing with sequences: Recurrent Neural Networks
Recurrent Neural Networks are today a key technology in domains such as speech recognition, language processing, translation and, more generally, sequence processing. We introduce the main concepts behind this family of models through several of its variants.
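The defining idea of a recurrent network is that the same weights are applied at every time step, with a hidden state carrying information from earlier inputs to later ones. A scalar toy version of the forward pass, with hand-picked weights purely for illustration:

```python
import math

# Forward pass of a vanilla (Elman-style) recurrent unit, scalar toy case:
#   h_t = tanh(w_xh * x_t + w_hh * h_{t-1} + b_h)
# The same parameters (w_xh, w_hh, b_h) are reused at every time step.

def rnn_forward(xs, w_xh, w_hh, b_h):
    """Returns the sequence of hidden states for input sequence xs."""
    h, hs = 0.0, []
    for x in xs:
        h = math.tanh(w_xh * x + w_hh * h + b_h)
        hs.append(h)
    return hs

# The hidden state summarizes the sequence seen so far: even when the
# current input is 0, the state remains nonzero because of the recurrence.
states = rnn_forward([1.0, 0.0, 1.0], w_xh=1.0, w_hh=1.0, b_h=0.0)
```

Variants such as LSTM and GRU units replace this simple update with gated ones, precisely to control how long information survives in the hidden state.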

III.4. Unsupervised learning: generative models (follow-up to course 6).
III.5. Applications in the domains of vision, natural language processing, complex signal analysis.
We illustrate the algorithms introduced so far using a series of application examples in different domains.