
Lille Grand Palais

Please submit the questions you would like to raise in the panel discussion at this site.

08:30 → 10:00

Session 1

08:30

Invited Talk: Open Research Problems in AutoML (40m)

Speaker: Rich Caruana (Microsoft Research)

Slides

09:10

Invited Talk: Bandits and Bayesian optimization for AutoML (40m)

Complex optimization and decision-making tasks are beginning to play an
increasingly crucial role across a wide variety of scientific fields. This is
becoming more and more evident as entire research programs are automated.
In this talk I'll describe a set of methods, known as Bayesian optimization,
which provide a very sample-efficient approach to this problem. Much of the
gain of these methods comes from building a posterior model of the function
during optimization and using it to explore the surface efficiently. I will
further describe a number of advanced search mechanisms and models and show how they can be used to automate machine learning problems. Finally, I will also briefly point to the related bandit literature.
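The core loop the abstract describes — fit a posterior model of the objective, then query the point the model considers most promising — can be sketched in a few dozen lines. This is a minimal illustration, not the speaker's implementation: the squared-exponential kernel, the expected-improvement acquisition, the toy objective, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential covariance between 1-D input arrays."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at the query points."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)   # prior variance is 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI acquisition for minimisation: expected amount by which we beat `best`."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3 * x) + x ** 2     # toy objective to minimise
grid = np.linspace(-2.0, 2.0, 400)       # candidate query points
rng = np.random.default_rng(0)
x_obs = rng.uniform(-2.0, 2.0, 4)        # a few initial random evaluations
y_obs = f(x_obs)

for _ in range(20):
    # refit the posterior, then evaluate the objective at the max-EI point
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    ei = expected_improvement(mu, np.maximum(sigma, 1e-9), y_obs.min())
    x_next = grid[np.argmax(ei)]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))

best_x, best_y = x_obs[np.argmin(y_obs)], y_obs.min()
print(f"best x = {best_x:.3f}, f(best x) = {best_y:.3f}")
```

The sample efficiency comes from the posterior: points near previous observations have low predicted uncertainty, so EI steers evaluations toward regions that are either promising or unexplored, rather than sampling blindly.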

Most machine learning researchers focus on domain-specific learning algorithms. Can we also construct meta-learning algorithms that learn better learning algorithms, and better ways of learning better learning algorithms, and so on, restricted only by the fundamental limits of computability? As early as 1965, I. J. Good made informal remarks about an intelligence explosion through such recursive self-improvement (RSI).
I will discuss various concrete algorithms (not just vague ideas) for RSI:
1. My diploma thesis (1987) proposed an evolutionary system that learns to inspect and improve its own learning algorithm: Genetic Programming (GP) is recursively applied to itself to invent better learning methods, meta-learning methods, meta-meta-learning methods, and so on.
2. RSI based on the self-referential Success-Story Algorithm for self-modifying probabilistic programs (1997) was already able to solve complex tasks.
3. My self-referential deep recurrent neural networks (since 1993) run, inspect, and change their own weight-change algorithms. Back in 2001, my former student Hochreiter (now a professor) already had a practical implementation of such an RNN that meta-learns an excellent learning algorithm, at least for a limited domain.
4. The Goedel machine (2006) is the first RSI scheme that is mathematically optimal in a particular sense.
Will RSI finally take off in the near future?

How could an artificial intelligence do statistics? It would need an open-ended language of models, and a way to search through and compare those models. Even better would be a system that could explain the different types of structure found, even if that type of structure had never been seen before. This talk presents a prototype of such a system, which builds structured Gaussian process regression models by combining covariance kernels into a custom model for each dataset. The resulting models can be broken down into relatively simple components, and surprisingly, it's not hard to write code that automatically describes each component, even for novel combinations of kernels. The result is a procedure that takes in a dataset, and outputs a report with plots and English descriptions of the different types of structure found in that dataset.
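The key mechanism — composing base covariance kernels into candidate model structures and comparing them on a dataset — can be illustrated with scikit-learn's kernel algebra. This is a hand-rolled sketch, not the prototype system from the talk: the two candidate structures, the synthetic data, and the use of scikit-learn are all illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (
    RBF, ExpSineSquared, DotProduct, WhiteKernel,
)

rng = np.random.default_rng(1)
X = np.linspace(0.0, 10.0, 80).reshape(-1, 1)
# synthetic data: linear trend + periodic component + observation noise
y = 0.5 * X.ravel() + np.sin(2 * np.pi * X.ravel()) + 0.1 * rng.standard_normal(80)

# two candidate model structures, built by summing base kernels;
# each summand corresponds to one nameable component of the model
candidates = {
    "smooth": RBF() + WhiteKernel(),
    "linear + periodic": DotProduct() + ExpSineSquared() + WhiteKernel(),
}
scores = {}
for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    scores[name] = gp.log_marginal_likelihood_value_
    print(f"{name:>18}: log marginal likelihood = {scores[name]:.1f}")
```

Because each kernel in the sum corresponds to an interpretable component (a linear trend, a periodic pattern, noise), a search over such compositions yields models that can be decomposed and described in plain English, which is the idea the talk builds on.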

OpenML is an online machine learning platform where scientists can automatically log and share data sets, code, and experiments, organize them online, and collaborate with researchers all over the world. It helps to automate many tedious aspects of research, is readily integrated into several machine learning tools, and offers easy-to-use APIs. It also enables large-scale and real-time collaboration, allowing researchers to build directly on each other's latest results, and track the wider impact of their work. Ultimately, this provides a wealth of information for building systems that learn from previous experiments, to either assist people while analyzing data, or automate the process altogether.