
BayesOpt 2015
Bayesian Optimization: Scalability and Flexibility

Bayesian optimization has emerged as an exciting subfield of machine learning
that is concerned with the global optimization of noisy, black-box functions
using probabilistic methods. Systems implementing Bayesian optimization
techniques have been successfully used to solve difficult problems in a diverse
set of applications. There have been many recent advances in the methodologies
and theory underpinning Bayesian optimization that have extended the framework
to new applications as well as provided greater insights into the behaviour of
these algorithms. Bayesian optimization is now increasingly being used in
industrial settings, providing new and interesting challenges that require new
algorithms and theoretical insights.
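The loop underlying such systems can be sketched in a few lines. The following is a minimal illustration, assuming a Gaussian-process surrogate with an RBF kernel and the expected-improvement acquisition function; the objective, kernel length-scale, and all names here are illustrative choices, not any particular system's API:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Noisy black-box function (illustrative); true minimum near x = 0.5."""
    return (x - 0.5) ** 2 + 0.01 * rng.normal()

def rbf(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def posterior(X, y, Xs, noise=1e-4):
    """Exact GP posterior mean and variance at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)  # k(x, x) = 1
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """Expected improvement over the incumbent, for minimization."""
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2)))
    return (best - mu) * cdf + sigma * pdf

X = list(rng.uniform(0.0, 1.0, 3))      # small initial design
y = [objective(x) for x in X]
grid = np.linspace(0.0, 1.0, 201)       # candidate points in [0, 1]

for _ in range(10):                     # strictly sequential loop
    mu, var = posterior(np.array(X), np.array(y), grid)
    x_next = float(grid[np.argmax(expected_improvement(mu, var, min(y)))])
    X.append(x_next)
    y.append(objective(x_next))         # one evaluation per iteration

best_x = X[int(np.argmin(y))]           # incumbent after the budget
```

The cubic cost of the exact GP solve in `posterior` is one source of the scaling limits discussed below.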

At last year’s NIPS workshop on Bayesian optimization, the focus was on the intersection of “academia and industry”. Following up on this theme, this year’s workshop will focus on scaling existing approaches to larger
evaluation budgets, higher-dimensional search spaces, and more complex input
spaces. While the computational complexity of the probabilistic regression models commonly used in Bayesian optimization has confined it to relatively low-dimensional problems and small evaluation budgets, recent years have seen several advances in scaling these probabilistic models to more demanding application domains. Furthermore, many applications of Bayesian optimization only make sense when evaluations are carried out concurrently, a departure from the traditional, strictly sequential Bayesian optimization framework. Recent theoretical and practical efforts have addressed this mini-batch, or parallel, evaluation setting.
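One simple way to depart from the strictly sequential loop is to select a whole batch of points before any result returns, for example with the “constant liar” heuristic, and then evaluate the batch concurrently. The sketch below makes strong simplifying assumptions: the model-based acquisition is replaced by a crude distance-based stand-in, and the objective, batch size, and all names are purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):
    """Stand-in for a slow black-box function; minimum at x = 0.3."""
    return (x - 0.3) ** 2

def acquisition(x, X, y):
    # Crude stand-in for a model-based acquisition: reward distance from
    # previous evaluations (exploration), penalize distance from the
    # incumbent minimizer (exploitation).
    dist = min(abs(x - xi) for xi in X)
    incumbent = X[y.index(min(y))]
    return dist - 0.5 * abs(x - incumbent)

def select_batch(X, y, candidates, q):
    """Constant liar: pretend each chosen point returned the incumbent value."""
    Xf, yf = list(X), list(y)
    batch = []
    lie = min(y)
    for _ in range(q):
        x_next = max(candidates, key=lambda x: acquisition(x, Xf, yf))
        batch.append(x_next)
        Xf.append(x_next)   # fantasize the outcome so the next pick differs
        yf.append(lie)
    return batch

X = [0.0, 1.0]
y = [objective(x) for x in X]
candidates = [i / 100 for i in range(101)]

for _ in range(3):                          # three rounds of batches of q = 4
    batch = select_batch(X, y, candidates, q=4)
    with ThreadPoolExecutor() as pool:      # concurrent evaluations
        results = list(pool.map(objective, batch))
    X.extend(batch)
    y.extend(results)

best_x = X[y.index(min(y))]
```

The fantasized observations keep the batch diverse; in practice the lie would be replaced by the surrogate's own predictions, and the thread pool by whatever distributed evaluation infrastructure is available.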

The goal of this workshop is to bring together advances in scalable and flexible probabilistic modelling and in batch exploration strategies, in order to establish the state of the art in Bayesian optimization capabilities. Specifically, we will invite
participants of the workshop to share their experiences and findings in applying
Bayesian optimization at new scales and in new application domains. In addition,
we wish to attract researchers from the broader scientific community in order to
demonstrate the flexibility of Bayesian optimization and invite them to consider
including it in their own experimental methodology. The key questions we will discuss are: How can Bayesian optimization be scaled successfully to large evaluation budgets? How can high-dimensional or complex search spaces be tackled? How can Bayesian optimization be applied in massive, distributed settings?

The target audience for this workshop consists of both industrial and academic
practitioners of Bayesian optimization as well as researchers working on
theoretical advances in probabilistic global optimization. To this end, we have invited many industrial users of Bayesian optimization to attend and speak at the workshop. We expect this exchange of industrial and academic knowledge to lead to a clearer understanding of the challenges and successes of Bayesian optimization as a whole.

A further goal is to encourage collaboration between the diverse set of
researchers involved in Bayesian optimization. This includes not only exchange between industrial and academic researchers, but also between the many different sub-fields of machine learning that make use of Bayesian optimization or its components. We are also reaching out to the wider global
optimization and Bayesian inference communities for involvement.