A General Approach to Multi-Armed Bandits Under Risk Criteria

Abstract

Different risk-related criteria have received recent interest in learning problems, where typically each case is treated in a customized manner. In this paper we provide a more systematic approach to analyzing such risk criteria within a stochastic multi-armed bandit (MAB) formulation. We identify a set of general conditions that yield a simple characterization of the oracle rule (which serves as the regret benchmark), and facilitate the design of upper confidence bound (UCB) learning policies. The conditions are derived from problem primitives, primarily focusing on the relation between the arm reward distributions and the (risk criteria) performance metric. Among other things, the work highlights some (possibly non-intuitive) subtleties that differentiate various criteria in conjunction with statistical properties of the arms. Our main findings are illustrated on several widely used objectives such as conditional value-at-risk, mean-variance, Sharpe-ratio, and more.
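To make the setting concrete, here is a minimal illustrative sketch (not the paper's algorithm) of the general recipe the abstract describes: take a UCB-style index policy and plug an empirical risk criterion, here conditional value-at-risk (CVaR), into the index in place of the sample mean. The function names, the confidence-radius form, and the arm interface are all assumptions made for illustration.

```python
import numpy as np

def empirical_cvar(samples, alpha=0.05):
    """Empirical CVaR at level alpha: the mean of the worst
    alpha-fraction of observed rewards (higher is better, since
    we treat samples as rewards rather than losses)."""
    sorted_s = np.sort(samples)
    k = max(1, int(np.ceil(alpha * len(sorted_s))))
    return sorted_s[:k].mean()

def risk_ucb(arms, horizon, alpha=0.05, rng=None):
    """UCB-style policy with a plug-in risk criterion.

    arms: list of callables, each taking an np.random.Generator
          and returning one reward sample.
    Returns the number of pulls of each arm after `horizon` rounds.
    The sqrt(2 log t / n) radius is the standard mean-based choice,
    used here purely for illustration.
    """
    rng = np.random.default_rng(rng)
    K = len(arms)
    rewards = [[] for _ in range(K)]
    # Initialize by pulling each arm once.
    for a in range(K):
        rewards[a].append(arms[a](rng))
    for t in range(K, horizon):
        # Optimistic index: empirical CVaR plus a confidence radius.
        idx = [
            empirical_cvar(np.array(rewards[a]), alpha)
            + np.sqrt(2.0 * np.log(t + 1) / len(rewards[a]))
            for a in range(K)
        ]
        a = int(np.argmax(idx))
        rewards[a].append(arms[a](rng))
    return [len(r) for r in rewards]
```

Swapping `empirical_cvar` for another criterion (mean-variance, Sharpe ratio) changes only the index computation, which is the modularity the paper's general conditions are meant to justify; the paper's actual policies and confidence radii depend on those conditions and differ across criteria.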

Related Material

@InProceedings{pmlr-v75-cassel18a,
title = {A General Approach to Multi-Armed Bandits Under Risk Criteria},
author = {Cassel, Asaf and Mannor, Shie and Zeevi, Assaf},
booktitle = {Proceedings of the 31st Conference On Learning Theory},
pages = {1295--1306},
year = {2018},
editor = {Bubeck, S\'ebastien and Perchet, Vianney and Rigollet, Philippe},
volume = {75},
series = {Proceedings of Machine Learning Research},
month = {06--09 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v75/cassel18a/cassel18a.pdf},
url = {http://proceedings.mlr.press/v75/cassel18a.html},
abstract = {Different risk-related criteria have received recent interest in learning problems, where typically each case is treated in a customized manner. In this paper we provide a more systematic approach to analyzing such risk criteria within a stochastic multi-armed bandit (MAB) formulation. We identify a set of general conditions that yield a simple characterization of the oracle rule (which serves as the regret benchmark), and facilitate the design of upper confidence bound (UCB) learning policies. The conditions are derived from problem primitives, primarily focusing on the relation between the arm reward distributions and the (risk criteria) performance metric. Among other things, the work highlights some (possibly non-intuitive) subtleties that differentiate various criteria in conjunction with statistical properties of the arms. Our main findings are illustrated on several widely used objectives such as conditional value-at-risk, mean-variance, Sharpe-ratio, and more.}
}