Description

In a standard Markov decision process (MDP), rewards are assumed to be precisely known and quantitative in nature. This assumption can be too strong in some situations. Even when rewards can genuinely be modeled numerically, specifying the reward function is often difficult, as it is a cognitively demanding and/or time-consuming task. Moreover, rewards are sometimes qualitative in nature, for instance when they represent qualitative risk levels. In such cases, using standard MDPs directly is problematic, and we propose instead to resort to MDPs with ordinal rewards, where only a total order over the rewards is assumed to be known. In this setting, we explain how reference points can be exploited to define expressive and interpretable preferences as an alternative to numeric rewards.
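
As a rough illustration of this setting, the Python sketch below uses a made-up toy MDP (all transition probabilities and reward levels are hypothetical). A policy is evaluated not by a scalar value but by the vector of discounted frequencies of each ordinal reward level, and two policies are then compared with respect to a reference level using one simple illustrative rule: prefer the policy that accumulates more discounted mass at levels at least as good as the reference. The actual preference model presented in the lecture may differ.

```python
import numpy as np

# Hypothetical toy MDP: 2 states, 2 actions, ordinal reward levels
# 0 < 1 < 2 (e.g. "bad" < "acceptable" < "good"); only the order matters.
n_states, n_actions, n_levels = 2, 2, 3
gamma = 0.9

# P[s, a, s'] : transition probabilities (made-up numbers for illustration).
P = np.array([
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.5, 0.5], [0.1, 0.9]],
])
# level[s, a] : ordinal reward level received when taking action a in state s.
level = np.array([[0, 1], [1, 2]])

def ordinal_value(policy, s0=0, eps=1e-8):
    """Vector-valued policy evaluation: component k of the result is the
    expected discounted frequency of reward level k under the given
    deterministic policy (array mapping state -> action), from state s0."""
    v = np.zeros((n_states, n_levels))
    while True:
        new_v = np.zeros_like(v)
        for s in range(n_states):
            a = policy[s]
            new_v[s, level[s, a]] += 1.0      # count the immediate reward level
            new_v[s] += gamma * P[s, a] @ v   # plus discounted future counts
        if np.abs(new_v - v).max() < eps:
            return new_v[s0]
        v = new_v

def prefers(u, v, ref):
    """One illustrative reference-point rule: u is preferred to v if it
    accumulates more discounted mass at levels >= the reference level."""
    return u[ref:].sum() > v[ref:].sum()

pi_a = np.array([0, 1])   # two candidate deterministic policies
pi_b = np.array([1, 0])
u, v = ordinal_value(pi_a), ordinal_value(pi_b)
print("level frequencies:", u, v)
print("pi_a preferred at reference level 2?", prefers(u, v, ref=2))
```

Note that the vector-valued evaluation requires no arithmetic on the rewards themselves, only counting how often each level occurs; the numeric comparison enters solely through the chosen preference rule over these vectors.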