Abstract

Explainable Artificial Intelligence (XAI), i.e., the development of more transparent and interpretable AI models, has gained increased traction over the last few years. This is because, in conjunction with their growth into powerful and ubiquitous tools, AI models exhibit one detrimental characteristic: a performance-transparency trade-off. That is, the more complex a model's inner workings, the less clear it is how its predictions or decisions were reached. But, especially for Machine Learning (ML) methods like Reinforcement Learning (RL), where the system learns autonomously, the necessity to understand the underlying reasoning for its decisions becomes apparent. Since, to the best of our knowledge, there exists no single work offering an overview of Explainable Reinforcement Learning (XRL) methods, this survey attempts to address this gap. We give a short summary of the problem, a definition of important terms, and offer a classification and assessment of current XRL methods. We found that a) the majority of XRL methods function by mimicking and simplifying a complex model rather than designing an inherently simple one, and b) XRL (and XAI) methods often neglect to consider the human side of the equation, not taking into account research from related fields like psychology or philosophy. Thus, an interdisciplinary effort is needed to adapt the generated explanations to a (non-expert) human user in order to effectively progress in the field of XRL, and of XAI in general.