Workshop on Human-In-the-Loop Data Analytics (HILDA 2017)

Paolo Tamagnini, Josua Krause, Aritra Dasgupta, Enrico Bertini

Abstract

To realize the full potential of machine learning in diverse real-world domains,
it is necessary for model predictions to be readily interpretable and actionable
for the human in the loop. Analysts, who are the users but not the developers of
machine learning models, often do not trust a model because of the lack of
transparency in associating predictions with the underlying data space. To address
this problem, we propose Rivelo, a visual analytics interface that
enables analysts to understand the causes behind predictions of binary classifiers
by interactively exploring a set of instance-level explanations. These explanations
are model-agnostic, treating the model as a black box, and help analysts
interactively probe the high-dimensional binary data space to detect features
relevant to predictions. We demonstrate the utility of the interface with a case
study analyzing a random forest model that predicts the sentiment of Yelp reviews about doctors.