Local explanations of machine learning models describe how each feature contributed to a single prediction.
This package implements an explanation method based on LIME
(Local Interpretable Model-agnostic Explanations;
see Ribeiro, Singh, and Guestrin (2016) <doi:10.1145/2939672.2939778>), in which interpretable
inputs are created based on the local rather than global behaviour of each original feature.
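The core LIME idea described above can be sketched independently of this package: sample perturbations around the instance to be explained, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. The sketch below is a minimal illustration in Python, assuming a toy black-box function and a Gaussian proximity kernel; it is not the package's implementation.

```python
import numpy as np

# Hypothetical black-box model (assumption, for illustration only):
# nonlinear in the first feature, linear in the second.
def black_box(X):
    return np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]

rng = np.random.default_rng(0)
x0 = np.array([0.2, -0.4])                    # instance to explain

# 1. Sample perturbations in a local neighbourhood of x0.
Z = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(Z)

# 2. Weight samples by proximity to x0 (Gaussian kernel).
d2 = ((Z - x0) ** 2).sum(axis=1)
w = np.exp(-d2 / (2 * 0.1 ** 2))

# 3. Fit a weighted linear surrogate via weighted least squares;
#    its slope coefficients are the local feature contributions.
sw = np.sqrt(w)[:, None]
A = np.hstack([np.ones((len(Z), 1)), Z - x0])
coef, *_ = np.linalg.lstsq(A * sw, y[:, None] * sw, rcond=None)
print(coef[1:, 0])    # local contribution of each feature
```

Near x0 the surrogate's slope for the second feature recovers its true linear effect (0.5), while the slope for the first approximates the local derivative of the nonlinear term.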