Description

After the model-building process, we are often left with a black box that can be used for prediction.

The usefulness of a model remains questionable unless one understands the true behavior of the underlying algorithm. As machine learning models are increasingly adopted to solve real-world problems, one needs to look beyond just wins and losses and seek more detailed information about a model's behavior.

Abstract:

The adoption of machine learning and statistical models for solving real-world problems has increased exponentially, but users still struggle to realize the full potential of their predictive models. There is still a perceived dichotomy between explainability and model performance when choosing an algorithm. Linear models and simple decision trees are often preferred over more complex models, such as ensembles or deep learning models, when operationalizing models for ease of interpretation, which often results in a loss of accuracy. But is it necessary to accept a trade-off between model complexity and interpretability?

Being able to faithfully interpret a model locally, using LIME (Local Interpretable Model-agnostic Explanations), and globally, using partial dependence plots (PDPs) and relative feature importance, helps in understanding feature contributions to predictions and model variability in a non-stationary environment. This builds trust in the algorithm, which drives better collaboration and communication among peers. The need to understand the variability in the predictive power of a model in a human-interpretable way becomes even more important for complex tasks, e.g. text classification, image recognition, and machine translation.
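To make the local-interpretation idea concrete, the following is a minimal sketch of LIME's core recipe, not Skater's actual API: perturb an instance, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature contributions. The model, data, and parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A black-box model trained on synthetic data (illustrative only):
# the target depends strongly on feature 0 and mildly on feature 1.
X = rng.normal(size=(500, 4))
y = 3.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def lime_style_explanation(instance, predict_fn, n_samples=1000, width=0.75):
    """Fit a locally weighted linear surrogate around `instance`."""
    # 1. Perturb the instance with Gaussian noise.
    samples = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    # 2. Weight each perturbation by proximity to the instance (RBF kernel).
    distances = np.linalg.norm(samples - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * width ** 2))
    # 3. Fit an interpretable surrogate to the black-box predictions.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, predict_fn(samples), sample_weight=weights)
    return surrogate.coef_  # local feature contributions

coefs = lime_style_explanation(X[0], black_box.predict)
print(coefs)  # coefs[0] should dominate, reflecting the generating process
```

Libraries such as LIME and Skater add refinements on top of this recipe (discretization, feature selection, fidelity checks), but the coefficients of the weighted surrogate are the essence of a local explanation.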

In this talk, we demonstrate the usefulness of our model-agnostic framework, Skater, for interpreting models, and show how it can help practitioners (analysts, data scientists, statisticians) understand model behavior better without compromising on the choice of algorithm.