Unpacking the Machine Learning Black Box in Healthcare Applications

Machine learning models are being rapidly adopted across industry applications because of their outstanding performance compared to more traditional approaches like linear regression. However, machine learning models are also known as ‘black boxes’ because they provide little explanation for their predictions.

The black-box nature of many machine learning models frequently casts doubt on their usability in practice, especially in the healthcare field: “The doctor just won’t accept that,” as noted by Zachary Lipton [1]. When health or life is at stake, it’s hard to trust a black-box algorithm that does not provide sufficient explanations for its decisions.

Because of the call for interpretations, there has been a surge of interest in interpretable machine learning, a field which attempts to unpack the machine learning black box. The goal of such efforts is not to understand the detailed inner workings of the models, the feasibility of which is under debate, but to understand them sufficiently for downstream tasks [2].

Interpretable Machine Learning

The benefits of interpretable machine learning are multifold [3]. For developers and researchers, understanding the model’s internal mechanisms can help them improve the model in cases of unexpected failure, help ward against the potential for discrimination in the algorithms, and increase system safety.

It was reported that Amazon scrapped a machine learning recruiting tool that showed bias against women [4]. However, if we knew how the model processed gender information, this type of bias could have been prevented beforehand. For end-users like doctors, interpretable outputs can reduce the discomfort with machine learning and establish trust. Additionally, the explanations themselves present an opportunity for generating actionable recommendations.

The Techniques

There are a number of different techniques for interpreting models. When appropriate, we can deliberately build interpretable models by choosing simple algorithms such as linear regression or enforcing interpretability constraints in the modeling process [5], but a pitfall of such approaches is that the interpretability constraints might compromise model performance for many types of problems.
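As a minimal sketch of why simple models are considered interpretable, the example below fits an ordinary least squares model on toy data (the features and coefficients are illustrative, not from any real clinical model) and reads each coefficient directly as a feature's effect on the outcome:

```python
import numpy as np

# Toy data: two hypothetical standardized features and an outcome that
# depends linearly on them. Nothing here reflects real patient data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Each coefficient is directly interpretable: the expected change in the
# outcome per unit change in that feature, holding the others fixed.
for name, c in zip(["intercept", "feature_1", "feature_2"], coef):
    print(f"{name}: {c:+.2f}")
```

The trade-off mentioned above shows up immediately: this transparency is only available because the model family is restricted to linear relationships.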

To achieve better performance, we can utilize more complex approaches (e.g., boosting or deep learning) and interpret the models in a post-hoc fashion. For example, using local interpretable model-agnostic explanations (LIME), we can mimic the local behaviors of complex models with linear models that are easy to interpret [6]. Another tool, SHAP (SHapley Additive exPlanations), determines the contribution of each feature to predictions based on game theory [7].
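The LIME idea can be sketched in a few lines: perturb the instance of interest, weight the perturbed samples by their proximity to it, and fit a weighted linear surrogate to the black-box outputs. The snippet below is a simplified illustration of that recipe, not the actual LIME library; the black-box function is a stand-in for a complex model:

```python
import numpy as np

def black_box(X):
    # Stand-in for a complex, nonlinear model we want to explain locally.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_explain(x, predict, n_samples=500, width=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))
    # 2. Weight samples by proximity to the instance (RBF kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    # 3. Fit a weighted linear surrogate to the black-box predictions.
    A = np.column_stack([np.ones(n_samples), Z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], predict(Z) * sw, rcond=None)
    return coef[1:]  # local slopes = per-feature contributions

x = np.array([0.0, 1.0])
contributions = lime_explain(x, black_box)
```

The surrogate's slopes approximate the black-box model's local behavior around `x`, which is exactly the kind of per-patient explanation discussed below.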

These techniques have been actively developed and maintained in various software packages. For example, in Python, there are tools such as skater [8], SHAP [9], and lime [10]. In R, there are packages such as lime [11], DALEX [12], and iml [13].

Applying Machine Learning to Our Solutions

At axialHealthcare, we are continuously developing machine learning models to assist with clinical decision making. For example, one of our models can predict a patient’s risk for an opioid use disorder diagnosis using numerous descriptors from his or her history. Another model focuses on predicting opioid overdose incidents.

For the predictive opioid overdose model, we utilized LIME via the iml R package to generate local, individual explanations for some patients’ predictions, which is an important step of our model evaluation. For example, when the model correctly predicted a patient to overdose on opioids in the next month, the LIME method listed recent drug abuse incidents, the number of unique diagnoses, and the number of drug tests as the most prominent contributors for this patient’s prediction.

This kind of explanation not only helps providers trust the results, but can also be used as a basis for individualized treatment recommendations. In this case, the provider could have intervened by referring the patient to drug abuse treatment, as abuse was listed as one of the prediction explanations.


By David Simon, Ph.D., Research Data Scientist