UROP Openings

Interpretable Machine Learning for Human-AI Collaborative Systems

Term:

Summer

Department:

MAS: Media Arts and Sciences

Faculty Supervisor:

Deb Roy

Faculty email:

dkroy@media.mit.edu

Apply by:

6/15/2020

Contact:

Eric Chu: echu@mit.edu

Project Description

Help study and design human-AI systems. We believe AI models and the explanations produced by interpretability methods should be evaluated in the context of how humans actually use them. This project aims to better understand and optimize “machine-in-the-loop” settings, where a human is assisted by a machine learning model, through randomized controlled trials (RCTs) of various scenarios.
We are wrapping up our first RCT and plan to explore the following in a second RCT:
- The effect of different types of explanations (saliency maps, prototypes, natural language explanations, counterfactual explanations, etc.)
- Different machine-in-the-loop settings, such as collaboration (to achieve a task), pedagogy (the model teaches the human), and certification (to identify faulty or biased models)
- The design of the human-AI system, separate from model predictions and explanations (e.g., how and when to show predictions and explanations)
- Understanding *human* explanations
- Factors that affect human trust and agency
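To give a flavor of the interpretability methods above, a simple perturbation-based saliency score can be sketched as follows. This is a toy illustration, not code from the project: the model, weights, and inputs are all hypothetical, and real saliency methods typically use gradients of a trained deep network.

```python
# Hypothetical sketch: perturbation-based saliency for a toy linear model.
def model(x):
    # Toy scoring function standing in for a trained classifier.
    weights = [0.5, -1.2, 2.0, 0.1]
    return sum(w * xi for w, xi in zip(weights, x))

def saliency(x, eps=1e-4):
    """Finite-difference sensitivity of the model output to each input feature."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        scores.append(abs(model(bumped) - base) / eps)
    return scores

# Feature 2 has the largest weight, so it gets the highest saliency score.
print(saliency([1.0, 0.3, -0.5, 2.0]))
```

For a linear model the finite-difference score recovers the absolute weights exactly; for deep models, methods like gradient saliency or integrated gradients play the same role.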
Your work may include:
- Training and using state-of-the-art deep learning models and implementing various interpretability methods
- Designing the randomized controlled trial to be run online on a platform such as Amazon Mechanical Turk
- Conducting statistical modeling and data analysis
- Building and modifying the current web app (Django, JavaScript/React)
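As a flavor of the statistical analysis involved, comparing an outcome between two RCT arms can be sketched as below. Everything here is illustrative: the arm names ("control" vs. a "saliency map" treatment) and the per-participant accuracy values are made up, and a real analysis would use a proper statistics package with p-values and preregistered tests.

```python
# Hypothetical sketch: comparing task accuracy between two RCT arms.
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / se

# Illustrative per-participant accuracies (not real data).
control = [0.62, 0.58, 0.71, 0.64, 0.60, 0.66]    # no explanation shown
treatment = [0.70, 0.74, 0.68, 0.77, 0.72, 0.69]  # saliency map shown

effect = statistics.mean(treatment) - statistics.mean(control)
t = welch_t(treatment, control)
print(f"effect size: {effect:.3f}, Welch t: {t:.2f}")
```

In practice this kind of comparison would be one small piece of the analysis, alongside regression models that adjust for participant and item effects.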

Prerequisites

The ideal UROP student will have some combination of: 1) experience with the skills above (deep learning, experiment design, data analysis, web development); 2) an interest in AI ethics, safety, and transparency; and 3) experience or interest in human-computer interaction research.