Explainable AI systems aim to make their decisions easily understood by humans. A laudable goal, but what makes a good explanation?

Testing the best: There's only one way to figure that out: ask some users. So that's what researchers from Harvard and Google Brain did in a series of studies. Test subjects looked at different combinations of inputs, outputs, and explanations for a machine-learning algorithm designed to learn the dietary habits or medical conditions of an alien (yes, seriously: alien life was chosen to keep the test subjects' own biases from creeping in). Users then scored the different combinations.

Keep it short: Longer explanations were found to be harder to parse than shorter ones, though breaking the same amount of text into many short lines somehow worked better than making people read a few longer lines. As you can tell, the tests examined some pretty basic elements of how to deliver information, but at least it's a start.
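If you wanted to apply that line-length finding when rendering a model's explanations, a minimal sketch might look like the following. The explanation text and the 40-character width are illustrative assumptions, not details from the study:

```python
import textwrap

def format_explanation(text: str, width: int = 40) -> str:
    """Break an explanation into many short lines, reflecting the
    study's finding that short lines read more easily than a few
    long ones. The width value here is an assumed placeholder."""
    return "\n".join(textwrap.wrap(text, width=width))

# Hypothetical explanation string, in the spirit of the alien task.
explanation = (
    "The model predicts this alien is herbivorous because its "
    "observed diet consists mostly of plants and it lacks features "
    "associated with predation."
)
print(format_explanation(explanation))
```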

Author

Jackie Snow: I am MIT Technology Review's associate editor for artificial intelligence. I cover stories about where AI is currently, where it's headed, and what's wrong with the hype around the technology. I also put together The Algorithm, our daily newsletter on the latest in artificial intelligence. Previously I worked for Fast Company and have been published by the New York Times, National Geographic, Wall Street Journal, and others.

Image: Tom Waterhouse | Flickr

