Artificial intelligence isn’t known for its bedside manner, but that could be changing.

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify patterns in 3D eye scans drawn from dozens of common eye diseases. The result is an AI that can identify more than 50 diseases with remarkable accuracy and then recommend how patients should be referred to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step both in medicine and in making AIs slightly more human.

AIs typically operate as black boxes, absorbing data and spitting out answers without spelling out the reasoning behind a given outcome. That’s all well and good when an AI helps you pick a movie or write a Yelp review, but when it comes to diagnosing medical conditions, patients (and doctors) would prefer a little more context.

“Doctors and patients don’t want just a black box answer, they want to know why,” Ramesh Raskar, an associate professor at MIT, told Stat. “There is a standard of care, and if the AI technique doesn’t follow that standard of care, people are going to be uncomfortable with it.”