AI systems are currently being developed and deployed for a variety of medical purposes. A common objection to this trend is that medical AI systems risk being ‘black boxes’, unable to explain their decisions. How serious this objection is remains unclear. As some commentators point out, human doctors too are often unable to properly explain their decisions. In this paper, we seek to clarify this debate. We (i) analyse the reasons why explainability is important for medical AI, (ii) outline some of the features that make for good explanations in this context, and (iii) compare how well humans and AI systems are able to satisfy these criteria. We conclude that while humans currently have the edge, recent developments in technical AI research may allow us to construct medical AI systems that are better explainers than humans.