Due to impressive results, (deep) machine learning approaches have become of interest for many application domains. However, for many real-world applications purely data-driven black-box approaches are not suitable. Sensitive domains such as medical decision making or autonomous driving require comprehensibility and transparency of machine-learned models for legal as well as ethical reasons. Furthermore, in many domains available data are sparse, class distributions are imbalanced, and ground-truth labeling is either costly or not possible. In consequence, explainability has become a new research focus in machine learning, addressing not only the design of explanation interfaces but also the integration of knowledge and learning to overcome the data engineering bottleneck. In my talk, I present inductive logic programming (ILP) as a highly expressive approach to interpretable machine learning in which models are represented in symbolic form. ILP can make use of background knowledge and allows reasoning and learning to be combined in a natural way. I will present extensions of the ILP system Aleph for explanation generation and interactive learning, and I will show how ILP can be combined with black-box classifiers such as convolutional neural networks. As application domains, facial expression classification, image-based medical diagnosis, and identification of irrelevant digital objects are presented.