Many learning algorithms struggle with large data sets and, for purely computational reasons, miss information present in the data. A larger and mostly hidden problem is that many algorithms unintentionally, and unnoticed, learn triggering patterns that the data do not support. Using such classifiers, we may jump to conclusions that our existing data sets cannot justify. Both missing important triggers and relying on unsupported ones may cause serious practical and legal problems, raising ethical as well as legal questions. In this talk we demonstrate these issues with a small example. We propose the notion of a "justifiable" classifier and, on the positive side, present results on the existence of learning algorithms that always produce a justifiable classifier.