Abstract

Training accurate classifiers requires many labels, but each label provides only limited information (one bit for binary classification). In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. A semantic parser converts these explanations into programmatic labeling functions that generate noisy labels for an arbitrary amount of unlabeled data, which is used to train a classifier. On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores 5–100× faster by providing explanations instead of just labels. Furthermore, given the inherent imperfection of labeling functions, we find that a simple rule-based semantic parser suffices.
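To make the pipeline concrete, the following is a minimal, hypothetical sketch of the kind of programmatic labeling function a semantic parser might produce from an explanation such as "label true because the word 'husband' appears between the two person mentions." The candidate representation, function name, and label conventions (1 = positive, 0 = abstain) are illustrative assumptions, not the paper's actual interface.

```python
def lf_husband_between(candidate):
    """Hypothetical labeling function: return 1 (positive) if the word
    'husband' appears between the two person mentions, else 0 (abstain).

    `candidate` is an assumed dict with the tokenized sentence and the
    token indices bounding the two entity mentions.
    """
    between = candidate["tokens"][candidate["span1_end"]:candidate["span2_start"]]
    return 1 if "husband" in between else 0


# Illustrative candidate: "Anna and her husband Tom went home",
# with mentions "Anna" (ends at index 1) and "Tom" (starts at index 4).
candidate = {
    "tokens": ["Anna", "and", "her", "husband", "Tom", "went", "home"],
    "span1_end": 1,
    "span2_start": 4,
}
```

Applied to large amounts of unlabeled data, many such noisy functions vote on each candidate, and the resulting weakly supervised labels train the final classifier.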