Computer Science > Computation and Language

Title: SenseBERT: Driving Some Sense into BERT

Abstract: Self-supervision techniques have allowed neural language models to advance
the frontier in Natural Language Understanding. However, existing
self-supervision techniques operate at the word-form level, which serves as a
surrogate for the underlying semantic content. This paper proposes a method to
employ self-supervision directly at the word-sense level. Our model, named
SenseBERT, is pre-trained to predict not only the masked words but also their
WordNet supersenses. Accordingly, we attain a lexical-semantic level language
model, without the use of human annotation. SenseBERT achieves significantly
improved lexical understanding, as we demonstrate by experimenting on SemEval,
and by attaining a state-of-the-art result on the Word in Context (WiC) task.
Our approach is extendable to other linguistic signals, which can be similarly
integrated into the pre-training process, leading to increasingly semantically
informed language models.
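
To make the pre-training objective described above concrete, here is a minimal sketch (not the authors' code) of a joint masked-word and supersense prediction head: for each masked position, the model predicts both the word and its WordNet supersense, and the two cross-entropy losses are summed. Names such as `JointMaskedLMHead`, `hidden_dim`, `vocab_size`, and `num_supersenses` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a joint masked-word + supersense objective,
# assuming hidden states from any transformer encoder (e.g., BERT).
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointMaskedLMHead(nn.Module):
    def __init__(self, hidden_dim: int, vocab_size: int, num_supersenses: int):
        super().__init__()
        self.word_head = nn.Linear(hidden_dim, vocab_size)        # masked-word prediction
        self.sense_head = nn.Linear(hidden_dim, num_supersenses)  # WordNet supersense prediction

    def forward(self, hidden_states, word_labels, sense_labels, mask):
        # hidden_states: (batch, seq_len, hidden_dim) from the encoder
        # mask: boolean tensor marking the masked positions
        h = hidden_states[mask]                                    # (num_masked, hidden_dim)
        word_loss = F.cross_entropy(self.word_head(h), word_labels[mask])
        sense_loss = F.cross_entropy(self.sense_head(h), sense_labels[mask])
        return word_loss + sense_loss                              # joint self-supervised loss
```

Because supersense labels can be derived automatically from WordNet rather than from human annotators, such a head keeps pre-training fully self-supervised while injecting a lexical-semantic signal.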