Efficient Attention using a Fixed-Size Memory Representation

Abstract

The standard content-based attention mechanism typically used in
sequence-to-sequence models is computationally expensive because it requires
comparing large encoder and decoder states at each time step. In this work, we
propose a more efficient alternative attention mechanism based on a fixed-size
memory representation. Our technique predicts a compact set of K attention
contexts during encoding and lets the decoder compute an efficient lookup that
does not need to consult the full encoder memory. We show that our approach
performs on par with the standard attention mechanism while yielding inference
speedups of 20% for real-world translation tasks, and more for tasks with
longer sequences. By visualizing attention scores we demonstrate that our
models learn distinct, meaningful alignments.
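
The core idea can be sketched roughly as follows. This is a minimal PyTorch
sketch, not the paper's exact parameterization: the module name, the linear
scoring layers, and K=32 are illustrative assumptions. The key property it
demonstrates is that the encoder pools its states into K context vectors once,
so each decoder step scores only K vectors instead of all source positions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedMemoryAttention(nn.Module):
    """Attention over K precomputed contexts (illustrative sketch).

    During encoding, K scoring heads produce K softmax distributions over
    the source positions; each distribution pools the encoder states into
    one context vector. At decode time, scores over the K contexts are
    computed from the decoder state alone, so the per-step cost is O(K)
    rather than O(source_length).
    """

    def __init__(self, enc_dim: int, dec_dim: int, k: int = 32):
        super().__init__()
        self.k = k
        # Per-position scores for each of the K pooling heads (encoding side).
        self.position_scores = nn.Linear(enc_dim, k)
        # Scores over the K contexts, from the decoder state only (decoding side).
        self.query_scores = nn.Linear(dec_dim, k)

    def encode_memory(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, src_len, enc_dim)
        # alpha: (batch, src_len, K), normalized over source positions.
        alpha = F.softmax(self.position_scores(encoder_states), dim=1)
        # contexts: (batch, K, enc_dim), one pooled vector per head.
        return torch.einsum("bsk,bsd->bkd", alpha, encoder_states)

    def decode_step(self, decoder_state: torch.Tensor,
                    contexts: torch.Tensor) -> torch.Tensor:
        # decoder_state: (batch, dec_dim); contexts: (batch, K, enc_dim)
        # beta: (batch, K) -- no comparison against the full encoder memory.
        beta = F.softmax(self.query_scores(decoder_state), dim=-1)
        return torch.einsum("bk,bkd->bd", beta, contexts)

# Usage: the memory is built once per source sentence, then reused at
# every decoder step.
attn = FixedMemoryAttention(enc_dim=256, dec_dim=256, k=32)
enc = torch.randn(4, 100, 256)                     # 4 sentences, 100 tokens
mem = attn.encode_memory(enc)                      # (4, 32, 256), computed once
ctx = attn.decode_step(torch.randn(4, 256), mem)   # O(K) per step
```

The efficiency gain in this sketch comes from the decode path: because K is a
fixed constant, per-step attention cost no longer grows with source length,
which is consistent with the larger speedups the abstract reports for longer
sequences.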