Sequence to sequence learning has recently emerged as a new paradigm in
supervised learning. To date, most of its applications have focused on a single
task, and relatively little work has explored this framework for multiple tasks. This paper
examines three multi-task learning (MTL) settings for sequence to sequence
models: (a) the one-to-many setting - where the encoder is shared between
several tasks such as machine translation and syntactic parsing, (b) the
many-to-one setting - useful when only the decoder can be shared, as in the
case of translation and image caption generation, and (c) the many-to-many
setting - where multiple encoders and decoders are shared, which is the case
with unsupervised objectives and translation. Our results show that training on
a small amount of parsing and image caption data can improve the translation
quality between English and German by up to 1.5 BLEU points over strong
single-task baselines on the WMT benchmarks. Furthermore, we have established a
new state-of-the-art result in constituent parsing with 93.0 F1. Lastly, we
reveal interesting properties of the two unsupervised learning objectives,
autoencoder and skip-thought, in the MTL context: the autoencoder helps less in
terms of perplexity but more in BLEU score than skip-thought does.
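For concreteness, below is a minimal sketch of the one-to-many setting described above: a single shared encoder feeds separate task-specific decoders, e.g. one for translation and one for parsing. This is not the authors' implementation; the framework choice (PyTorch), model sizes, vocabulary sizes, and all names are illustrative assumptions.

```python
# Sketch of a one-to-many multi-task seq2seq setup: one shared encoder,
# one decoder per task. Hyperparameters and names are illustrative only.
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids
        outputs, state = self.rnn(self.embed(src))
        return outputs, state  # the final state is reused by every task decoder


class TaskDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt, enc_state):
        # tgt: (batch, tgt_len) token ids; decoding starts from the encoder state
        outputs, _ = self.rnn(self.embed(tgt), enc_state)
        return self.out(outputs)  # (batch, tgt_len, vocab_size) logits


# One shared encoder, one decoder per task (translation, parsing, ...).
encoder = SharedEncoder(vocab_size=10000)
decoders = {"translate": TaskDecoder(vocab_size=10000),
            "parse": TaskDecoder(vocab_size=128)}

# Training would alternate between tasks, e.g. sampling one task per
# mini-batch according to a mixing ratio (an assumption about the schedule).
src = torch.randint(0, 10000, (4, 7))   # toy source batch
tgt = torch.randint(0, 10000, (4, 9))   # toy translation targets
_, enc_state = encoder(src)
logits = decoders["translate"](tgt, enc_state)
loss = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   tgt.reshape(-1))
loss.backward()  # gradients from this task flow into the shared encoder
```

The many-to-one and many-to-many settings follow the same pattern with the sharing reversed or applied on both sides (several encoders feeding one decoder, or several encoders and decoders mixed and matched per task).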
