Abstract

Generative Adversarial Networks (GANs) have experienced a recent surge in popularity, performing competitively in a variety of tasks, especially in computer vision. However, GAN training has shown limited success in natural language processing, largely because sequences of text are discrete, so gradients cannot propagate from the discriminator to the generator. Recent solutions use reinforcement learning to propagate approximate gradients to the generator, but this is inefficient to train. We propose to use an autoencoder to learn a low-dimensional representation of sentences. A GAN is then trained to generate its own vectors in this space, which decode to realistic utterances. We report both random and interpolated samples from the generator. Visualization of sentence vectors indicates that our model correctly learns the latent space of the autoencoder. Both human ratings and BLEU scores show that our model generates realistic text against competitive baselines.
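The pipeline described above can be sketched minimally as follows. This is an illustrative toy only, not the paper's implementation: the dimensions, the random linear encoder/decoder standing in for a trained autoencoder, and the random linear generator are all assumptions made purely to show where gradients can and cannot flow. The key point is that the generator outputs continuous latent vectors, and discrete tokens are recovered only at decode time, outside the adversarial training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative assumptions, not the paper's settings.
vocab, seq_len, latent_dim, noise_dim = 50, 10, 8, 4

# Stand-in "autoencoder": random linear encoder/decoder weights.
# In the actual model these would be learned so that decode(encode(x)) ~ x.
W_enc = rng.normal(size=(vocab * seq_len, latent_dim))
W_dec = rng.normal(size=(latent_dim, vocab * seq_len))

def encode(sentences):
    # (batch, seq_len, vocab) one-hot sentences -> (batch, latent_dim) vectors
    return sentences.reshape(len(sentences), -1) @ W_enc

def decode(z):
    # (batch, latent_dim) -> discrete token ids (batch, seq_len);
    # the argmax happens only here, after adversarial training is done.
    logits = (z @ W_dec).reshape(len(z), seq_len, vocab)
    return logits.argmax(axis=-1)

# Stand-in GAN generator: maps noise to points in the latent space.
W_gen = rng.normal(size=(noise_dim, latent_dim))

def generate(batch):
    noise = rng.normal(size=(batch, noise_dim))
    return noise @ W_gen  # continuous output -> gradients can flow

# A discriminator would compare generate(n) against encode(real_sentences);
# both are continuous vectors, so no discrete sampling blocks backpropagation.
fake_z = generate(3)
tokens = decode(fake_z)
```

Because both real and generated samples live in the continuous latent space, the discriminator's gradients reach the generator directly, avoiding the reinforcement-learning workaround mentioned above.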