Due to their powerful capacity for feature learning and representation, deep neural networks (DNNs) have made major breakthroughs in speech recognition and image processing. Following this success in signal variable processing, researchers want to determine whether DNNs can achieve similar progress in symbol variable processing, such as natural language processing (NLP). As one of the more challenging NLP tasks, machine translation (MT) has become a testing ground for researchers who want to evaluate various kinds of DNNs. MT aims to find, for a given source language sentence, the most probable target language sentence with the most similar meaning. Essentially, MT is a sequence-to-sequence prediction task. This article gives a comprehensive overview of applications of DNNs in MT from two views: indirect application, which attempts to improve standard MT systems, and direct application, which adopts DNNs to design a purely neural MT model. We can elaborate further:

• Indirect application designs new features with DNNs within the framework of standard MT systems, which consist of multiple submodels (such as translation selection and language models). For example, DNNs can be leveraged to represent the semantics of the source language context and better predict translation candidates.

• Direct application regards MT as a sequence-to-sequence prediction task and, without using any information from standard MT systems, designs two deep neural networks: an encoder, which learns continuous representations of source language sentences, and a decoder, which generates the target language sentence from those source representations.
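To make the encoder-decoder idea concrete, the sketch below shows the data flow in a deliberately tiny toy model: a recurrent encoder folds source token embeddings into one continuous context vector, and a recurrent decoder emits target tokens greedily from that vector. This is a minimal illustration, not the architecture of any particular system described here; all weights are random stand-ins for trained parameters, and the vocabulary sizes, dimensions, and function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
SRC_VOCAB, TGT_VOCAB, HIDDEN = 10, 8, 16

# Random parameters stand in for weights a real system would learn.
E_src = rng.normal(size=(SRC_VOCAB, HIDDEN))        # source token embeddings
W_enc = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1     # encoder recurrence
E_tgt = rng.normal(size=(TGT_VOCAB, HIDDEN))        # target token embeddings
W_dec = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1     # decoder recurrence
W_out = rng.normal(size=(HIDDEN, TGT_VOCAB)) * 0.1  # hidden state -> vocab scores

def encode(src_ids):
    """Fold the source sentence into a single continuous context vector."""
    h = np.zeros(HIDDEN)
    for tok in src_ids:
        h = np.tanh(E_src[tok] + W_enc @ h)
    return h

def decode(context, max_len=5, bos=0):
    """Greedily emit target tokens, conditioned on the encoder context."""
    h, prev, out = context, bos, []
    for _ in range(max_len):
        h = np.tanh(E_tgt[prev] + W_dec @ h)
        prev = int(np.argmax(h @ W_out))  # most probable next target token
        out.append(prev)
    return out

translation = decode(encode([3, 1, 4]))
print(translation)  # a sequence of target token ids
```

With untrained weights the output is of course meaningless; the point is only the interface: the decoder never sees the source sentence directly, only the fixed-size continuous representation produced by the encoder.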