Deep Learning Simplified

RECURRENT NEURAL NETWORKS (RNN) – PART 3: Encoder-Decoder

In this post, I will cover the basic encoder-decoder architecture used for sequence-to-sequence tasks such as machine translation. We will not cover attention in this post, but we will implement it in the next one.

Here we feed the input sequence into the encoder, which generates a final hidden state that we pass to the decoder: the encoder's final hidden state becomes the decoder's initial state. We apply softmax to the decoder outputs and compare them to the targets to calculate our loss. You can find out more about the paper this model comes from in this post. The main difference is that I do not add an EOS token to the encoder inputs and I do not reverse the encoder inputs.

Data:

I wanted to create a very short dataset to work with (20 sentences in English and Spanish). The point of this tutorial is just to see how to build an encoder-decoder system for tasks such as machine translation and other sequence-to-sequence processing. So I wrote several sentences about myself, translated them to Spanish, and that is our data.

First we separate the sentences into tokens and then convert the tokens into token ids. During this process we collect a vocabulary dict and a reverse vocabulary dict to convert back and forth between tokens and token ids. For our target language (Spanish), we add an extra EOS token. Then we pad both source and target tokens to the max length (the longest sentence in the respective dataset). This is the data we feed into our model. We use the padded source inputs as-is for the encoder, but we will make further additions to the target inputs in order to get our decoder inputs and outputs.
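The preprocessing above can be sketched roughly as follows. The sentence pairs, special-token names, and helper functions here are illustrative assumptions, not the author's actual code:

```python
# Minimal sketch of the preprocessing: tokenize, build vocab dicts,
# add EOS to targets, and pad to the max length. PAD/GO/EOS ids are
# illustrative and match the convention described later in the post.
PAD, GO, EOS = 0, 1, 2

def build_vocab(sentences, reserved=("<PAD>", "<GO>", "<EOS>")):
    # vocab maps token -> id; rev_vocab maps id -> token
    vocab = {tok: i for i, tok in enumerate(reserved)}
    for sent in sentences:
        for tok in sent.lower().split():
            vocab.setdefault(tok, len(vocab))
    rev_vocab = {i: tok for tok, i in vocab.items()}
    return vocab, rev_vocab

def to_ids(sentences, vocab, add_eos=False):
    ids = [[vocab[t] for t in s.lower().split()] for s in sentences]
    if add_eos:  # only the target language gets an EOS token
        ids = [row + [EOS] for row in ids]
    return ids

def pad(rows, pad_id=PAD):
    # pad every row to the length of the longest row
    max_len = max(len(r) for r in rows)
    return [r + [pad_id] * (max_len - len(r)) for r in rows]

# toy stand-ins for the real 20-sentence dataset
source = ["i like to read books", "i live in california"]
target = ["me gusta leer libros", "vivo en california"]

src_vocab, src_rev = build_vocab(source)
tgt_vocab, tgt_rev = build_vocab(target)
src_ids = pad(to_ids(source, src_vocab))
tgt_ids = pad(to_ids(target, tgt_vocab, add_eos=True))
```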

Finally, the inputs will look like this:

This is just one sample from a batch. The 0's are padding, 1 is the GO token, and 2 is the EOS token. A more general representation of the data transformation is below. Ignore the target weights; we do not use them in our implementation.
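One way to derive the decoder inputs and targets from the padded target ids is to prepend the GO token and shift. This is a hedged sketch (the helper name and the example ids are made up), but it matches the GO/EOS/PAD convention above:

```python
PAD, GO, EOS = 0, 1, 2

def make_decoder_io(padded_target_ids):
    # Prepend GO and drop the last position so inputs and targets align:
    # decoder input at step t predicts the target at step t.
    decoder_inputs = [[GO] + row[:-1] for row in padded_target_ids]
    decoder_targets = [list(row) for row in padded_target_ids]
    return decoder_inputs, decoder_targets

# e.g. a padded target row [3, 4, 5, 6, EOS, PAD]
tgt = [[3, 4, 5, 6, EOS, PAD]]
dec_in, dec_out = make_decoder_io(tgt)
```

With this shift, the GO token lines up with the first target token and the last relevant decoder input lines up with the EOS target, exactly as described in the decoder section below.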

Encoder:

The encoder simply takes the encoder inputs, and the only thing we care about is the final hidden state, which holds information from the entire input sequence. We do not reverse the encoder inputs as the paper suggests because we pass seq_len to dynamic_rnn, which automatically returns the last relevant hidden state for each sample based on its sequence length.
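To see what "last relevant hidden state based on seq_lens" means, here is a toy numpy sketch (not the TensorFlow code): a vanilla RNN unrolled over padded inputs, where we snapshot each sample's hidden state at its true length instead of at the final (padded) step. All shapes and weights are illustrative assumptions:

```python
import numpy as np

np.random.seed(0)
batch, max_len, in_dim, hid = 2, 5, 4, 3
x = np.random.randn(batch, max_len, in_dim)   # padded input batch
seq_lens = np.array([5, 3])                   # sample 1 has two PAD steps
W_x = np.random.randn(in_dim, hid) * 0.1
W_h = np.random.randn(hid, hid) * 0.1

h = np.zeros((batch, hid))
last_relevant = np.zeros((batch, hid))
for t in range(max_len):
    h = np.tanh(x[:, t] @ W_x + h @ W_h)
    # snapshot the state for samples whose sequence ends at step t+1
    ended = (seq_lens == t + 1)
    last_relevant[ended] = h[ended]
```

This is what dynamic_rnn does for us under the hood, which is why reversing the inputs is unnecessary here.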

We will use this final hidden state as the new initial state for our decoder.

Decoder:

This simple decoder takes the final hidden state from the encoder as its initial state. We also embed the decoder inputs and process them with the decoder RNN. The outputs are normalized with softmax and then compared with the targets. Note that the decoder inputs start with a GO token, which is used to predict the first target token. The decoder input's last relevant token will predict the EOS target token.

But what about the paddings? They will also predict some output target. We don't really care about those predictions, but they will still affect our loss if we factor them in. This is where we mask the loss to remove the influence of padding in the targets.

Loss Masking:

We will use the targets, and wherever the target is a PAD, we will mask the loss at that location to 0. When we reach the last relevant decoder token, the corresponding target will be the EOS token id; for the next decoder input, the target will be a PAD id. This is where the masking starts.

We cleverly use the fact that PAD ids are 0 to apply the loss mask. Once we apply the mask, we compute the sum of the losses for each row (each sample in the batch) and then take the mean of all the samples' losses to get the batch loss. From here, we can simply train by minimizing this loss.
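The masking trick can be sketched in numpy like this. Since PAD is id 0, `targets > 0` is already the mask. The vocabulary size and the target values below are made up for illustration:

```python
import numpy as np

np.random.seed(1)
batch, max_len, vocab = 2, 4, 6
logits = np.random.randn(batch, max_len, vocab)
targets = np.array([[3, 4, 2, 0],    # 2 = EOS, 0 = PAD
                    [5, 2, 0, 0]])

# per-position cross-entropy computed from logits (log-softmax for stability)
log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
ce = -np.take_along_axis(log_probs, targets[..., None], axis=-1).squeeze(-1)

mask = (targets > 0).astype(float)   # 0 at PAD positions, 1 elsewhere
masked = ce * mask
loss_per_sample = masked.sum(axis=1)  # sum of losses for each row
batch_loss = loss_per_sample.mean()   # mean over the batch
```

Note that GO (id 1) never appears as a target, so `targets > 0` only zeroes out the PAD positions.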

Here are the training results:

We won't be doing any inference here, but you can find it in the following post with attention. If you really want to implement inference here, use the same model as training, but feed the predicted target back in as the input for the next decoder RNN cell. You need to embed it with the same set of weights used to embed inputs into the decoder, and pass it as the next input to the RNN. This means that for the initial GO token, you need to feed in some dummy input token that will be embedded.
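The feedback loop described above amounts to greedy decoding. Here is a toy numpy sketch of the loop structure; the weights are random, so the decoded tokens are meaningless, and all names and sizes are assumptions:

```python
import numpy as np

np.random.seed(2)
GO, EOS = 1, 2
vocab, emb_dim, hid = 8, 5, 4
embed = np.random.randn(vocab, emb_dim) * 0.1  # same embeddings as training
W_x = np.random.randn(emb_dim, hid) * 0.1
W_h = np.random.randn(hid, hid) * 0.1
W_out = np.random.randn(hid, vocab) * 0.1      # projection to vocab logits

h = np.zeros(hid)          # in the real model: the encoder's final hidden state
token, decoded, max_steps = GO, [], 10
for _ in range(max_steps):
    h = np.tanh(embed[token] @ W_x + h @ W_h)   # embed prediction, step the RNN
    token = int(np.argmax(h @ W_out))           # greedy pick becomes next input
    if token == EOS:
        break
    decoded.append(token)
```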

Conclusion:

This encoder-decoder model is quite simple, but it is a necessary foundation for understanding the seq-to-seq implementation with attention. In the next RNN tutorial, we will cover attentional interfaces and their advantages over this encoder-decoder architecture.

Hey, I love this tutorial! I was able to actually understand most of the tensorflow tutorial after reading this, so to the comment above, code provided there should be good enough to get us started. But I would also like access to your repo but I see this next to the link “(Updating all repos, will be back up soon!)” so I will be waiting patiently!

Hello, I am very new to seq2seq learning. How do we compute the loss function? For every time step, the output of the decoder is a vector of size vocab_size (of the target side) after performing softmax, but we have just a single number as the target output. Should we create a one-hot vector of size vocab_size and compute the loss?

Hey Jatin, good question. Once you have the output of the decoder, you do TWO things with it. First, multiply it by a set of weights [num_hidden_units out of decoder X target_vocab_size]. This gives you the logits. You can then apply softmax, which just normalizes the values between 0 and 1. When actually computing the loss in TensorFlow or PyTorch, you'll notice you need to feed in the logits and not the normalized softmax values.
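Numerically, this reply works out as follows. This sketch (sizes made up) shows that computing cross-entropy straight from the logits gives the same answer as building a one-hot vector and dotting it with the log-softmax, which is why frameworks accept the integer target id plus logits directly:

```python
import numpy as np

np.random.seed(3)
hid, tgt_vocab = 4, 7
dec_output = np.random.randn(hid)
W = np.random.randn(hid, tgt_vocab)   # [num_hidden_units X target_vocab_size]
logits = dec_output @ W
target_id = 3                         # just the integer id; no one-hot needed

# stable cross-entropy computed straight from the logits (log-sum-exp form)
m = logits.max()
loss = -(logits[target_id] - (m + np.log(np.exp(logits - m).sum())))

# equivalent: -log(softmax[target_id]), i.e. one-hot dot log-softmax
softmax = np.exp(logits - m) / np.exp(logits - m).sum()
loss_via_softmax = -np.log(softmax[target_id])
```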