Problem Description: Learn the Alphabet

In this tutorial we are going to develop and contrast a number of different LSTM recurrent neural network models.

The context of these comparisons will be a simple sequence prediction problem of learning the alphabet. That is, given a letter of the alphabet, predict the next letter of the alphabet.

This is a simple sequence prediction problem that once understood can be generalized to other sequence prediction problems like time series prediction and sequence classification.

Let’s prepare the problem with some Python code that we can reuse from example to example.

Firstly, let’s import all of the classes and functions we plan to use in this tutorial.

import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils

Next, we can seed the random number generator to ensure that the results are the same each time the code is executed.

# fix random seed for reproducibility
numpy.random.seed(7)

We can now define our dataset, the alphabet. We define the alphabet in uppercase characters for readability.

Neural networks model numbers, so we need to map the letters of the alphabet to integer values. We can do this easily by creating a dictionary (map) of each character to its integer index. We can also create a reverse lookup for converting predictions back into characters, to be used later.

# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))

Now we need to create our input and output pairs on which to train our neural network. We can do this by defining an input sequence length, then reading sequences from the input alphabet sequence.

For example, with an input length of 1, we start at the beginning of the raw input data and read off the first letter “A” and the next letter “B” as the prediction. We then move along one character and repeat until we reach a prediction of “Z”.

# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
    print(seq_in, '->', seq_out)

We also print out the input pairs for sanity checking.

Running the code to this point will produce the following output, summarizing input sequences of length 1 and a single output character.

A -> B

B -> C

C -> D

D -> E

E -> F

F -> G

G -> H

H -> I

I -> J

J -> K

K -> L

L -> M

M -> N

N -> O

O -> P

P -> Q

Q -> R

R -> S

S -> T

T -> U

U -> V

V -> W

W -> X

X -> Y

Y -> Z

We need to reshape the NumPy array into a format expected by the LSTM networks, that is [samples, time steps, features].

# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))

Once reshaped, we can then normalize the input integers to the range 0-to-1, the range of the sigmoid activation functions used by the LSTM network.

# normalize
X = X / float(len(alphabet))

Finally, we can think of this problem as a sequence classification task, where each of the 26 letters represents a different class. As such, we can convert the output (y) to a one hot encoding, using the Keras built-in function to_categorical().
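For example, a one-line sketch of that conversion, reusing the np_utils import from the top of the tutorial:

# one hot encode the output variable
y = np_utils.to_categorical(dataY)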

Naive LSTM for Learning One-Char to One-Char Mapping

Let’s start off by designing a simple LSTM to learn how to predict the next character in the alphabet given the context of just one character.

We will frame the problem as a random collection of one-letter input to one-letter output pairs. As we will see this is a difficult framing of the problem for the LSTM to learn.

Let’s define an LSTM network with 32 units and an output layer with a softmax activation function for making predictions. Because this is a multi-class classification problem, we can use the log loss function (called “categorical_crossentropy” in Keras), and optimize the network using the ADAM optimization function.

After we fit the model we can evaluate and summarize the performance on the entire training dataset.
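A minimal sketch of that model definition and training follows; the 32 units and softmax output come from the description above, while the 500 epochs and batch size of 1 are assumptions for illustration:

# create and fit the model (epoch count and batch size are illustrative)
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=1, verbose=2)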

# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1] * 100))

We can then re-run the training data through the network and generate predictions, converting both the input and output pairs back into their original character format to get a visual idea of how well the network learned the problem.
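A sketch of that demonstration loop, reusing the int_to_char lookup defined earlier (the exact loop here is an illustration rather than the original listing):

# demonstrate some model predictions
for pattern in dataX:
    x = numpy.reshape(pattern, (1, len(pattern), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)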

We can see that this problem is indeed difficult for the network to learn.

The reason is that the LSTM units do not have any context to work with. Each input-output pattern is shown to the network in a random order, and the state of the network is reset after each pattern (each batch, where each batch contains one pattern).

This is an abuse of the LSTM network architecture, treating it like a standard multilayer Perceptron.

Next, let’s try a different framing of the problem in order to provide more sequence to the network from which to learn.

Naive LSTM for a Three-Char Feature Window to One-Char Mapping

A popular approach to adding more context to data for multilayer Perceptrons is to use the window method.

This is where previous steps in the sequence are provided as additional input features to the network. We can try the same trick to provide more context to the LSTM network.

Here, we increase the sequence length from 1 to 3, for example:

# prepare the dataset of input to output pairs encoded as integers
seq_length = 3

Which creates training patterns like:

ABC -> D

BCD -> E

CDE -> F

Each element in the sequence is then provided as a new input feature to the network. This requires a modification of how the input sequences are reshaped in the data preparation step:

# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), 1, seq_length))

It also requires a modification for how the sample patterns are reshaped when demonstrating predictions from the model.
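For example, the prediction demonstration from the first example would now reshape each pattern as a single time step of seq_length features (a sketch mirroring the earlier loop):

# demonstrate some model predictions with the feature-window framing
for pattern in dataX:
    x = numpy.reshape(pattern, (1, 1, len(pattern)))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    result = int_to_char[index]
    seq_in = [int_to_char[value] for value in pattern]
    print(seq_in, "->", result)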

We can see a small lift in performance that may or may not be real. This is a simple problem that we were still not able to learn with LSTMs even with the window method.

Again, this is a misuse of the LSTM network by a poor framing of the problem. Indeed, the sequences of letters are time steps of one feature rather than one time step of separate features. We have given more context to the network, but not more sequence as it expected.

In the next section, we will give more context to the network in the form of time steps.

Naive LSTM for a Three-Char Time Step Window to One-Char Mapping

In Keras, the intended use of LSTMs is to provide context in the form of time steps, rather than windowed features like with other network types.

We can take our first example and simply change the sequence length from 1 to 3.

seq_length = 3

Again, this creates input-output pairs that look like:

ABC -> D

BCD -> E

CDE -> F

DEF -> G

The difference is that the reshaping of the input data takes the sequence as a time step sequence of one feature, rather than a single time step of multiple features.

# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))

This is the correct intended use of providing sequence context to your LSTM in Keras. The full code example is provided below for completeness.
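The listing below is a minimal sketch of this framing rather than a tuned configuration; the 32-unit LSTM, 500 epochs, and batch size of 1 are assumptions carried over from the earlier examples.

# Naive LSTM to learn three-char time steps to one-char mapping (sketch)
seq_length = 3
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create, fit and evaluate the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=1, verbose=2)
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1] * 100))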

We can see that the model learns the problem perfectly as evidenced by the model evaluation and the example predictions.

But it has learned a simpler problem. Specifically, it has learned to predict the next letter from a sequence of three letters in the alphabet. It can be shown any random sequence of three letters from the alphabet and predict the next letter.

It cannot actually enumerate the alphabet. I expect that a large enough multilayer Perceptron network might be able to learn the same mapping using the window method.

The LSTM networks are stateful. They should be able to learn the whole alphabet sequence, but by default the Keras implementation resets the network state after each training batch.

LSTM State Within A Batch

The Keras implementation of LSTMs resets the state of the network after each batch.

This suggests that if we had a batch size large enough to hold all input patterns and if all the input patterns were ordered sequentially, that the LSTM could use the context of the sequence within the batch to better learn the sequence.

We can demonstrate this easily by modifying the first example for learning a one-to-one mapping and increasing the batch size from 1 to the size of the training dataset.

Additionally, Keras shuffles the training dataset before each training epoch. To ensure the training data patterns remain sequential, we can disable this shuffling.

The network will learn the mapping of characters using the within-batch sequence, but this context will not be available to the network when making predictions. We can evaluate both the ability of the network to make predictions randomly and in sequence.

The full code example is provided below for completeness.

# Naive LSTM to learn one-char to one-char mapping with all data in each batch
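# (sketch only: the listing reuses the imports, alphabet and char/int mappings
# defined earlier; the 16 units and 5000 epochs below are assumptions rather
# than confirmed settings)
seq_length = 1
dataX = []
dataY = []
for i in range(0, len(alphabet) - seq_length, 1):
    seq_in = alphabet[i:i + seq_length]
    seq_out = alphabet[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
# reshape, normalize and one hot encode as before
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
X = X / float(len(alphabet))
y = np_utils.to_categorical(dataY)
# create and fit the model with one batch covering the whole (unshuffled) dataset
model = Sequential()
model.add(LSTM(16, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=5000, batch_size=len(dataX), verbose=2, shuffle=False)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1] * 100))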

As we expected, the network is able to use the within-sequence context to learn the alphabet, achieving 100% accuracy on the training data.

Importantly, the network can make accurate predictions for the next letter in the alphabet for randomly selected characters. Very impressive.

Stateful LSTM for a One-Char to One-Char Mapping

We have seen that we can break up our raw data into fixed-size sequences and that this representation can be learned by the LSTM, but only to learn random mappings of 3 characters to 1 character.

We have also seen that we can pervert batch size to offer more sequence to the network, but only during training.

Ideally, we want to expose the network to the entire sequence and let it learn the inter-dependencies, rather than us define those dependencies explicitly in the framing of the problem.

We can do this in Keras by making the LSTM layers stateful and manually resetting the state of the network at the end of the epoch, which is also the end of the training sequence.

This is truly how the LSTM networks are intended to be used.

We first need to define our LSTM layer as stateful. In so doing, we must explicitly specify the batch size as a dimension on the input shape. This also means that when we evaluate the network or make predictions, we must also specify and adhere to this same batch size. This is not a problem now as we are using a batch size of 1. This could introduce difficulties when making predictions when the batch size is not one as predictions will need to be made in batch and in sequence.
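A sketch of the stateful layer definition with a batch size of 1; the 16 units here are illustrative rather than a confirmed setting:

# define a stateful LSTM: the batch size is part of the input shape
batch_size = 1
model = Sequential()
model.add(LSTM(16, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])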

An important difference in training the stateful LSTM is that we train it manually one epoch at a time and reset the state after each epoch. We can do this in a for loop. Again, we do not shuffle the input, preserving the sequence in which the input training data was created.

for i in range(300):
    model.fit(X, y, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()

As mentioned, we specify the batch size when evaluating the performance of the network on the entire training dataset.

# summarize performance of the model
scores = model.evaluate(X, y, batch_size=batch_size, verbose=0)
model.reset_states()
print("Model Accuracy: %.2f%%" % (scores[1] * 100))

Finally, we can demonstrate that the network has indeed learned the entire alphabet. We can seed it with the first letter “A”, request a prediction, feed the prediction back in as an input, and repeat the process all the way to “Z”.

# demonstrate some model predictions
seed = [char_to_int[alphabet[0]]]
for i in range(0, len(alphabet) - 1):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print(int_to_char[seed[0]], "->", int_to_char[index])
    seed = [index]
model.reset_states()

We can also see if the network can make predictions starting from an arbitrary letter.
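For example, a sketch that seeds the network with an arbitrary letter; the letter “K” and the five prediction steps are illustrative choices:

# demonstrate predictions from an arbitrary starting letter
letter = "K"
seed = [char_to_int[letter]]
print("New start: ", letter)
for i in range(0, 5):
    x = numpy.reshape(seed, (1, len(seed), 1))
    x = x / float(len(alphabet))
    prediction = model.predict(x, verbose=0)
    index = numpy.argmax(prediction)
    print(int_to_char[seed[0]], "->", int_to_char[index])
    seed = [index]
model.reset_states()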

We can see that the network has memorized the entire alphabet perfectly. It used the context of the samples themselves and learned whatever dependency it needed to predict the next character in the sequence.

We can also see that if we seed the network with the first letter, that it can correctly rattle off the rest of the alphabet.

We can also see that it has only learned the full alphabet sequence, and only from a cold start. When asked to predict the next letter from “K”, it predicts “B” and falls back into regurgitating the entire alphabet.

To make the correct prediction from “K”, the state of the network would need to be warmed up by iteratively feeding it the letters from “A” to “J”. This tells us that we could achieve the same effect with a “stateless” LSTM by preparing training data like:

---a->b

--ab->c

-abc->d

abcd->e

Where the input sequence is fixed at 25 (a-to-y to predict z) and patterns are prefixed with zero-padding.
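A sketch of how such prefix-padded patterns could be prepared; this extrapolates the idea rather than reproducing code from the tutorial:

from keras.preprocessing.sequence import pad_sequences

# growing subsequences "A", "AB", ..., "ABC...Y", each predicting the next letter
max_len = len(alphabet) - 1  # 25
dataX = []
dataY = []
for i in range(1, len(alphabet)):
    dataX.append([char_to_int[char] for char in alphabet[0:i]])
    dataY.append(char_to_int[alphabet[i]])
# left-hand-side (prefix) zero padding to the fixed length of 25
X = pad_sequences(dataX, maxlen=max_len)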

Finally, this raises the question of training an LSTM network using variable length input sequences to predict the next character.

LSTM with Variable-Length Input to One-Char Output

In the previous section, we discovered that the Keras “stateful” LSTM was really only a shortcut to replaying the first n-sequences, but didn’t really help us learn a generic model of the alphabet.

In this section, we explore a variation of the “stateless” LSTM that learns random subsequences of the alphabet, in an effort to build a model that can be given arbitrary letters or subsequences of letters and predict the next letter in the alphabet.

Firstly, we are changing the framing of the problem. To simplify, we will define a maximum input sequence length and set it to a small value like 5 to speed up training. This defines the maximum length of the subsequences of the alphabet that will be drawn for training. In extensions, this could just as easily be set to the full alphabet (26) or longer if we allow looping back to the start of the sequence.

We also need to define the number of random sequences to create, in this case 1000. This too could be more or less. I expect fewer patterns are actually required.

# prepare the dataset of input to output pairs encoded as integers
num_inputs = 1000
max_len = 5
dataX = []
dataY = []
for i in range(num_inputs):
    start = numpy.random.randint(len(alphabet) - 2)
    end = numpy.random.randint(start, min(start + max_len, len(alphabet) - 1))
    sequence_in = alphabet[start:end + 1]
    sequence_out = alphabet[end + 1]
    dataX.append([char_to_int[char] for char in sequence_in])
    dataY.append(char_to_int[sequence_out])
    print(sequence_in, '->', sequence_out)

Running this code in the broader context will create input patterns that look like the following:

PQRST -> U

W -> X

O -> P

OPQ -> R

IJKLM -> N

QRSTU -> V

ABCD -> E

X -> Y

GHIJ -> K

The input sequences vary in length between 1 and max_len and therefore require zero padding. Here, we use left-hand-side (prefix) padding with the Keras built-in pad_sequences() function.

from keras.preprocessing.sequence import pad_sequences
X = pad_sequences(dataX, maxlen=max_len, dtype='float32')
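The remaining preparation and model definition follow the same pattern as before; in this sketch, the 32 units, batch size, and epoch count are assumptions:

# reshape, normalize and one hot encode as before
X = numpy.reshape(X, (X.shape[0], max_len, 1))
X = X / float(len(alphabet))
y = np_utils.to_categorical(dataY)
# create and fit the model
batch_size = 1
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=batch_size, verbose=2)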

The trained model is evaluated on randomly selected input patterns. This could just as easily be new randomly generated sequences of characters. I also believe this could be a linear sequence seeded with “A” with outputs fed back in as single-character inputs.

We can see that although the model did not learn the alphabet perfectly from the randomly generated subsequences, it did very well. The model was not tuned and may require more training or a larger network, or both (an exercise for the reader).

This is a good natural extension to the “all sequential input examples in each batch” alphabet model learned above in that it can handle ad hoc queries, but this time of arbitrary sequence length (up to the max length).

Summary

In this post you discovered LSTM recurrent neural networks in Keras and how they manage state.

Specifically, you learned:

How to develop a naive LSTM network for one-character to one-character prediction.

How to configure a naive LSTM to learn a sequence across time steps within a sample.

How to configure an LSTM to learn a sequence across samples by manually managing state.

Do you have any questions about managing LSTM state or about this post?
Ask your questions in the comments and I will do my best to answer.

I’m probably missing something here but could you please explain why LSTM units are needed in the alphabet example where any output depends directly on the input letter and there is no confusion between different input -> output pairs at all?

Thank you very much for your fantastic tutorials. I learnt a lot through them.
Recurrent networks are quite complex and confusing to use. And like Shanbe, I am a bit confused with the examples, as each output does not seem to have a dependency on the previous inputs but just a direct link to the current input. I am probably missing something too, but I don’t see how it demonstrates the benefits of using a memory rather than a direct Dense layer.
Actually, the 100% accuracy can be achieved by just using the last layer and removing the LSTM layer (but it takes a few more epochs).

I think the confusion is coming from the fact that the letters are encoded into integers. This would not be the case if the neural network could handle letters directly. In this example, they are encoded in the same order and therefore there is a direct relationship between the numbers. We are just teaching the network that the output equals the input+constant. Which is easy to obtain with a simple regression with one cell. But we still need the 26 outputs for the decoding at the end.

Maybe I am lost, but I think the demonstration would make more sense if the letters were scrambled (to avoid the possibility of a simple linear combination, which is solved by regression with one cell) and if we were trying to predict a previous letter rather than the next one. The use of a memory would be meaningful. But I didn’t try.
Nevertheless, the examples are still very interesting to show different approaches to the problem.

Thank you for sharing these educational examples! I would appreciate it if you could elaborate on the “LSTM State Within A Batch” example. The confusing part is the explanation “that the LSTM could use the context of the sequence within the batch to better learn the sequence.”, which may imply that the states of the LSTM are reused within the training of a batch, and that this motivated setting the parameter shuffle to False.

But my understanding of Keras’s implementation is that the LSTM won’t reuse the states within a batch. In fact, the sequences in a batch are kinda like triggering the LSTM “in parallel” – the states of the LSTM should be of shape (nsamples, nout) for both gates – separate states for each sequence – this is what is described in the [Keras documentation](https://keras.io/getting-started/faq/#how-can-i-use-stateful-rnns): states are reused by the ith instance across successive batches.

This means that even if the parameter shuffle is set to True, it will still give you the observed performance. This also explains why the predictions on random patterns were also good, which was the opposite of the observations in the next example, “Stateful LSTM for a One-Char to One-Char Mapping”. The reason why setting a bigger batch size resulted in better performance than the first example could be the bigger nb_epoch used.

I do agree. The better result came with a large batch size because it reaches the minimum faster and achieves good accuracy in 5000 epochs. If we increase the number of epochs when batch_size=1, we can get 100% accuracy.

I think the accuracy increased because there was no resetting of the network, as all the samples fit within one batch and the default resetting of the network takes place after each batch of samples, whereas with batch_size=1 the network was being reset after each batch, which was equal to just one sample.

I do agree with your understanding, that the states within each batch are updated separately rather than can be used as context information. [This discussion](https://stackoverflow.com/a/46331227/7653982) gives more details about the parameter stateful.

Thank you for your amazing tutorial.
However, what if I want to predict a sequence of outputs? If I add a dimension to the output it is gonna be like a features window and the model will not consider the outputs as a sequence of outputs. It is like outputs are independent. How can I fix that issue and have a model which for example generates “FGH” when I give it “BCDE”.

Hadi, I use a “one-hot” encoding for my features. This causes the output to be a probability distribution over the features. You may then use a sampling technique to choose “next” features, similar to the “Unreasonable Effectiveness of RNN” article. This method includes a “temperature” parameter that you can tune to produce outputs with more or less adherence to the LSTM’s predictions.

However, I have a few related questions (also posted in StackOverflow, http://stackoverflow.com/questions/39457744/backpropagation-through-time-in-stateful-rnns): If I have a stateful RNN with just one timestep per batch, how is backpropagation handled? Will it handle only this one timestep or does it accumulate updates for the entire sequence? I fear that it updates only the timesteps per batch and nothing further back. If so, do you think this is a major drawback? Or do you know a way to overcome this?

I believe updates are performed after each batch. This can be a downside of having a small batch. A batch size of 1 will essentially perform online gradient descent (I would guess). Developers more familiar with Keras internals may be able to give you a more concrete answer.

I posed my question wrongly because I mixed up “batch size” and “time steps”. If I have sequences of shape (nb_samples, n, dims) and I process them one time step after the other with a stateful LSTM (feeding batches of shape (batch_size, 1, dims) to the network), will backpropagation go through the entire sequences as it would if I processed the entire sequence at once?

Great tutorial as always. It was fun to run through your code line by line, work with a smaller alphabet (like “ABCDE”), change the sequence length etc just to figure out how the model behaves. I think I’m growing fond of LSTMs these days.

I have a very basic question about the shape of the input tensor. Keras requires our input to have the form [samples, time_steps, features]. Could you tell me what the attribute features exactly means?

Also, consider a scenario for training an LSTM network for binary classification of audio. Suppose I have a collection of 1000 files and for each file, I have extracted 13 features (MFCC values). Also suppose that every file is 500 frames long and I set the time_steps to 1.

Thank you for the enlightening series of articles on LSTMs!
Just a minor detail, the complete code for the final example is missing an import for pad_sequences:
from keras.preprocessing.sequence import pad_sequences

Hi Jason,
Thank you for your tutorials, I learn a lot from them.
However, I still don’t quite understand the meaning of batch size. In the last example you set batch_size to be 1, but the network still learned the next letter in the sequence based on the whole sequence, or was it just based on the last letter every time?
What would have happened if you set batch_size=3 and all the sequences would be at minimal length 3?
Thank you

I think there is a mistake in the paragraph “LSTM State Within A Batch”.
You say “The Keras implementation of LSTMs resets the state of the network after each batch.”.
But the fact is that it resets its state even between the inputs of a batch.
You can see the poor performances (around .5) of this LSTM that tries to remember its last input : http://pastebin.com/u5NnAx9r

Basically, here, with the stateful LSTM, when the network was fed “K”, it predicted “B”, which is wrong. Does it mean that it has just memorized the sequence and will always predict “B” irrespective of what is fed? How can we extend it to general video prediction tasks? Please look at the link above.

Sometimes it is helpful to extract LSTM outputs from certain time steps. For example, we may want to get the outputs corresponding to the last 30 words from a text, or the 100th~200th words from a text body. It would be great to write an instruction about it.

I want to use an LSTM for words instead of letters. How can I implement that? Moreover, can I use it for part-of-speech tagging? Part-of-speech tagging is also a sequential problem because it is also dependent on context.
thanks

Thank you for this great tutorial. I really appreciated going through your LSTM code.
I have one question about the “features”: you are mentioning them but I don’t see where you are including them. Do they represent a multivariate problem? How do we handle that?

Thanks for this great tutorial!
I have a question. I am dealing with kind of the same problem. However, each character in a sequence is a vector of features in my input data.
Instead of [‘A’, ‘B’, ‘C’] -> D I have [[0, 0,1, 1.3], [6,3,1,1.5], [6, 4, 1.4, 4.5]] -> [1, 3, 4]

So considering all sequences, my data is in 3d shape.
Could someone help me how to configure the input for LSTM?

Great tutorial and one that has come closest to ending my confusion. However I still not 100% clear. If I have time-series training data of length 5000, and each row of data consists of two features, can I shape my input data (x) as a single batch of 5000,1,2? And if this is the case — under what circumstances would I ever need to increase the time step to more than 1? I’m struggling to see the value the time step dimension is adding if the LSTM remembers across an entire batch (which in my above scenario would be like a time step of 5000 right?)

Hi Jason,
Thanks for your tutorials, I just have a confusion on stateful LSTM in keras.
I know the hidden state will pass through different timesteps. Will the hidden state be passed along within one batch?
Or do the samples in one batch have differently initialized hidden states?

It’s an awesome tutorial, but I still have a problem. When we use an LSTM/RNN, we usually initialize the state of the LSTM/RNN in some random way, like orthogonal initialization. Therefore, when we use the LSTM for predicting, the initial state may be (or must be) different from training, so how can it always get the right prediction? Even more, when we train an LSTM with stateful=False, the initial state will be reset, which means it is initialized randomly, so how can it always learn the right model?
I await your answer, thank you!

In its favour, it does train (and eventually approaches 100% accuracy if you up the batch size and epochs a bit).
On the other hand, it does seem that the training data is wrong and that it is learning the correct result in spite of it. I have tried an alternative dataset where the padded data is a random selection of letters. It doesn’t work particularly well for sequences of only 1 letter, but longer sequences seem OK.
I have pasted the new version below. Note that I have increased the dataset size, batch size and training epochs.

This LSTM business is a bit more subtle than I initially thought…. thanks for posting this tutorial though, I really like the “this is how to do it wrong” style of the first few examples. Very useful.

1) Does setting an LSTM to [stateful = false] and using batch size 1 basically turn an LSTM into a more complicated Feedforward net, i.e., a neural net with no memory or knowledge of sequence?

2) After training a “stateful” model above, you reset the states and then make predictions. This means (wrt to the standard lstm equations, found here – https://en.wikipedia.org/wiki/Long_short-term_memory) that the previous cell state, and the hidden state are zero. You can see this by using something like model.get_layer(index = 1).states[0].eval(). This is also true of a non-stateful model – the states are listed as [none] in keras. The confusing part is that resetting the state makes the forget gate and the “U” weights zero-out (as per the equations). Yet, as we see in your above tutorial, you can still make accurate predictions! It makes me wonder why we have a forget gate and U weights at all?

If my questions are confusing in any way, please let me know. Thanks ahead of time for your attention, and for these awesome tutorials.

Dear Jason;
Thanks for your useful tutorials. I have a question regarding the ‘Accuracy’. As we see in the first example, “Naive LSTM”, accuracy is 84% but none of the predictions are correct. My question is how this accuracy has been calculated?

Fantastic post, as always Jason. I especially appreciate the way you took time to show downside of alternatives to a stateless model, like piling in extra features. You say you are not a professor, but you think like a teacher, and not many are bothering to do that with such recent technologies.

* For each observation (row) in the input data Keras saves output_dim states (where output_dim = the number of cells in the LSTM), so that
* After processing a batch, it will have collected an array of state information of size (batch_size, output_dim)
* Since state straddles batch boundaries, we must specify the shape of that state array (batch_size, output_dim) in the model definition.

My questions:
* Is that an accurate description?
* May I infer that state information straddles observations (rows of ip data, sequences, whatever you want to call it)?
* May I infer that state information straddles observations even in “stateless” LSTMs? (even though it does *not* straddle batches)
* If the LSTM keeps old state information around for the entire sequence, does that mean that it could potentially operate on an old state (say the state held 4, 5 or 50 cells ago)?
* If so, does that make LSTMs resemble autoregressive routines with a *huge* look back, which are thus directly (not just implicitly) dependent on state values that occurred many steps before?
* Is that explained in the original LSTM paper by Hochreiter?

A batch is a group of samples. After a batch the weights are updated and state is reset. One epoch is comprised of 1 or more batches.

LSTMs are poor at autoregression. They are not learning a static function on lag observations, instead, they learn a more complex function with the internal state acting as internal variables to the function. Past observations are not inputs to the function directly, rather they influence the internal state and t-1 output.

Yes, it helps, because it eliminates the possibility of past states (older than t – 1) directly influencing current state. But then why does the LSTM algorithm keep state information for each and every sample (sequence) in the batch?

And if it is not true that it keeps past state information for each sample, then why does the stateful LSTM need to know the batch size?

I tried your code on Stateful LSTM for a One-Char to One-Char Mapping and the accuracy is surprisingly low during training (like ~30%). Is there anything wrong? However, I can reach 100% by changing the batch size to 25 and switching the iteration number to a much larger one (like 3000), but I guess that’s not your purpose in this one.
Also, if we manually reset the states after each epoch, how can the model remember things?

Hello Jason,
I’m wondering whether there is a relation between sequence_length and number_units!!
I’d imagine that number_units should be greater than (or at least equal) to sequence_length in order to make LSTM able to handle a sample of length = sequence_length without losing sequential information.
For example, if we have samples, each of length = 30 (from t0 to t29), then number_units=32 can be suitable because first unit handles t0, second unit handles t1 taking the first unit’s output into account, …. and unit 30 handles the last time step t29 taking all elapsed time steps (i.e. t0 to t28) into account. Thus, if we choose hereby e.g. number_units=20, then last ten time steps (i.e. t20 to t29) will not be handled by the LSTM, and thus the model will lose sequential information this way.
Is this correct?

Thank you.
Ok, for compilation and running the model there exists no relation. But for having a suitable model that can handle the entire expected sequential data, should num_units >= sequence_length? Because each LSTM unit handles a time step, and thus if num_units < sequence_length, the later time steps (t > num_units) will not be handled and their sequential information will be lost!!
Is this understanding correct?

This is a correction for mistyping:
…. Because each LSTM unit handles a time step, and thus if num_units < sequence_length, the later time steps (t > num_units) will not be handled and their sequential information will be lost!!
Is this understanding correct?

I apologize that my comments are always somehow corrupted when I submitted them 🙁
So, what I mean is whether my understanding is accurate or not:
For example, if we have samples, each of length = 30 (from t0 to t29), then number_units=32 can be suitable because first unit handles t0, second unit handles t1 taking the first unit’s output into account, …. and unit 30 handles the last time step t29 taking all elapsed time steps (i.e. t0 to t28) into account. Thus, if we choose hereby e.g. number_units=20, then last ten time steps (i.e. t20 to t29) will not be handled by the LSTM, and thus the model will lose sequential information this way.
Is my understanding correct?

Thanks for the tutorial. I tried your code with several other time series datasets, and I have the feeling that it is working “too well”. I’m not an expert in NNs, but I know ML well enough to say that there “might” be something wrong in my predictions – overfitting or similar (e.g. also on the dataset you use, the results I get are definitely better than yours, but this might depend on different versions of Keras, as well as different splitting of the data, etc…). So, I’m trying to figure things out.

My question is then about the stateful mode of the LSTM. Does it affect also the prediction? Or better, is the model stateful also in prediction mode?
For example, assume my test set is made of 100 time points (sequential examples), and I give them to the network one by one in the right sequence, as you do. Then, when the model predicts a value for the i-th example, is it storing or taking into account that, so far, the sequence has been of the examples from 1 to i-1? Does it preserve and use this historical information somehow for the i-th prediction?

PS: sorry if this question has already been asked. I tried to search the thread, but it is quite long.

Hi Jason, your posts have been amazing for helping me understand LSTM. I just have a question about stateful.

Lets assume the following setup:
time_steps = 10
batch_size = 5

This means that within each batch, I have 5 samples, each with 10 rows of sequential data (for a total of 50 rows).

I know that setting Stateful=True means that the hidden states are transferred from one batch to another, but how about within each batch?

According to the keras documentation, “If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.”

This sounds like the hidden state for the first sample of batch 1 gets used in the first sample of batch 2, and the hidden states for the 2nd sample of the first batch gets used in the 2nd sample of the second batch.

Does this mean that the hidden states don’t transfer between samples WITHIN a batch? i.e. the hidden states from the first timesteps of my batch don’t transfer to the second sample.

Under this assumption, it seems that my batch size should be 1 when working with time series data.

Actually in the “Stateful LSTM for a One-Char to One-Char Mapping” section, the network already shows signs of overfitting.

When you start the new seed ‘K’, the network predict the next character as ‘B’ instead of ‘L’. I haven’t tried this network myself yet, but I have the feeling that no matter what seed you feed in, it will always predict the next character as ‘B’.

First of all, thank you very much for your blogs. I benefited a lot from them, especially the ones about LSTMs. I appreciate your contributions very much.

However I have some difficulties while practicing what I have learned.

I try to predict device faults one day in advance by using alarm data. Let me summarize the data in short.

The train data is a 100*1000 array which includes only 0s and 1s. Rows represent the index of the day (100 sequential days).
The column dimension is the dictionary of the alarms, which means there are 1000 different kinds of alarms. If an alarm occurs, I record it as 1, otherwise as 0.
To be clearer, if an alarm (the one corresponding to column index 200) occurs on day i, then X[i,200] = 1.

As Y values (labels), let’s say I have an array sized (100*50). The Y values represent the device faults. There are 50 different kinds of device faults which I try to predict.
I keep the distribution of device faults per day like [0, 0, 0, 1, 0, 1, …], similar to the train data.

So I want to predict the device faults which occur on day 101 by using the first 100 days of alarm and device-fault data.
I think it is a kind of sequence-to-sequence prediction. For that kind of problem, should I use a stateful LSTM, additionally with return_sequences=True? And how should I shape the input and output data?
Or is designing 50 different LSTMs that make a binary classification for each device fault a better approach?

Hi Jason, I implemented your code for Stateful LSTM for a One-Char to One-Char Mapping but I was not able to achieve accuracy greater than 32%. Can you tell me what I might be doing wrong? Also, when we are training the model inside the ‘for’ loop, the statement “model.reset_states()” resets the model after training is done for the current epoch and before training for the next epoch, so in between the states will be reset and the current states of the LSTM will not be available for the next epoch, so it will not behave as a stateful LSTM. Can you tell me what I am assuming wrong?

Thanks for your tutorial, I have one question about the “time-step”. Assuming that I have a dataset with 100 samples in a sequence, and it has 10 features in each sample, then I have a 100*11 dataset (10 features, 1 dependent variable). If I set time-step=1 (as you used in https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/), does that mean that I only predict today’s info by using yesterday’s info? If that is the case, does that mean that I didn’t fully utilize the LSTM’s power, since an LSTM would be capable of utilizing longer information?
Thanks.

The “LSTM State Within A Batch” example is a bit misleading here as the samples within a batch are processed in parallel hence LSTM states for each sample are independent and do not really help from sample to sample in any sequential manner. Using shuffle=True in that example achieves the same result.

The main reason why it achieved 100% accuracy on training data is primarily due to the large number of epochs=5000.