How to Get Reproducible Results with Keras

Neural network algorithms are stochastic. This means they make use of randomness, such as initializing to random weights, and in turn the same network trained on the same data can produce different results.

This can be confusing to beginners, as the algorithm appears unstable; in fact, this behavior is by design. The random initialization allows the network to learn a good approximation of the function being learned.

Nevertheless, there are times when you need the exact same result every time the same network is trained on the same data, such as for a tutorial, or perhaps operationally.

In this tutorial, you will discover how you can seed the random number generator so that you can get the same results from the same network on the same data, every time.

Let’s get started.

Photo by Samuel John, some rights reserved.

Tutorial Overview

This tutorial is broken down into 6 parts. They are:

Why do I Get Different Results Every Time?

Demonstration of Different Results

The Solutions

Seed Random Numbers with the Theano Backend

Seed Random Numbers with the TensorFlow Backend

What if I Am Still Getting Different Results?

Environment

This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

This tutorial assumes you have Keras (v2.0.3+) installed with either the TensorFlow (v1.1.0+) or Theano (v0.9+) backend.

This tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

Why do I Get Different Results Every Time?

This is a common question I see from beginners to the field of neural networks and deep learning.

This misunderstanding may also come in the form of questions like:

How do I get stable results?

How do I get repeatable results?

What seed should I use?

Neural networks use randomness by design to ensure they effectively learn the function being approximated for the problem. Randomness is used because this class of machine learning algorithm performs better with it than without.

The most common form of randomness used in neural networks is the random initialization of the network weights. Randomness is used in other areas as well; here is a short list:

Randomness in Initialization, such as weights.

Randomness in Regularization, such as dropout.

Randomness in Layers, such as word embedding.

Randomness in Optimization, such as stochastic optimization.

These sources of randomness, and more, mean that when you run the exact same neural network algorithm on the exact same data, you are almost guaranteed to get different results.

Demonstration of Different Results

We can demonstrate the stochastic nature of neural networks with a small example.

In this section, we will develop a Multilayer Perceptron model to learn a short sequence of numbers increasing by 0.1 from 0.0 to 0.9. Given 0.0, the model must predict 0.1; given 0.1, the model must output 0.2; and so on.

The code to prepare the data is listed below.

# create sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs
df = DataFrame(sequence)
df = concat([df.shift(1), df], axis=1)
df.dropna(inplace=True)
# convert to MLP-friendly format
values = df.values
X, y = values[:, 0], values[:, 1]

We will use a network with 1 input, 10 neurons in the hidden layer, and 1 output. The network will use a mean squared error loss function and will be trained using the efficient ADAM algorithm.

The network needs about 1,000 epochs to solve this problem effectively, but we will only train it for 100 epochs. This is to ensure we get a model that makes errors when making predictions.

After the network is trained, we will make predictions on the dataset and print the mean squared error.

The code for the network is listed below.

# design network
model = Sequential()
model.add(Dense(10, input_dim=1))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# fit network
model.fit(X, y, epochs=100, batch_size=len(X), verbose=0)
# forecast
yhat = model.predict(X, verbose=0)
print(mean_squared_error(y, yhat[:, 0]))

In the example, we will create the network 10 times and print 10 different network scores.

The complete code listing is provided below.

from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import Dense
from sklearn.metrics import mean_squared_error

# fit MLP to dataset and print error
def fit_model(X, y):
    # design network
    model = Sequential()
    model.add(Dense(10, input_dim=1))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # fit network
    model.fit(X, y, epochs=100, batch_size=len(X), verbose=0)
    # forecast
    yhat = model.predict(X, verbose=0)
    print(mean_squared_error(y, yhat[:, 0]))

# create sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs
df = DataFrame(sequence)
df = concat([df.shift(1), df], axis=1)
df.dropna(inplace=True)
# convert to MLP-friendly format
values = df.values
X, y = values[:, 0], values[:, 1]

# repeat experiment
repeats = 10
for _ in range(repeats):
    fit_model(X, y)

Running the example will print a different mean squared error on each line.

Your specific results will differ. A sample output is provided below.


0.0282584265697

0.0457025913022

0.145698137198

0.0873461454407

0.0309397604521

0.046649185173

0.0958450337178

0.0130660263779

0.00625176026631

0.00296055161492

The Solutions

There are two main solutions.

Solution #1: Repeat Your Experiment

The traditional and practical way to address this problem is to run your network many times (30+) and use statistics to summarize the performance of your model, and compare your model to other models.

I strongly recommend this approach, but it is not always possible due to the very long training times of some models.
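As a sketch of this approach, the error scores collected from repeated runs (here, the ten sample values shown later in this tutorial; in practice, collect 30 or more) can be summarized with a mean and standard deviation:

```python
import numpy as np

# error scores collected from repeated runs (the sample values from this
# tutorial; in practice, collect 30 or more)
scores = [0.0282584265697, 0.0457025913022, 0.145698137198,
          0.0873461454407, 0.0309397604521, 0.046649185173,
          0.0958450337178, 0.0130660263779, 0.00625176026631,
          0.00296055161492]

# summarize performance across runs
mean_score = np.mean(scores)
std_score = np.std(scores)
print('MSE: %.4f (+/- %.4f)' % (mean_score, std_score))
```

These summary statistics, rather than any single run, are what you would report and use to compare models.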

Solution #2: Seed the Random Number Generator

The other solution is to use a fixed seed for the random number generator.

Random numbers are generated using a pseudo-random number generator. A random number generator is a mathematical function that will generate a long sequence of numbers that are random enough for general purpose use, such as in machine learning algorithms.

Random number generators require a seed to kick off the process, and it is common to use the current time in milliseconds as the default in most implementations. This is to ensure different sequences of random numbers are generated each time the code is run, by default.

This seed can also be specified with a specific number, such as “1”, to ensure that the same sequence of random numbers is generated each time the code is run.

The specific seed value does not matter as long as it stays the same for each run of your code.
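To illustrate, reseeding NumPy's generator with the same value replays the exact same sequence of numbers (the seed value of 1 here is arbitrary):

```python
from numpy.random import seed, rand

# a fixed seed makes the generator replay the same sequence
seed(1)
first = rand(3)

seed(1)  # same seed again
second = rand(3)

print((first == second).all())  # True: identical draws
```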

The specific way to set the random number generator differs depending on the backend, and we will look at how to do this in Theano and TensorFlow.

Seed Random Numbers with the Theano Backend

Generally, Keras gets its source of randomness from the NumPy random number generator.

For the most part, so does the Theano backend.

We can seed the NumPy random number generator by calling the seed() function from the random module, as follows:

from numpy.random import seed
seed(1)

The importing and calling of the seed function is best done at the top of your code file.

This is a best practice because it is possible that some randomness is used when various Keras or Theano (or other) libraries are imported as part of their initialization, even before they are directly used.

We can add the two lines to the top of our example above and run it two times.

You should see the same list of mean squared error values each time you run the code (perhaps with some minor variation due to floating-point precision on different machines).

Seed Random Numbers with the TensorFlow Backend

Keras does get its source of randomness from the NumPy random number generator, so this must be seeded regardless of whether you are using a Theano or TensorFlow backend.

It must be seeded by calling the seed() function at the top of the file before any other imports or other code.

from numpy.random import seed
seed(1)

In addition, TensorFlow has its own random number generator that must also be seeded by calling the set_random_seed() function immediately after the NumPy random number generator, as follows:

from tensorflow import set_random_seed
set_random_seed(2)

To be crystal clear, the top of your code file must have the following 4 lines before any others:

from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)

You can use the same seed for both, or different seeds. I don’t think it makes much difference as the sources of randomness feed into different processes.

Adding these 4 lines to the above example will allow the code to produce the same results every time it is run. You should see the same mean squared error values on every run (perhaps with some minor variation due to floating-point precision on different machines).

What if I Am Still Getting Different Results?

To reiterate, the most robust way to report results and compare models is to repeat your experiment many times (30+) and use summary statistics.

If this is not possible, you can get 100% repeatable results by seeding the random number generators used by your code. The solutions above should cover most situations, but not all.

What if you have followed the above instructions and still get different results from the same algorithm on the same data?

It is possible that there are other sources of randomness that you have not accounted for.

Randomness from a Third-Party Library

Perhaps your code is using an additional library with its own random number generator that must also be seeded.

Try cutting your code back to the minimum required (e.g. one data sample, one training epoch, etc.) and carefully read the API documentation in an effort to narrow down additional third-party libraries introducing randomness.
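One way to narrow things down is a small harness that runs a cut-down piece of code twice from the same seed and checks the outputs match exactly. The helper below is a hypothetical illustration (not from the tutorial), shown here with Python's built-in `random` module, a commonly overlooked source of randomness:

```python
import random

# hypothetical helper: run a candidate function twice from the same seed
# and check whether its outputs are identical
def is_reproducible(fn, seed_value=1):
    random.seed(seed_value)
    first = fn()
    random.seed(seed_value)
    second = fn()
    return first == second

# a function drawing from Python's built-in RNG is repeatable once seeded
print(is_reproducible(lambda: [random.random() for _ in range(5)]))  # True
```

The same pattern applies to any library: if you cannot make a minimal call reproducible in isolation, you have found your extra source of randomness.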

Randomness from Using the GPU

All of the above examples assume the code was run on a CPU.

It is possible that when using the GPU to train your models, the backend may be configured to use a sophisticated stack of GPU libraries, and that some of these may introduce their own source of randomness that you may or may not be able to account for.
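For the Theano backend on a GPU, deterministic convolution algorithms can reportedly be requested via the .theanorc configuration file. The fragment below is a hypothetical example; the exact flag names and their availability depend on your Theano version:

```
# hypothetical .theanorc fragment; flag names depend on the Theano version
[dnn.conv]
algo_bwd_filter = deterministic
algo_bwd_data = deterministic
```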

I’m using the TensorFlow backend and yes, everything is up to date. Freshly installed on Arch Linux at home. I stumbled upon the problem at work and want this to be fixed.

I tried the imdb_lstm example of Keras with fixed random seeds for NumPy and TensorFlow just as you described, using one model only, which was saved after compiling but before training.
I’m loading this model and training it again with, sadly, different results.
I can see, though, that at the start of both trainings the accuracies and losses are the same until sample 1440 of 25000, where they begin to differ slightly and stay so until the end, e.g. 0.7468 vs. 0.7482, which I wouldn’t consider critical. At the end, though, the validation accuracy differs by 0.05, which should not be normal; in this case 0.8056 vs. 0.7496.
I’m only training 1 epoch for this example.

I was also told that LSTMs are deterministic, as their equations don’t have any random part, and thus results should be reproducible.

I guess I will ask the guys of keras about this as it seems to be a deeper issue to me.

If you have another idea, let me know. Great site btw, I often stumbled upon your blog when I began learning machine learning 🙂

I have read your responses and there seems to be a problem with the TensorFlow backend. Is there a solution for that? I need to use the TensorFlow backend only and not the Theano backend. I also have a single LSTM layer network and am running on a server with 56 CPU cores and CentOS. Please help!

Most of the effort was using my GPU, a GeForce 1060,
but I checked at the end that everything worked with
the CPU as well. What I was trying to do was make the
“Nietzsche” LSTM example reproduce exactly. The source
for that is at

General remark: It is harder than it looks to get reproducibility,
but it does work. It’s harder because there are so many little
ways that some nondeterminism can sneak in on you, and because
there is no decent documentation for anything, and the info on
the web is copious, but disorganized, and frequently out of
date or just plain wrong. In the end there was no way to find
the last bug except the laborious process of repeatedly modifying
the code and adding print statements of critical state data to
find the place the divergence began. Crude, but after you’ve
tried to be smart about it and failed enough times, the only way.

Lessons learned:

(1)
It is indeed necessary to create a .theanorc (if it isn’t already
there) and add certain lines to it. That file on Windows is found at
C:\Users\yourloginname\.theanorc

If you try to create it in Windows Explorer, Windows will block
you because it doesn’t think “.theanorc” is a complete file name
(it’s only an extension). Create it as .theanorc.txt and then rename it
using a command shell.

It turned out I didn’t need the last one and I commented it out.
I imagine it is needed if you are using conv nets; I wasn’t.

(2)
You need to seed the random number generator FIRST THING:

import numpy as np
np.random.seed(any-constant-number)

You need to do this absolutely as early as possible.
You can’t do it before a “from __future__ …” import, but
do it right after that, in your __main__, i.e. the .py
file you run from the IDE or command line.

Don’t get clever about this and put it in your favorite
utility file and then import that early. I tried and it
wasn’t worth it; there are so many ways something can
get imported and mess with the RNG.

(3)
This was my last bug, and naturally therefore the dumbest:

There are two random number generators (at least) floating
around the Python world, numpy.random and Python’s native
random.py. The code posted at the URL above uses BOTH of
them: now one, now the other. Francois, I love you man, but…

I seeded one, but never noticed that there was another.
The unseeded one naturally caused the program to diverge.

Make sure you’re using one of them only, throughout.
If you are unsure what all your libraries might be doing,
I suppose you could just seed them both (haven’t tried).
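Seeding both generators, as suggested, can be sketched like this (assuming these are the only two RNGs in play):

```python
import random
import numpy as np

# seed both generators, since different code paths may draw from either
random.seed(1)
np.random.seed(1)

a = random.random()   # from Python's native RNG
b = np.random.rand()  # from NumPy's RNG

# reseeding both reproduces the exact same draws
random.seed(1)
np.random.seed(1)
print(a == random.random(), b == np.random.rand())  # True True
```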

(4)
Prepare for the siege: cut your program down before you begin.
Use a tiny dataset, fewer iterations, do whatever you can do
to reduce the runtime. You don’t need to run it for 10 minutes
to see that it’s going to diverge. When it works you can run
it once with the full original settings. Well, okay, twice,
to prove the point.

(5)
GPU and CPU give different results. This is natural.
Each is reproducible, but they are different.
The .theanorc settings and code changes (pinning RNG’s)
to get reproducibility are the same.

(6)
During training, Theano produces lines like these on the console:

76288/200287 [==========>.................] - ETA: 312s - loss: 2.2663

These will not reproduce exactly. This is not a problem; it
is just a real-time progress report, and the point at which
it decides to report can vary slightly if the run goes a little
faster or slower, but the run is not diverging. Often it does
report at the same number (here: 76288) and when that happens,
the reported loss will be the same. The ETA values (estimated time
remaining to finish the epoch) will always vary a little.

(7)
How exact is exact? If you’ve got it right, it’s exact
to the last bloody digit. Close may be good enough for
your purposes, but true reproducibility is exact. If the
final loss value looks pretty close, but doesn’t match exactly,
you have not reproduced the run. If you go back and look at
loss values in the middle of the run, you are apt to find
they were all over the place, and you just got lucky that
the final values ended up close.

I forgot one crucial trick. The best thing to monitor, to see
if it is diverging, is the sequence of loss values during training.
Using the console output that model.fit() normally produces has
the issues I mentioned above under point 6. Using a LossHistory
callback is way better. See “Example: recording loss history”
at https://keras.io/callbacks/#example-recording-loss-history
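A minimal sketch of that callback pattern, mirroring the keras.io "recording loss history" example (in real code you would subclass keras.callbacks.Callback rather than object):

```python
# sketch of the keras.io "recording loss history" callback; in real code,
# subclass keras.callbacks.Callback instead of object
class LossHistory(object):
    def on_train_begin(self, logs=None):
        self.losses = []

    def on_batch_end(self, batch, logs=None):
        self.losses.append(logs.get('loss'))

# Keras calls these hooks during fit(); simulated here for illustration
history = LossHistory()
history.on_train_begin()
history.on_batch_end(0, {'loss': 0.5})
history.on_batch_end(1, {'loss': 0.4})
print(history.losses)  # [0.5, 0.4]
```

With a real model, you would pass it to training via model.fit(X, y, callbacks=[history], verbose=0) and compare history.losses across runs to spot exactly where two runs diverge.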

Jason, thanks for this blog page, I don’t know how much I have
added to it, but without it I wouldn’t have succeeded.
/jim

I’ve seen it written that requiring deterministic execution will slow down execution by as much as two times.

When I timed the LSTM setup described above, on GPU, the difference was negligible: 0.07%, or 5 seconds out of 6,756.

It may depend on what kind of net you are running, but my example above was unaffected.

That makes it acceptable to use deterministic execution by default. (Just remember to re-fiddle the random number generator seed if you actually want a number of different runs, e.g. to average metrics.)
/jim

Hi Jason,
Thank you for this helpful tutorial, but I still have a question!

If we are building a model and we want to test the effect of some changes (changing the input vector, some activation function, the optimizer, etc.), and we want to know if they are really enhancing the model, do you think it makes sense to mix the two approaches mentioned?
In other words, we repeat the execution 30+ times, each time generating a random integer as a seed and saving its value.
At the end, we calculate the average accuracy and recover the seed value that produced the score closest to this average, and we use this seed value for our next experiments.

Do you agree that in this scenario we get the “most representative” random values, which could be usable and reliable in the tuning phase?