Here, instead of the build option, we use the image option, which targets an existing image on the Docker registry.

We also use volumes_from to load all the volumes of another container: on the Nginx container to load the application’s static directory and serve it, and on the PostgreSQL container to load the persistent tablespace that lives in the data container.
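As a rough sketch, this is what those service definitions might look like in the v2 compose format (the service names, image tags and env file name are assumptions; note that volumes_from only exists in v2, as a commenter points out below):

```yaml
version: '2'
services:
  nginx:
    image: nginx:latest          # pulled from the registry instead of built
    ports:
      - "80:80"
    volumes_from:
      - web                      # gives Nginx access to the app’s static volume

  postgres:
    image: postgres:latest
    env_file: .env
    volumes_from:
      - data                     # the persistent tablespace lives in the data container
```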

• We create a standard redis container, using the latest version of the official image and exposing the Redis port.
• On the data container, we use true (a command that does nothing and exits immediately) as the command, since we just want this container to hold the PostgreSQL tablespace.
• Using a data container is the recommended way to manage data persistence: with it, we don’t risk any accidental deletion during, for example, an upgrade of the PostgreSQL container (which would wipe all the data stored inside that container). A sketch of the redis and data services follows this list.
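Continuing the services section of the same v2 compose file, a minimal sketch of these two definitions (the image tags and volume path are assumptions):

```yaml
  redis:
    image: redis:latest          # latest official image
    expose:
      - "6379"                   # the default Redis port

  data:
    image: postgres:latest       # reuse the postgres image so paths and permissions match
    volumes:
      - /var/lib/postgresql/data # anonymous volume holding the tablespace
    command: "true"              # exits immediately; the container only exists to hold the volume
```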

The PostgreSQL container


First, we will configure the PostgreSQL container to initialize itself. By default, the Postgres image runs all the scripts found in the /docker-entrypoint-initdb.d directory, so let’s create a simple script that creates a user and a database using the information from the env file:
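A minimal sketch of such a script, assuming the env file defines DB_USER, DB_PASS and DB_NAME (the variable names and the script name are assumptions):

```sh
#!/bin/sh
# init_db.sh — dropped into /docker-entrypoint-initdb.d, run once by the
# Postgres entrypoint when the tablespace is first initialized.
psql -U postgres -c "CREATE USER $DB_USER WITH PASSWORD '$DB_PASS';"
psql -U postgres -c "CREATE DATABASE $DB_NAME OWNER $DB_USER;"
```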

The web application container

The easy way would be to just use an official Python image, which is based on Debian, but the resulting image will be quite big (mine was around 900 MB).

We want a very light front image, so we can easily and rapidly upgrade it when we change code, and we want to reduce the attack surface exposed to the outside as much as possible. For this, we will base our image on Alpine Linux, a distribution specialised in small images and very common in the Docker world.

Now let’s create our custom Alpine image for the Django application: we will first run an interactive session to create the project, and then we will write the Dockerfile.

But first, fill web/requirements.txt with all the Python modules we need:
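A plausible minimal list; the exact packages (and whether you pin versions) are assumptions based on what this setup uses:

```
Django
psycopg2       # PostgreSQL driver
django-redis   # Redis cache backend for Django
gunicorn       # WSGI server for the web container
```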

We’ll spin up a throwaway container to create our Django application; note that we do this so we don’t have to install Python and its dependencies on our host system. Let’s start an Alpine instance interactively:
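A sketch of that interactive session (the image tag and mount path are assumptions; mydjango matches the project name used below):

```sh
# Start a throwaway Python-on-Alpine container with web/ mounted inside it
docker run -it --rm -v "$PWD/web:/usr/src/app" -w /usr/src/app python:alpine sh

# Then, inside the container:
pip install -r requirements.txt
django-admin startproject mydjango .
```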

Let’s populate the configuration file (mydjango/settings.py) with parameters that use the information Docker provides inside the containers: remove everything between the “DATABASE” part and the “INTERNATIONALIZATION” part and replace it with this:
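A sketch of that replacement block; the DB_* variable names, the postgres/redis host names and the django-redis backend are assumptions consistent with the rest of this setup:

```python
import os

if os.environ.get('DB_NAME'):
    # Running under docker-compose: connect to the postgres service
    # using the credentials from the env file.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': os.environ['DB_NAME'],
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASS'],
            'HOST': 'postgres',  # service name resolved by Docker DNS
            'PORT': 5432,
        }
    }
else:
    # Image build time: no database around, so fall back to a dummy
    # backend rather than blocking commands like compilemessages.
    DATABASES = {'default': {'ENGINE': 'django.db.backends.dummy'}}

# Cache: 'redis' is the service name from docker-compose.
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://redis:6379/0',
    }
}
```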

Once done, just leave the container with exit (the --rm flag used when creating the instance destroys it on exit; the application is saved in the mounted volume).

Here we configure the Docker database if we find the environment variables provided by the env file, and use that information to connect. If they are absent, it’s because we are building the image, and we don’t want the missing database to block certain commands (for example, compiling the gettext translations of your website).

For the Redis cache, we simply target the redis DNS name, which will resolve once the stack has been deployed with docker-compose.

Now that we have our little Django application, let’s put everything we need into the Dockerfile of the web container:
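A sketch of web/Dockerfile under the assumptions above (Alpine base, gunicorn as the WSGI server; the exact package names may need adjusting):

```dockerfile
FROM python:alpine

WORKDIR /usr/src/app
COPY requirements.txt .

# postgresql-libs stays in the image; the build toolchain needed to
# compile psycopg2 is removed afterwards to keep the image small.
RUN apk add --no-cache postgresql-libs \
 && apk add --no-cache --virtual .build-deps gcc musl-dev postgresql-dev \
 && pip install --no-cache-dir -r requirements.txt \
 && apk del .build-deps

COPY . .

EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "mydjango.wsgi"]
```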

The “restarting” status on the data container is normal, as it does not run a persistent process.

If everything is fine, browse to your public IP and you should see the default Django page (it will only appear if the application can successfully connect to the database and load the cache engine).

The backups

Then, if everything went fine, we can set up backups for the PostgreSQL database. Here we just created a small script that runs every night and uploads a dump to S3 (to a bucket with versioning enabled and a lifecycle policy):
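A sketch of such a script, using the AWS CLI for the upload; the container name, bucket name and DB_* variables are assumptions:

```sh
#!/bin/sh
set -e
BUCKET=my-backup-bucket
STAMP=$(date +%Y-%m-%d)

# Dump the database from inside the running postgres container
# (local connections are trusted by default in the official image).
docker exec myproject_postgres_1 pg_dump -U "$DB_USER" "$DB_NAME" \
    | gzip > "/tmp/backup-$STAMP.sql.gz"

# Versioning and the lifecycle policy on the bucket handle retention.
aws s3 cp "/tmp/backup-$STAMP.sql.gz" "s3://$BUCKET/postgres/"
rm "/tmp/backup-$STAMP.sql.gz"
```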

Don’t forget to install the boto3 pip module and the AWS CLI, and to configure your credentials with aws configure.

Then, just configure a cron job to schedule it when you want.
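For example, to run it every night at 3 a.m. (the script path is hypothetical):

```
0 3 * * * /opt/scripts/pg_backup.sh >> /var/log/pg_backup.log 2>&1
```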

The end

You now have a fully working docker-compose environment 🙂 When you make a modification, you just need to run docker-compose build; docker-compose up -d and everything will be updated automatically.

Hi, thanks for the great article. It was quite useful for me even two years after it was written.
I just wanted to say that it isn’t made clear which version of the docker-compose format this article was written for, at the point where the docker-compose.yml file is first filled in. I tried to run it with v3 but it failed, as `volumes_from` has been removed in v3 and something else (a top-level `volumes` key) has been introduced, which is not quite the same thing. So to run exactly this setup, we need v2.
Also, an aggregated docker-compose.yml file including everything at the end of the article would be much appreciated.

This downtime will not be between the ‘build’ and the ‘up’. You can run ‘build’ any number of times without affecting your already running service.

But if the image has changed, when you run ‘docker-compose up -d’ it will restart the service(s) affected. You will be without service while:
– The previous container stops. Some processes will take a while to stop.
– The process in the new container starts. Again, it can take a while depending on your setup.

Hi Diogo,
thanks for your comment!
As we don’t know your particular case (what you’ve tried, the errors that it is showing…) we’re not able to give you a proper answer.
Maybe you could try with Docker’s support channels. 🙂

Thanks for commenting! It is quite hard to troubleshoot your particular problem since we don’t have the overall view of your project layout, but taking into account the error you get (“web_1 | ImportError: No module named ‘mydjango.wsgi’”), the problem might be related to some wrong path/filename/module name in your scripts or code.

I had the same problem, but added a line in the docker-compose.yml under the web entry to make sure the working_dir was where my manage.py file was sitting. Maybe it has something to do with the newer version of Django. So, now my docker-compose.yml file has the following line:
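(For illustration only; the path here is hypothetical and should be whatever directory contains manage.py:)

```yaml
working_dir: /usr/src/app  # hypothetical path: the directory containing manage.py
```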