DevOps Stack Exchange is a question and answer site for software engineers working on automated testing, continuous delivery, service integration and monitoring, and building SDLC infrastructure.

I'm creating a webapp with a Python backend, Nginx as the server, and Vue as my frontend framework. When deploying to production, I build an Nginx container that, at build time, compiles all my frontend code into minified static files it can serve (takes ~2 minutes) and, at run time, serves those static files and routes non-static-file HTTP requests to a separate API container running the Python app. (So two containers: Nginx, and the API server.)

The problem I'm facing is that during development I want to use a third container running a hot-reloading webpack-dev-server for the frontend files, instead of having to wait two whole minutes for webpack to rebuild the frontend bundle every time I make a change. For reference, building the frontend code into servable static files takes around two minutes, whereas the webpack-dev-server can reflect changes in under three seconds.

My idea is to have a production Nginx config that handles the production case of serving static content directly from the Nginx container, and a dev Nginx config that handles the dev case of routing those static requests to the webpack-dev-server container. But having different docker-compose files for different environments seems to go against Docker philosophy. Am I approaching this incorrectly?

1 Answer

In outline: keep your common base configuration in a docker-compose.yml file, put production-specific services and settings in docker-compose.prod.yml, and put the settings for other specific environments (such as dev) in docker-compose.override.yml. Compose layers these files when you pass multiple -f flags (and applies docker-compose.override.yml automatically when no -f flags are given). Separately, the `extends` keyword in a Compose file lets a service inherit configuration from a service defined in another Compose file.

The above approach treats only the things common across all environments as the pure base setup; the dev and prod environments then build on (inherit from) that base.

Your dev-specific webpack setup could be configured in the docker-compose.override.yml file, which layers on top of the base docker-compose.yml. Configuration that is common across all environments would reside in docker-compose.yml. Your production setup could be in docker-compose.prod.yml, which again layers on top of the base docker-compose.yml.
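As a minimal sketch (the service names, build paths, and ports here are assumptions, not taken from your project), the base file might declare the Nginx and API services, while the override file adds the webpack-dev-server and swaps in a dev Nginx config:

```yaml
# docker-compose.yml -- common to all environments
services:
  api:
    build: ./api            # hypothetical path to the Python app
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - api

# docker-compose.override.yml -- applied automatically by `docker compose up`
services:
  nginx:
    volumes:
      - ./nginx/dev.conf:/etc/nginx/conf.d/default.conf:ro
  webpack:
    build: ./frontend       # hypothetical dev-server image
    command: npm run serve
    volumes:
      - ./frontend/src:/app/src   # mount sources so hot reload sees edits
```

In development, a plain `docker compose up` merges docker-compose.yml with docker-compose.override.yml automatically; for production you would run something like `docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d`, which skips the override file entirely.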

References

This seems to be the best-practice approach; the official Docker Compose documentation on using multiple Compose files describes exactly this base-plus-override layout.

Some things to consider for your specific case

Some bespoke glue code may still be required in your deployment itself (e.g. bash scripts), should the above approach provide most of what you need but not quite everything. Such scripts can be run from Docker Compose.
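For example (a sketch; the service name and script path are assumptions), a one-off helper service can wrap such a script and be invoked explicitly rather than started with the rest of the stack:

```yaml
# docker-compose.override.yml (fragment)
services:
  seed-db:
    image: python:3.12-slim
    volumes:
      - ./scripts:/scripts:ro
    entrypoint: ["bash", "/scripts/seed_dev_data.sh"]  # hypothetical script
    profiles: ["tools"]   # excluded from a plain `docker compose up`
```

Running `docker compose run --rm seed-db` executes the script once and removes the container; the `profiles` key keeps the helper out of the normal `up` lifecycle.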

Research what can be overridden in docker-compose.override.yml when "inheriting" from docker-compose.yml. Your particular project may require the dev and prod environments to each override the same item in the base, hence the need to override what is in the "parent"/base docker-compose.yml. Consider this issue: https://github.com/docker/compose/issues/3729, which suggests there are limitations on what can be overridden, particularly if you want to remove or move a configuration item that was present in the parent docker-compose.yml. A better way could be to keep only the items common across all environments in docker-compose.yml and put the specifics in files for specific environments, such as docker-compose.prod.yml.