DevOps Stack Exchange is a question and answer site for software engineers working on automated testing, continuous delivery, service integration and monitoring, and building SDLC infrastructure.

We have adopted Docker and Terraform as the foundation for our DevOps.
We have a basic version of it up and running today.

Dockerfiles live in each project's git repository, and Terraform files
are more centralized.

The number of projects we manage is increasing, as are the number of environments and the data-seeding requirements. To make things more interesting, we are also
moving toward splitting services up into microservices where that makes sense.

What infrastructure should each deployment use? For example, we want the prod infrastructure to be available for developers to spin up in the dev cloud for debugging and testing, along with several other combinations.

There are a few other parameters we are considering, and all in all the
number of possible permutations becomes very high.

There is no way we can cleanly keep that many Dockerfiles in order.

So I am looking for a tool that helps us keep track of all the different configurations and makes it easy to find them.

Ideally, inside my head, I am picturing a wizard-based web app, where you enter in the parameters and it executes the necessary scripts automagically.
This would also contain logs of what has been done, what environments are running and a few other things.

So far I haven't had much luck.
A lot of companies must have solved this already, so either I am thinking about it
wrong, or there is a big opportunity here.

I don't get why you would need more Dockerfiles than projects. Each project should build its container and release it as an artifact, and then you deploy it with the needed services (each also an artifact) in a specific environment, usually with a pipeline to promote from one environment to another.
– Tensibai♦Sep 3 '18 at 15:15

But if you want to keep a file-based "all in one" approach, I assume using templates is the way to go; how to write those templates is highly dependent on your needs and on the common ground between projects and infrastructures.
– Tensibai♦Sep 3 '18 at 15:17

Each project can be deployed to any of, let's say, 5 different destinations, where each destination can have a different "hardware" setup and require different data seeding.
– Development 4.0Sep 3 '18 at 18:52


So the main process is the same, and you just need different variables for those environments?
– Tensibai♦Sep 4 '18 at 7:25

Nice to see you're using containers to deploy your IaC/Terraform code. I think it's more about how you organize that IaC that will help you locate it — for example, using a combination of repos per AWS account and extensive use of Terraform modules. Also, the way you organize your CI/CD pipelines will help visually too. The way we do it is documented here: docs.cloudposse.com/reference-architectures/introduction
– Erik OstermanSep 10 '18 at 2:52

1 Answer

So let's imagine you have 20 microservices, each in a separate repository. Each microservice needs to be deployed into 5+ environments: dev local, dev cloud, test, staging, prod. Understandably, all configuration will be different in each environment. However, you don't need to put it into the Dockerfile — in fact, you don't put any configuration into the Dockerfile at all. Your app gets all required parameters from environment variables, or maybe from a config file if it gets too complex.
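To make this concrete, here is a minimal sketch of such a configuration-free Dockerfile (assuming, purely for illustration, a Node.js service — the base image and file names are hypothetical, not from the question):

```dockerfile
# Hypothetical sketch: the image contains only the build artifact.
# No environment-specific configuration is baked in.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Settings such as DATABASE_URL or LOG_LEVEL are injected as
# environment variables at run time, never written into the image.
CMD ["node", "server.js"]
```

The same image can then be promoted unchanged from dev through to prod; only the environment it runs in changes.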

So you would have one Dockerfile per project, and if your developers need to spin it up locally, they will just use docker-compose, which has all environment variables set up and maybe a config file mounted as a volume. In staging environments the configuration will be totally different, but the Docker image will be the same. Your app will need to be able to read this configuration at start-up time and bail out early if something doesn't work.
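A local dev setup along those lines might look like this (a hedged sketch — the service names, image tag, variables, and file paths are all made up for illustration):

```yaml
# Hypothetical docker-compose.yml for local development.
# The app image is the same artifact used in staging/prod;
# only the environment differs.
version: "3.8"
services:
  app:
    image: myorg/myservice:latest
    environment:
      DATABASE_URL: postgres://db:5432/app_dev
      LOG_LEVEL: debug
    volumes:
      - ./config/dev.yml:/etc/app/config.yml:ro  # optional config file
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev-only-password
```

In staging or prod you would supply the same variables through your orchestrator or CI/CD system instead of a compose file.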

Now, provisioning the infrastructure with Terraform would follow the same approach. You would have a generic module which provisions a set of resources, and it would accept a bunch of parameters that depend on the environment. You can use workspaces to manage this quite easily.
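As a sketch of that pattern (the module name, variables, and instance types below are assumptions for illustration, not part of the question):

```hcl
# Hypothetical: one generic module, parameterized per environment.
# The current workspace (dev, staging, prod, ...) selects the values.
variable "instance_type_by_env" {
  type = map(string)
  default = {
    dev     = "t3.micro"
    staging = "t3.small"
    prod    = "m5.large"
  }
}

module "service" {
  source        = "./modules/service"
  environment   = terraform.workspace
  instance_type = var.instance_type_by_env[terraform.workspace]
}
```

Switching environments is then just `terraform workspace select prod` followed by `terraform apply`, with the same code and a different state per workspace.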

As for the last question, large companies solve this with a CI/CD system. This can be Jenkins, or Spinnaker, or any of the other tools, whichever you prefer.