DevOps Stack Exchange is a question and answer site for software engineers working on automated testing, continuous delivery, service integration and monitoring, and building SDLC infrastructure.

I am familiarizing myself with the architecture and practices used to package, build, and deploy software, or at least small pieces of software.

If at any point I mix up concepts with specific tools (sometimes that is unavoidable), please let me know where I am wrong.

Along the way, I have been reading and learning about the terms image and container and the relationship between them, in order to start building software workflows in the best possible way.

And I have a question about service orchestration in the context of Docker:

Containers are lightweight and portable encapsulations of an environment containing all the binaries and dependencies we need to run our application. OK.

I can set up communication between containers using container links (the --link flag).
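As a minimal sketch of that flag (note that --link is a legacy feature, now deprecated in favor of user-defined networks; the web image name here is hypothetical):

```shell
# Start a database container, then link a second container to it.
docker run -d --name db postgres:15
docker run -d --name web --link db:db my-web-app   # "my-web-app" is a placeholder image

# Inside the "web" container, the hostname "db" now resolves
# to the database container, and its env vars are injected.
```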

I can replace the use of container links with docker-compose in order to automate my service workflow, running multiple containers from a .yaml configuration file.
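A minimal Compose file sketch showing that idea (the web image name is a placeholder; the db service uses a stock Postgres image):

```yaml
# docker-compose.yml
version: "3.8"
services:
  web:
    image: my-web-app          # hypothetical application image
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

With this file in place, `docker-compose up -d` starts both containers on a shared network where `web` can reach the database at the hostname `db`, with no --link flags needed.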

And I am reading about the term container orchestration, which describes the relationship between containers when we have distinct pieces of software separated from each other, and how those containers interact as a system.

Well, I suppose that I've read the documentation correctly :P

My question is:

At the Docker level, are container links and docker-compose a form of container orchestration?

Or, with Docker, if I want to do container orchestration, should I use Docker Swarm?

Consider microservices for a moment. Each service should be independently deployable and independently scalable across a cluster of VMs. Where do plain Docker and Docker Compose fit in? They don't compete with distributed orchestrators like OpenShift/Kubernetes, Cloud Foundry, or Docker Swarm.

A microservices platform that scales out and tolerates faults needs a cluster of servers with internal load balancing, where traffic to each service is balanced across many replicas. The idea of linking two containers doesn't make sense in a microservices world: each container should connect to a logical service that balances traffic across many pods, which are scaled independently of other services. So study the templates for how large-scale apps run across a whole cluster of VMs with something like OKD/Kubernetes.
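A sketch of that "logical service in front of replicas" idea in Kubernetes terms (the service name and image are hypothetical):

```yaml
# A Deployment keeps 3 replicas of the service running;
# the Service load-balances traffic across them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: my-orders-service   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```

Other pods connect to the stable name `orders`, never to an individual container, which is exactly why container links have no place here.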

So where does Docker Compose fit in if you are using a full orchestrator? I don't think it does if you are using something like Kubernetes. Developers can run Minikube or Minishift on their laptops. Better yet, one "not production" cluster can run a separate namespace for every developer, so each has a personal or shared sandbox to develop in. You can also set up one namespace per test environment on that same cluster, simplifying your management. (Production can be in a dedicated cluster.)
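The per-developer and per-environment namespace setup can be sketched with a couple of kubectl commands (the namespace names here are just examples):

```shell
# One shared cluster, one namespace per developer and per test stage.
kubectl create namespace alice     # personal sandbox for developer "alice"
kubectl create namespace qa        # one namespace per test environment

# Make your own namespace the default for subsequent kubectl commands:
kubectl config set-context --current --namespace=alice
```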

Note that Docker isn’t the only container runtime these days, and IMHO it is unlikely to remain the default one for orchestrators other than Swarm in the future. For example, Kubernetes now uses a standard API (the Container Runtime Interface) that lets you plug in alternatives, and runtimes that focus on security and footprint are likely to become the default.

Update: There are some special cases where running two images side by side in the same pod (sharing a network namespace) makes sense. One is the “sidecar pattern”. Rather than mixing business services together, you run a technical service image alongside every business service image. An example of that is a service mesh that puts a smart proxy next to every container. Each business service connects only to the smart proxy in its own pod, which forwards traffic to the next business service. The smart proxy can then handle things like service discovery, load balancing, circuit breaking, load shedding, protocol upgrades, and mutual TLS between all business services.
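A sketch of the sidecar pattern as a Kubernetes pod spec, assuming a hypothetical business image and an Envoy-style proxy image as the sidecar:

```yaml
# One pod, two containers sharing a network namespace:
# the business service and its proxy sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
spec:
  containers:
    - name: orders
      image: my-orders-service     # hypothetical business image
      ports:
        - containerPort: 8080
    - name: proxy                  # sidecar: reachable by "orders" on localhost
      image: envoyproxy/envoy:v1.28-latest
      ports:
        - containerPort: 15001
```

Because both containers share the pod's network namespace, the business service simply talks to the proxy on localhost, and the proxy handles everything beyond the pod boundary.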