Introduction

You’ve been editing the same Compose file for this entire tutorial. Well, we
have good news. That Compose file works just as well in production as it does
on your machine. In this section, we will go through some options for running your
Dockerized application.

Choose an option

Customers of Docker Enterprise Edition run a stable, commercially supported
version of Docker Engine, and as an add-on they get our first-class management
software, Docker Datacenter. You can manage every aspect of your application
through the Universal Control Plane interface, run a private image registry with
Docker Trusted Registry, integrate with your LDAP provider, sign production
images with Docker Content Trust, and much more.

Bringing your own server to Docker Enterprise and setting up Docker Datacenter
essentially involves two steps: get Docker Enterprise running on your server,
then deploy your stack to it with the same docker stack deploy command you
used locally.

Open ports to services on cloud provider machines

At this point, your app is deployed as a swarm on your cloud provider servers,
as evidenced by the docker commands you just ran. But you still need to
open ports on your cloud servers in order to:

- allow communication between the redis service and the web service if you are using multiple nodes

- allow inbound traffic to the web service on any worker nodes so that Hello World and Visualizer are accessible from a web browser

- allow inbound SSH traffic on the server that is running the manager (this may already be set up by your cloud provider)

These are the ports you need to expose for each service:

Service      Type    Protocol    Port
web          HTTP    TCP        80
visualizer   HTTP    TCP        8080
redis        TCP     TCP        6379

Methods for doing this vary depending on your cloud provider. We use Amazon
Web Services (AWS) as an example.
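On AWS, opening these ports means adding inbound rules to the security group attached to your swarm nodes. A minimal sketch using the AWS CLI, with a hypothetical security group ID (substitute your own):

```shell
# Hypothetical security group ID -- substitute the one attached to your nodes.
SG_ID=sg-0123456789abcdef0

# Allow HTTP to the web service from anywhere.
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0

# Allow the visualizer on 8080 from anywhere.
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 8080 --cidr 0.0.0.0/0

# Allow redis traffic on 6379 only between members of the same
# security group, i.e. between swarm nodes.
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 6379 --source-group "$SG_ID"
```

Scoping the redis rule to the security group itself keeps port 6379 reachable from other swarm nodes but not from the public internet.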

What about the redis service to persist data?

To get the redis service working, you need to SSH into the cloud server where
the manager is running and create a data/ directory in /home/docker/ before
you run docker stack deploy. Another option is to change the data path in
docker-stack.yml to a pre-existing path on the manager server. This example
does not include this step, so the redis service is not up in the example
output.
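That setup step might look like the following, assuming the docker user from Docker Machine and a hypothetical manager IP (substitute your manager node's public IP):

```shell
# Hypothetical manager IP -- use the public IP of your own manager node.
MANAGER_IP=203.0.113.10

# Create the directory that the redis service mounts for persistence.
ssh docker@"$MANAGER_IP" 'mkdir -p /home/docker/data'

# Then deploy the stack from the manager as usual.
ssh docker@"$MANAGER_IP" 'docker stack deploy -c docker-stack.yml getstartedlab'
```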

Iteration and cleanup

From here you can do everything you learned about in previous parts of the
tutorial.

Scale the app by changing the docker-compose.yml file and redeploying it on
the fly with the docker stack deploy command.
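For example, after raising a service's replicas value in docker-compose.yml, the same deploy command applies the change in place (stack name from the earlier example):

```shell
# Re-running deploy against a running stack updates it rather than
# recreating it; the swarm converges on the new desired state.
docker stack deploy -c docker-compose.yml getstartedlab
```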

Change the app behavior by editing code, then rebuild and push the new image.
(To do this, follow the same steps you took earlier to build the app and
publish the image.)
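That cycle looks roughly like this, with a hypothetical username/repo:tag image name (use the one you pushed earlier):

```shell
# Rebuild the image from the app's directory.
docker build -t username/repo:tag .

# Push it so your cloud nodes can pull the update.
docker push username/repo:tag

# Redeploy; services referencing the tag pull the new image.
docker stack deploy -c docker-compose.yml getstartedlab
```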

You can tear down the stack with docker stack rm. For example:

docker stack rm getstartedlab

Unlike the scenario where you were running the swarm on local Docker Machine
VMs, your swarm and any apps deployed on it continue to run on your cloud
servers regardless of whether you shut down your local host.

Congratulations!

You’ve taken a full-stack, dev-to-deploy tour of the entire Docker platform.

There is much more to the Docker platform than what was covered here, but you
have a good idea of the basics of containers, images, services, swarms, stacks,
scaling, load-balancing, volumes, and placement constraints.

Want to go deeper? Here are some resources we recommend:

Samples: Our samples include multiple examples of popular software
running in containers, and some good labs that teach best practices.

User Guide: The user guide has several examples that
explain networking and storage in greater depth than was covered here.

Admin Guide: Covers how to manage a Dockerized production
environment.