About Service life-cycle management

Orchestration engine

Cloud 66 provides an orchestration engine that rolls out Docker images to your servers and initializes containers from them. Here is what it handles for you, from start to finish:

Bring up your containers

Monitoring

Scaling

Port forwarding

Load balancing

Health checks

Graceful draining and shutdown of workers

Traffic switching

Deployment rollbacks (version control)

Deploying your stack

All of the above can be summarized as the life-cycle management of your containers, which takes place with each new deployment of your application. For example, if you have a simple stack running api and web services, this is what happens to your containers when you redeploy:

Your latest code is pulled from Git and new images are built (on BuildGrid).

These images are rolled out to your Docker cluster.

Containers are initialized from these images, with all relevant environment variables and internal networking made available to them.

If and when your health checks are successful, your old containers are gracefully drained and traffic is switched to the new containers (on the specified port(s)).
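As a sketch, the service definitions for such a stack could look something like the following in your service configuration. The Git URL, branch and port numbers here are placeholder assumptions for illustration, not values from this guide; consult the service configuration reference for the full syntax.

```yaml
services:
  web:
    # Placeholder repository; BuildGrid builds an image from this source.
    git_url: https://github.com/example/app.git
    git_branch: main
    ports:
      # Traffic on port 80 is forwarded to port 3000 inside the container.
      - container: 3000
        http: 80
  api:
    git_url: https://github.com/example/app.git
    git_branch: main
    ports:
      - container: 4000
        http: 8080
```

On each redeploy, new images are built for these services and the steps above are applied to the resulting containers.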

Configuration

There are a number of directives you can set in your service configuration to customize your container life-cycle management:

Health

The health option allows you to specify one of two types of checks on your containers: readiness checks and liveness checks. Both define a set of rules used to determine whether your application is currently healthy. For instance, you can check that the application is responding on an HTTP endpoint, or that a post-initialization file is present.

Readiness health checks determine whether your newly started containers are ready to replace the old ones. Until the new containers are ready, the old containers are not killed and the new containers are not served traffic. This effectively provides zero-downtime deployments.

Liveness health checks, on the other hand, continuously monitor your application once it is already running. If your application fails a liveness check, its container is restarted; this is useful for issues that cannot be resolved any other way.

The rules below are available to both types of health check. Note that you aren't required to specify every option; any option you omit uses its default value.

You can also use the default health rules with health: default, or disable health checking entirely by leaving the health option out or specifying health: none.
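For example, to apply the default rules explicitly (the service name web is a placeholder):

```yaml
services:
  web:
    # Use the built-in default readiness/liveness rules.
    health: default
```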

Pre-start signal

This is a signal that is sent to the existing containers of a service before the new containers are started during deployment. An example could be USR1, but which signals make sense depends on what your container is running.

services:
  [service_name]:
    pre_start_signal: USR1

Pre-stop sequence

This is a stop sequence that is executed on your running containers before they are shut down: a series of wait times and signals to send to the process. If the sequence completes and the container is still running, a force kill is sent. For example:

services:
  [service_name]:
    pre_stop_sequence: 1m:USR2:30s:USR1:50s

In the example above, we wait 1 minute before sending the USR2 signal, then 30 seconds before sending the USR1 signal, and then 50 seconds before forcing a kill. Duration values for stop_grace and pre_stop_sequence take the same form, for example 30s (30 seconds), 1m (1 minute) and 1h (1 hour).
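To make the sequence format concrete, here is a short illustrative sketch (not Cloud 66 code) that interprets a pre-stop sequence such as 1m:USR2:30s:USR1:50s as a list of (wait, signal) steps, where a trailing duration with no signal after it means "wait, then force kill":

```python
import re

# Seconds per duration unit: s (seconds), m (minutes), h (hours).
UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_duration(token):
    """Convert a duration token such as '30s', '1m' or '1h' to seconds."""
    match = re.fullmatch(r"(\d+)([smh])", token)
    if not match:
        raise ValueError(f"bad duration: {token!r}")
    return int(match.group(1)) * UNITS[match.group(2)]

def parse_pre_stop_sequence(sequence):
    """Return a list of (seconds_to_wait, signal_name) steps.

    The sequence alternates durations and signal names; a trailing
    duration with no following signal means "wait, then force kill".
    """
    tokens = sequence.split(":")
    steps = []
    for i in range(0, len(tokens), 2):
        wait = parse_duration(tokens[i])
        signal = tokens[i + 1] if i + 1 < len(tokens) else "KILL"
        steps.append((wait, signal))
    return steps

print(parse_pre_stop_sequence("1m:USR2:30s:USR1:50s"))
# [(60, 'USR2'), (30, 'USR1'), (50, 'KILL')]
```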

Valid time units are s for seconds, m for minutes and h for hours. Valid signal values are (without the quotes):