Node auto scaling can be enabled by setting docker:auto-scale:enabled to true or
by running tsuru node-autoscale-rule-set to configure an autoscale rule. When
enabled, tsuru will try to add, remove, and rebalance the docker nodes it uses.
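
As a sketch, enabling auto scale in tsuru.conf might look like the following
(the max-container-count and scale-down-ratio values here are illustrative,
not defaults):

```yaml
docker:
  auto-scale:
    enabled: true
    # setting max-container-count selects count based scaling;
    # leaving it unset (with memory scheduling configured) selects
    # memory based scaling
    max-container-count: 10
    scale-down-ratio: 1.33
```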

Node scaling algorithms run on clusters of docker nodes; each cluster
corresponds to the pool its nodes belong to.

Two different scaling algorithms may be used, depending on how tsuru is
configured: count based scaling and memory based scaling.

Count based scaling is chosen if docker:auto-scale:max-container-count is set.
Having the max-container-count value as \(max\), the number of nodes in the
cluster as \(nodes\), and the total number of containers across all of the
cluster’s nodes as \(total\), we get the number of free slots \(free\) with:

\[free = max * nodes - total\]

If \(free < 0\) then a new node will be added and tsuru will rebalance
containers using the new node.
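
The count based check above can be sketched as follows (a minimal
illustration, not tsuru's actual implementation; the function and parameter
names are made up):

```python
def count_based_needs_new_node(max_count, node_container_counts):
    """Decide whether a new node is needed under count based scaling.

    max_count: the docker:auto-scale:max-container-count setting.
    node_container_counts: number of containers on each node in the pool.
    """
    nodes = len(node_container_counts)
    total = sum(node_container_counts)
    free = max_count * nodes - total  # free container slots in the pool
    return free < 0  # scale up when the pool is over capacity


# With max-container-count = 5 and two nodes holding 6 and 5 containers,
# free = 5 * 2 - 11 = -1, so a new node is needed.
print(count_based_needs_new_node(5, [6, 5]))
```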

Memory based scaling is chosen if docker:auto-scale:max-container-count is not
set and your scheduler is configured to use the nodes’ memory information, by
setting docker:scheduler:total-memory-metadata and
docker:scheduler:max-used-memory.
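
For illustration, a memory based setup in tsuru.conf might look like this (the
metadata key name and the ratio value are example choices, not defaults):

```yaml
docker:
  scheduler:
    # node metadata key holding each node's total memory
    total-memory-metadata: totalMemory
    # fraction of a node's total memory that may be reserved by containers
    max-used-memory: 0.8
```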

Having the amount of memory required by the plan with the largest memory
requirement as \(maxPlanMemory\), a new node will be added if, for every node,
the amount of unreserved memory (\(unreserved\)) satisfies:

\[unreserved < maxPlanMemory\]

Considering the same \(maxPlanMemory\) and the
docker:auto-scale:scale-down-ratio value as \(ratio\), a node will be removed
if its current containers can be distributed across the other nodes in the
same pool and at least one node still has unreserved memory
(\(unreserved\)) satisfying:

\[unreserved \geq maxPlanMemory * ratio\]
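
The memory based conditions above can be sketched together (a minimal
illustration; the function names and exact comparison operators are
assumptions, not tsuru's actual code):

```python
def memory_based_should_add(unreserved_per_node, max_plan_memory):
    """Scale up when no node can fit the largest plan's memory need."""
    return all(u < max_plan_memory for u in unreserved_per_node)


def memory_based_can_remove(unreserved_per_node, max_plan_memory, ratio):
    """A node may only be removed if at least one remaining node keeps
    unreserved memory of at least max_plan_memory * ratio.

    Sketch only: the real check also verifies that the removed node's
    containers fit on the remaining nodes in the same pool.
    """
    return any(u >= max_plan_memory * ratio for u in unreserved_per_node)


# Nodes with 256 MB and 128 MB unreserved, largest plan needs 512 MB:
# every node is below 512, so a new node should be added.
print(memory_based_should_add([256, 128], 512))
```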

Each time tsuru tries to run an auto scale action (add, remove, or rebalance),
it creates an auto scale event. This event records the result of the action
and any errors that occurred during its execution.