I have a multi-tier load balancing setup. To ensure that the “downstream” (second-in-line) haproxy instances are still working after a config change (configs change frequently; this is a multi-tenanted SaaS environment), the reload process for each haproxy instance runs some sanity checks against the newly-reloaded instance before returning it to service.

To implement this, I have a “healthcheck” frontend which is created with disabled set. This is the frontend that the “upstream” load balancer hits to test whether to include the downstream instance in the rotation. The theory is that once the sanity checks are completed successfully, the instance can have its “healthcheck” frontend enabled, and life goes on.
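A minimal sketch of that arrangement, under my stated assumptions (the frontend name, port, and URI here are illustrative, not the real values from my config):

```haproxy
# Sketch only: a "healthcheck" frontend created in the disabled state,
# so the upstream load balancer's probes fail until it is enabled.
frontend healthcheck
    bind :8081
    mode http
    disabled              # start out of rotation until sanity checks pass
    monitor-uri /lb-health
```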

However, this doesn’t work in practice. A frontend which is created as “disabled” cannot be enabled. Any attempt to enable the frontend reports, “Frontend was previously shut down, cannot enable”. But… I only disabled it… I didn’t shut it down!
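For illustration, this is roughly the step that fails, assuming the runtime stats socket is bound at /var/run/haproxy.sock and the frontend is named “healthcheck” (both assumptions on my part):

```
# Once the sanity checks pass, try to bring the frontend into service
# over the stats socket:
echo "enable frontend healthcheck" | socat stdio /var/run/haproxy.sock
# For a frontend created with "disabled", haproxy refuses with:
#   Frontend was previously shut down, cannot enable
```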

I’m fairly certain that the behaviour is unintentional, and the bug is straightforward to fix:

The code being changed goes back to the dawn of time; my guess is simply that nobody in their right mind does things the way I do them…

Anyhoo, I’d be interested in seeing this behaviour changed in 1.6, so I can stop carrying this local patch, or, alternatively, some pointers on what I’m doing wrong and how I can achieve what I want in some other manner.

The “disabled” keyword starts the server in the “disabled” state. That means
that it is marked down in maintenance mode, and no connection other than the
ones allowed by persist mode will reach it. It is very well suited to setup
new servers, because normal traffic will never reach them, while it is still
possible to test the service by making use of the force-persist mechanism.
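The force-persist mechanism mentioned there can be sketched roughly like this (a hypothetical example; the ACL, addresses, and server name are mine, not from the docs):

```haproxy
# Hypothetical sketch: let designated test hosts reach a server that
# is otherwise disabled, so it can be exercised before going live.
backend app
    acl is_tester src 192.0.2.0/24    # example test-host range
    force-persist if is_tester
    server new_node 10.0.0.5:8080 disabled
```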

I would cross-post this to the mailing list. It may well be a bug, and for now that’s where Willy wants bug reports to go.