
# Bare Metal Service Load Balancers

AKA "how to set up a bank of haproxy for platforms that don't have load balancers".

## Project Status

**THIS PROJECT IS DEPRECATED AND UNMAINTAINED.**

The Ingress Catalog lists a few third-party replacements for this project, including an HAProxy Ingress Controller.

## Disclaimer

This is a work in progress.

A better way to achieve this will probably emerge once the discussions in #260 and #561 converge.

Backends are pluggable, but haproxy is the only load balancer with a working implementation.

I have never deployed haproxy to production, so contributions are welcome (see wishlist for ideas).

For fault-tolerant load balancing of ingress traffic, you need:

- Multiple hosts running load balancers.
- Multiple A records for each hostname in a DNS service.

This module will not help with the latter.

## Overview

### Ingress

There are two ways to expose a service to ingress traffic in the current Kubernetes service model:

1. Create a cloud load balancer.
2. Allocate a port (the same port) on every node in your cluster and proxy ingress traffic through that port to the endpoints.

The service-loadbalancer aims to give you option 1 on bare metal, making option 2 unnecessary for the common case. The replication controller manifest in this directory creates a service-loadbalancer pod on all nodes with the role=loadbalancer label (see the sketch after this list). Each service-loadbalancer pod contains:

- A load balancer controller that watches the Kubernetes API for services and endpoints.
- A load balancer manifest, used to bootstrap the load balancer. The load balancer itself is pluggable, so you can easily swap haproxy for something like f5 or pound.
- A template used to write load balancer rules. This is tied to the load balancer used in the manifest, since each one has a different config format.
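
The rc pins its pods to labeled nodes; you can label a node with `kubectl label node <node-name> role=loadbalancer`. A minimal sketch of the relevant piece of the pod template, assuming the stock rc.yaml uses a standard nodeSelector (field values here are illustrative, not copied from this repo):

```yaml
# Hypothetical excerpt from the replication controller's pod template:
# schedule load balancer pods only on nodes labeled role=loadbalancer.
spec:
  template:
    metadata:
      labels:
        app: service-loadbalancer
    spec:
      nodeSelector:
        role: loadbalancer
```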

**L7 load balancing of HTTP services:** The load balancer controller automatically exposes HTTP services to ingress traffic on all nodes with a role=loadbalancer label. It assumes all services are HTTP unless otherwise instructed. Each HTTP service gets a load balancer forwarding rule, such that requests received on http://loadbalancer-node/serviceName:port are balanced across its endpoints according to the algorithm specified in the loadbalancer.json manifest. You do not need more than a single load balancer pod to balance across all your HTTP services (you can scale the rc to increase capacity).
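
For example, an ordinary service like the hypothetical one below (name, ports, and selector are illustrative, not from this repo) would be picked up automatically and served at http://loadbalancer-node/echoserver:

```yaml
# Hypothetical HTTP service; the controller should expose it at
# http://loadbalancer-node/echoserver on every role=loadbalancer node.
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: echoserver
```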

**L4 load balancing of TCP services:** Since ports must be specified at pod creation time (Kubernetes doesn't currently support port ranges), a single load balancer is tied to a set of preconfigured node ports, and hence to a set of TCP services it can expose. The load balancer controller will dynamically add rules for each configured TCP service as it pops into existence. However, each "new" TCP service (one unspecified in the tcpServices section of the loadbalancer.json) will need you to open up a new container-host port pair for traffic. You can achieve this by creating a new load balancer pod with the targetPort set to the name of your service, and that service specified in the tcpServices map of the new load balancer, as sketched below.
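
A sketch of the idea with illustrative names and port numbers (the real layout lives in rc.yaml and loadbalancer.json in this repo): the load balancer pod publishes a container-host port pair whose name matches the TCP service.

```yaml
# Hypothetical excerpt from the load balancer pod spec: open a
# container-host port pair for a TCP service named "mysql".
ports:
- name: mysql          # matches the service listed in tcpServices
  containerPort: 3306  # port haproxy listens on inside the pod
  hostPort: 3306       # port opened on the node for ingress traffic
```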

### Cross-cluster load balancing

On cloud providers that offer a private IP range for all instances on a network, you can set up multiple clusters in different availability zones on the same network and load balance services across these zones. On GCE, for example, every instance is a member of a single network. A network performs the same function that a router does: it defines the network range and gateway IP address, handles communication between instances, and serves as a gateway between instances and other networks. On such networks the endpoints of a service in one cluster are visible in all other clusters in the same network, so you can set up an edge load balancer that watches the Kubernetes master of another cluster for services. Such a deployment allows you to fall back to a different AZ during times of duress or planned downtime (e.g. a database update).

Let's introduce a small twist. The nginx-app example exposes the nginx service using NodePort, which means it opens up a random port on every node in your cluster and exposes the service on that port. Delete the `type: NodePort` line before creating the service, as in the sketch below.
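
A sketch of the edited manifest (field values are illustrative; consult the nginx-app example for the real one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
spec:
  # type: NodePort   <- delete this line before creating the service
  ports:
  - port: 443
  selector:
    app: nginx
```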

### HTTPS

- The nginxsvc is specified in the tcpServices of the loadbalancer.json manifest (see the sketch after this list).
- The HTTPS service is accessible directly on the specified port, which matches the service port.
- You need to ensure there is no collision between these service ports on the node.
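
As a purely hypothetical illustration of the concept (the authoritative schema is loadbalancer.json in this repo; this sketch assumes tcpServices simply maps a service name to the node port it is served on):

```yaml
# Hypothetical tcpServices entry, shown in YAML for readability:
# serve the nginxsvc HTTPS port directly on node port 443.
tcpServices:
  nginxsvc: 443
```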

### SSL Termination

To terminate SSL for a service, you just need to annotate the service with `serviceloadbalancer/lb.sslTerm: "true"`, as seen below. This will cause your service to be served behind /{service-name}, or /{service-name}:{port} if it is not running on port 80. This mimics the standard HTTP functionality.
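
A minimal sketch of the annotation in place (the service name, port, and selector are illustrative; the annotation key is the one named above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    serviceloadbalancer/lb.sslTerm: "true"
spec:
  ports:
  - port: 80
  selector:
    app: myservice
```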

### Custom ACL

Adding the aclMatch annotation allows you to serve the service on a specific path, although URLs will not be rewritten back to root. The following will make your service available at /test, and your web service will be passed URLs with /test on the front.
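
A sketch, assuming the annotation key uses the same serviceloadbalancer/lb. prefix as sslTerm (the exact key and value format should be checked against the controller source; service details are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    serviceloadbalancer/lb.aclMatch: /test
spec:
  ports:
  - port: 80
  selector:
    app: myservice
```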

You can tell the controller to expose services in a different namespace through a command line argument. Currently, each namespace needs a different load balancer (see wishlist). Modify the rc.yaml file to supply the namespace argument by adding lines like the following to the bottom of the load balancer spec:
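
A sketch under the assumption that the flag is spelled --namespace (check the controller's flags for the exact spelling; the namespace value is illustrative):

```yaml
# Hypothetical addition to the service-loadbalancer container spec
# in rc.yaml; the flag spelling is an assumption.
args:
- --namespace=my-namespace
```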

Though the load balancer can watch services across namespaces, you can't start two load balancers with the same name in a single namespace. So if you already have a load balancer running, either change the name of the rc or change the namespace in rc.yaml:
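
For example (names are illustrative):

```yaml
# In rc.yaml: give the second load balancer a distinct rc name,
# or put it in a different namespace.
metadata:
  name: service-loadbalancer-2  # changed name
  namespace: default            # or change this instead
```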