The above YAML would expose port 8080 of our helloworld Pods on the http port of the provisioned ELB.

Exposing on a non-http port and protocol

You can change the port and protocol of the load balancer by changing the targetPort field and adding a ports.protocol field. This way you can expose TCP services directly without having to customize the Ingress Controller.
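As a sketch, a Service of type LoadBalancer exposing a hypothetical TCP service directly could look like this (the name, selector, and port 6379 are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld-tcp
spec:
  type: LoadBalancer
  selector:
    app: helloworld
  ports:
    - port: 6379        # port the load balancer listens on
      targetPort: 6379  # port the Pods listen on
      protocol: TCP     # forwarded as plain TCP, no HTTP proxying
```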

Customizing the External Load Balancer

This section will focus on the custom options you can set on AWS Load Balancers via a Service of type LoadBalancer, but when available will also explain the settings for Azure Load Balancers. You can configure these options by adding annotations to the service.

Internal Load Balancers

If you want the AWS ELB to be available only within your VPC (which can be extended to other VPCs via VPC peering), use the following annotations:
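A sketch of what such a Service's annotations could look like, using the standard Kubernetes annotation names (in the Kubernetes 1.9 era the internal annotation took a CIDR-style value; newer versions also accept "true"):

```yaml
metadata:
  name: my-service
  annotations:
    # Restrict the ELB to the VPC's internal address space.
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    # The protocol the Pods speak: tcp, ssl, http, or https.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
```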

The second annotation, service.beta.kubernetes.io/aws-load-balancer-backend-protocol, specifies which protocol the Pods speak. For HTTPS and SSL, the ELB will expect the Pods to authenticate themselves over the encrypted connection.

HTTP and HTTPS will select layer 7 proxying: the ELB will terminate the connection with the user, parse headers, and inject the X-Forwarded-For header with the user's IP address when forwarding requests (Pods will only see the IP address of the ELB at the other end of their connections).

TCP and SSL will select layer 4 proxying: the ELB will forward traffic without modifying the headers.

In a mixed-use environment where some ports are secured and others are left unencrypted, the following annotations may be used:
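These are likely the standard certificate and SSL-port annotations; a sketch (the certificate ARN is a placeholder):

```yaml
metadata:
  name: my-service
  annotations:
    # The Pods speak plain HTTP behind the ELB.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # ARN of an ACM or IAM certificate used for SSL termination (placeholder).
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/example
    # Only port 443 is terminated with SSL; the other ports stay unencrypted.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
```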

AWS Network Load Balancer

AWS is in the process of replacing ELBs with NLBs (Network Load Balancers) and ALBs (Application Load Balancers). NLBs have a number of benefits over "classic" ELBs, including scaling to many more requests. Alpha support for NLBs was added in Kubernetes 1.9. As it's an alpha feature it's not yet recommended for production workloads, but you can start trying it out.
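In Kubernetes 1.9 the alpha NLB support is enabled with a single annotation on the Service:

```yaml
metadata:
  name: my-service
  annotations:
    # Provision an NLB instead of a Classic ELB (alpha in Kubernetes 1.9).
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```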

Other AWS ELB Configuration Options

There are more annotations to manage Classic ELBs that are described below.

metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    # The time, in seconds, that the connection is allowed to be idle (no data has
    # been sent over the connection) before it is closed by the load balancer.
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    # Specifies whether cross-zone load balancing is enabled for the load balancer.
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"
    # A comma-separated list of key-value pairs which will be recorded as
    # additional tags in the ELB.
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""
    # The number of successive successful health checks required for a backend to
    # be considered healthy for traffic. Defaults to 2, must be between 2 and 10.
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    # The number of unsuccessful health checks required for a backend to be
    # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10.
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"
    # The approximate interval, in seconds, between health checks of an
    # individual instance. Defaults to 10, must be between 5 and 300.
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
    # The amount of time, in seconds, during which no response means a failed
    # health check. This value must be less than the
    # service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval value.
    # Defaults to 5, must be between 2 and 60.
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"
    # A list of additional security groups to be added to the ELB.

Using Multiple Ingress Controllers

By default a cluster in Giant Swarm is bootstrapped with a default Ingress Controller based on NGINX. This Ingress Controller is registered with the default nginx Ingress Class.

You can run additional Ingress Controllers by exposing them through Services of type LoadBalancer as explained above.

Some use cases for this might be:

An Ingress Controller that is behind an internal ELB for traffic between services within the VPC (or a group of peered VPCs)

An Ingress Controller behind an ELB that already terminates SSL

An Ingress Controller with different functionality or performance

Note that if you are running multiple Ingress Controllers you need to annotate each Ingress with the appropriate class, e.g.:

kubernetes.io/ingress.class: "nginx"

or

kubernetes.io/ingress.class: "nginx-internal"

Not specifying the annotation will lead to multiple ingress controllers claiming the same ingress. Specifying a value which does not match the class of any existing ingress controllers will result in all ingress controllers ignoring the ingress.

Further note that if you are running additional Ingress Controllers you might need to configure them so their Ingress Class does not collide with the class of our default NGINX Ingress Controller. For the community supported NGINX Ingress Controller this is described in the official documentation.
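For the community NGINX Ingress Controller, the class is set via an argument on the controller binary; a sketch of the relevant container args (the surrounding Deployment spec is omitted and the class name is a placeholder):

```yaml
args:
  - /nginx-ingress-controller
  # Register this controller under a non-default class so it does not
  # collide with the default "nginx" class.
  - --ingress-class=nginx-internal
```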
