Routing external traffic into your Kubernetes services

There are several methods to route internet traffic into your Kubernetes cluster, and choosing the right one requires weighing factors such as cost, security, and maintainability. This article guides you through choosing an approach to route external traffic to your Kubernetes cluster with these factors in mind.

Before routing external traffic, let’s get some knowledge of the routing mechanism inside the cluster. In Kubernetes, every application runs inside a pod. A pod is the smallest deployable unit and wraps one or more containers, which gives it several advantages over static instances.

To access an application running inside a pod, there should be a dedicated service for it. The mapping between the service and the pods is determined by a “label selector” mechanism. Below is a sample YAML manifest that can be used to create a Hello World application; it gives a clear idea of the label selector mapping.
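A minimal Deployment sketch for such an application might look like the following. The names and the container image are illustrative assumptions (not taken from the original article); the important part is the app: helloworld label on the pod template, which the service will select on.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld            # illustrative name
  labels:
    app: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld       # this label is what the service's selector matches
    spec:
      containers:
      - name: helloworld
        # assumed sample image that listens on port 8080
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
```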

Let’s see how we can create a Kubernetes service for the above Hello World application. In this example, I have used the app=helloworld label to define my application. You now need to use this label as the selector of your service; only then does the service know which pods it should look after. Below is the sample service corresponding to the above application.
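A sketch of that service manifest could look like this. The name (service-helloworld), port (8080), and selector come from the article’s own description; the rest follows standard Kubernetes conventions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-helloworld
spec:
  type: ClusterIP             # the default service type
  selector:
    app: helloworld           # matches pods carrying this label
  ports:
  - protocol: TCP
    port: 8080                # port the service is exposed on
    targetPort: 8080          # port the container listens on
```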

This specification will create a new Service named “service-helloworld” which targets TCP port 8080 on any Pod with the "app=helloworld" label.

Here you can see the service type is “ClusterIP.” It is the default type of a Kubernetes service. Besides this, there are two other service types: “NodePort” and “LoadBalancer.” The mechanism for routing traffic into a Kubernetes cluster depends on the service type you use when defining a service. Let’s dig into the details.

LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. In AWS, an ELB is created for each service of type “LoadBalancer.” You can then access the service using the dedicated DNS name of the ELB.
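As a sketch, a LoadBalancer variant of the Hello World service might look like this. The service name and the external port 80 are illustrative assumptions; the cloud provider provisions the load balancer automatically when this manifest is applied.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld-lb         # illustrative name
spec:
  type: LoadBalancer          # asks the cloud provider for an external LB (an ELB on AWS)
  selector:
    app: helloworld
  ports:
  - port: 80                  # external port on the load balancer (assumed)
    targetPort: 8080          # container port behind it
```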

NodePort: Exposes the service on each Node’s IP at a static port. You can connect to a NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>. This port is fixed for the service and falls in the range 30000–32767.
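A NodePort variant could be sketched as follows (again with an illustrative name). If the nodePort field is left out, as here, Kubernetes assigns one dynamically from the 30000–32767 range, which is the behavior discussed below.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
  - port: 8080
    targetPort: 8080
    # nodePort: omitted on purpose — Kubernetes picks a port
    # from 30000-32767; you discover it after creation, e.g. with
    # kubectl get service helloworld-nodeport
```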

ClusterIP: The default Kubernetes service type. Exposes the service on a cluster-internal IP, making it reachable only from within the cluster. To expose such services to the outside, you need an ingress controller inside your cluster.

Considering these service types, the easiest way to expose a service outside the cluster is the “LoadBalancer” type. But cloud load balancers cost money, and every LoadBalancer Kubernetes service creates a separate cloud load balancer by default, which makes this service type very expensive. Can you bear the cost of a deployment that creates a separate ELB (if the cluster is in AWS) for every single service inside the Kubernetes cluster?

The next choice we have is the “NodePort” service type, but it comes with several drawbacks. By design it bypasses almost all the network security provided by the Kubernetes cluster, and it allocates a port dynamically from the range 30000–32767, so standard ports such as 80, 443, or 8443 cannot be used. Because of this dynamic allocation, you do not know the assigned port in advance; you need to examine it after creating the service, and on most hosts you also need to open the relevant port in the firewall afterwards.

The final and most recommended approach to routing traffic to your Kubernetes services is the “ClusterIP” service type. Its one and only drawback is that you cannot call the services from outside the cluster without a proxy, because by default a ClusterIP is only accessible to services inside its own Kubernetes cluster. Let’s talk about how the Kubernetes ingress controller can help us expose ClusterIP services outside the network.

The following diagram illustrates the basic architecture of the traffic flow to your ClusterIP services through the Kubernetes ingress controller.

If you have multiple services deployed in a Kubernetes cluster, I recommend the above approach due to several advantages.

Ingress enables you to configure rules that control the routing of external traffic to the services.

You can handle SSL/TLS termination at the Nginx Ingress Controller level.

You get support for URI rewrites.

To provide external access to your Kubernetes services, you create an Ingress resource that defines the connectivity rules, including the URI path and the backing service name. The Ingress controller then automatically configures a frontend load balancer to implement the Ingress rules.
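As an example, an Ingress resource for the Hello World service above might be sketched as follows. The host name, ingress class, and rewrite annotation are illustrative assumptions; the manifest assumes the Nginx Ingress Controller mentioned earlier is installed in the cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld-ingress            # illustrative name
  annotations:
    # Nginx-specific annotation enabling URI rewrites (assumed setup)
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx             # assumes the Nginx Ingress Controller
  rules:
  - host: helloworld.example.com      # hypothetical host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-helloworld  # the ClusterIP service defined earlier
            port:
              number: 8080
```

With this in place, external traffic reaching the ingress controller for helloworld.example.com is routed to the ClusterIP service, which stays internal to the cluster.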