… but that ended up with you sneaking into the bathroom to babble chunks of keywords you barely remember.

“Ok Google, what is a cluster of Ingress in a cube of nodes? … Please please please help me.”

Nope. Hasn’t happened to me. Not even once. No Sir!

But let’s pretend that it did happen (hypothetically) several years ago, and that all the articles I stumbled upon during my research were made of such cryptic symbols that tears started to come out of my eyes (and they weren’t tears of joy).

I said: Let’s pretend.

So based on this fictional situation, let’s try to write the easiest tutorial that involves setting up Traefik as an Ingress Controller for Kubernetes. (And I promise: I will explain everything that is not obvious.)

You can skip this part if you’re familiar with Kubernetes.

What You Need to Know about Kubernetes

Kubernetes, as in Cluster

Kubernetes is a cluster technology. It means that you will see a cluster of computers as one entity. You will not deploy an application on a specific computer, but somewhere in the cluster, and it’s Kubernetes’s job to determine the computer that best matches the requirements of your application.

Machines in the cluster can have different characteristics

Nodes

Each computer in the cluster is called a node. Eventually, the nodes will host your applications. The nodes can be spread throughout the world in different data centers, and it’s Kubernetes’s job to make them communicate as if they were neighbors.

Nodes are machines in the cluster

Containers

I guess that you are already familiar with containers. Just in case you’re not: they used to be the next step of virtualization, and today they are just that: the de facto standard for virtualization. Instead of configuring a machine to host your application, you wrap your application into a container (along with everything your application needs: OS, libraries, dependencies, …) and deploy it on a machine that hosts the container engine, which is in charge of actually running the containers.

Containers technically don’t belong to Kubernetes; they’re just one of the tools of the trade.

Containers embed the required technologies

Basically, Kubernetes sees containers like in the following diagram:

The container engine is responsible for handling the underlying technologies needed in your containers

Pods

Because Kubernetes loves layers, it adds the notion of Pods around your containers. Pods are the smallest unit you will eventually deploy to the cluster. A single Pod can hold multiple containers, but for the sake of simplicity, let’s say for now that a single Pod hosts a single container.

So, in a Kubernetes world, a Pod is the new name for an instance of your application, an instance of your service. Pods will be hosted on the Nodes, and it’s Kubernetes’s job to determine which Node will host which Pod.

If a Pod consists of multiple containers, they will share the same resources / network

If you ask for 5 replicas of your Pod, it’s Kubernetes’s job to ensure that your cluster hosts those 5 replicas at any given time.

Kubernetes is responsible for deploying the pods to the right places
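To make the replica idea concrete, here is a minimal sketch of a Deployment manifest (the names, labels, and image are hypothetical, not from this article) asking Kubernetes to keep 5 replicas of a Pod running:

```yaml
# Hypothetical Deployment: Kubernetes reconciles toward 5 running Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5              # desired number of Pod replicas
  selector:
    matchLabels:
      app: my-app          # which Pods this Deployment manages
  template:                # the Pod template: one container per Pod here
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # hypothetical image name
```

If a Pod dies or a Node disappears, Kubernetes notices the count dropped below 5 and schedules a replacement somewhere in the cluster.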

Services

Oh, yet another fun concept …

Pods’ lifecycles are erratic; they come and go at Kubernetes’s will.

Not healthy? Killed.

Not in the right place? Cloned, and killed.

(No, you definitely don’t want to be ruled by Kube!)

So how can you send a request to your application if you can’t know for sure where it lives? The answer lies in services.

Services define sets of your deployed Pods, so you can send a request to “whichever available Pod of your application type.”
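As an illustration, here is a minimal sketch of a Service manifest (names and labels are hypothetical) that targets every ready Pod carrying a given label:

```yaml
# Hypothetical Service: routes traffic to any ready Pod labeled app: my-app.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # matches Pods by label, wherever they live
  ports:
  - port: 80           # port exposed by the Service inside the cluster
    targetPort: 80     # port the container actually listens on
```

You then talk to the stable Service name, and Kubernetes worries about which Pod (and which Node) actually answers.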

When requesting your application, you don’t care about its location or about which pod answers the request

Ingress

So we have Pods in our cluster.

We have Services so we can talk to them from within our cluster. (Technically, there are ways to make services available from the outside, but we’ll purposefully skip that part for now.)

Now, we’d like to expose our Services to the internet so customers can access our products.

Ingress objects are the rules that define the routes that should exist.

Ingress objects are the rules that define the routes to our services
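For a feel of what such a rule looks like, here is a minimal hypothetical Ingress manifest (the host, names, and port are assumptions; the API group also varies with your Kubernetes version, e.g. networking.k8s.io/v1 on recent clusters):

```yaml
# Hypothetical Ingress: route requests for my-app.example.com to a Service.
apiVersion: extensions/v1beta1   # networking.k8s.io/v1 on recent clusters
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com     # incoming hostname to match
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app    # Service that should receive the traffic
          servicePort: 80
```

Note that an Ingress object only *declares* the route; something still has to enforce it, which is exactly what the next section is about.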

Ingress Controller

Now that you have defined the rules to access your services from the outside, all you need is a component that routes the incoming requests according to the rules … and these components are called Ingress Controllers!

And since this is the topic of this article — Traefik is a great Ingress Controller for Kubernetes.

… oh? Is my opinion biased? Well, maybe … but all of the above is true!

Let’s Start Putting Everything Together!

This is the part where we actually start to deploy services into a Kube cluster and configure Traefik as an Ingress Controller.

Prerequisite

You have access to a Kubernetes cluster, and you have kubectl that points to that cluster.

Just so you know, here I’m using Kubernetes embedded in Docker for Mac.

What We’ll Do

We’ll use a pre-made container — containous/whoami — capable of telling you where it is hosted and what it receives when you call it. (Sorry to disappoint you, but I can confirm we won’t build the new killer app here.)

We’ll define two different Pods, a whoami and a whoareyou that will use this container.

We’ll create a deployment to ask Kubernetes to deploy 1 replica of whoami and 2 replicas of whoareyou (because we care more about others than we care about ourselves … how considerate!).

We’ll define two services, one for each of our Pods.

We’ll define Ingress objects to define the routes to our services.

We’ll set up Traefik as our Ingress Controller.

And because we love diagrams, here is the picture:

Setting Up Traefik Using Helm

Helm is a package manager for Kube and, for me, the most convenient way to configure Traefik without messing it up. (If you haven’t installed Helm yet, doing so is quite easy and fits in two commands, e.g. brew install kubernetes-helm and helm init.) Once Helm is ready, installing Traefik is as simple as helm install stable/traefik.
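Before wiring things up, we need the Pods, Services, and Ingress from the plan above. Here is a hedged sketch of what the files could look like — the resource names, labels, and file layout are my assumptions; only the containous/whoami image and the whoami.localhost host come from this article. First, a whoami.yml with the Deployment (1 replica) and its Service:

```yaml
# Sketch of whoami.yml: one whoami Pod plus its Service.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami-container
        image: containous/whoami   # replies with host and request details
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: whoami
```

Then a whoami-ingress.yml declaring the route (again, the exact API version depends on your cluster):

```yaml
# Sketch of whoami-ingress.yml: route whoami.localhost to the whoami Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami-ingress
spec:
  rules:
  - host: whoami.localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: whoami-service
          servicePort: http
```

Each file would be applied with the usual kubectl apply -f &lt;file&gt;.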

Now that we have described our Ingress, we ask Kubernetes to take the file into account with the following (same old) command:

kubectl apply -f whoami-ingress.yml

And now, since Traefik is our Ingress Controller, let’s see if something has changed in the dashboard …

Traefik has automatically detected the new Ingress!

That’s it, no reload, no additional configuration file (there were enough). Traefik has updated its configuration and is now able to handle the route whoami.localhost!

Let’s check …

Traefik redirects the requests as expected

Yes, our service is useful, but its UI could use a bit of love ❤️

Here, we confirm that Traefik is our Ingress Controller and that it is ready to bring its features to your services!

May I Ask For Load Balancing in Action?

Earlier, we talked about a WhoAreYou deployment with 2 replicas.

Let’s do it in just one file :-)
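Here is a hedged sketch of that single whoareyou.yml — Deployment, Service, and Ingress separated by `---`. The names and the whoareyou.localhost host are my assumptions; the image and the 2 replicas come from the article:

```yaml
# Sketch of whoareyou.yml: Deployment (2 replicas), Service, and Ingress in one file.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoareyou-deployment
spec:
  replicas: 2              # two Pods this time
  selector:
    matchLabels:
      app: whoareyou
  template:
    metadata:
      labels:
        app: whoareyou
    spec:
      containers:
      - name: whoareyou-container
        image: containous/whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoareyou-service
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: whoareyou
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoareyou-ingress
spec:
  rules:
  - host: whoareyou.localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: whoareyou-service
          servicePort: http
```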

There is not much to say here, except that this time we have 2 replicas … (replicas: 2)

So let’s ask Kubernetes to apply our new configuration file.

kubectl apply -f whoareyou.yml

And let’s go right away to Traefik’s dashboard.

And voilà!

Traefik detects the new Ingress and the two available Pods in the Service

First request, one of the Pod replies

Second request, the other Pod replies

Conclusion

If I had wanted to write an article that dealt with “just” configuring Traefik as an Ingress Controller for Kubernetes, well, it would have fit in one line:

helm install stable/traefik

Because it’s actually the only configuration operation that we’ve needed in the article. The rest is just explanations and demos.

Yes, this step was indeed as simple as that!

Go Further With Traefik!

Now that you know how to configure Traefik, it might be a good time to delve into its documentation and learn about its other features: automatic HTTPS with Let’s Encrypt, circuit breakers, load balancing, retries, websockets, HTTP/2, gRPC, metrics, and more.

Containous is the company that helps Traefik be the successful open-source project it is (and that provides commercial support for Traefik).