Tuesday, October 23, 2018

I have been working with Kubernetes in production for 2 years now. Over this time we have migrated clusters 3 or 4 times, changed deployment techniques and tools, and changed habits or pushed back on some of the hype and trends coming up every day on the Kubernetes scene. I guess nothing new here, this is what almost everybody does.

My personal take on this ever changing scene is to keep things simple - as much as possible, especially while your underlying infrastructure is changing from version to version. Kubernetes 1.4 (the first version I used in production) has the same basic constructs as versions 1.7 and 1.10 currently in production, but many other things come, go or change. When you are adapting to or trying to adopt Kubernetes in your org, you need to rely on these simple and basic constructs that will make your transition from one version to the other painless, though with time you master more advanced and `cryptic` features. Anyway the point is, keep it simple.

In the early adoption days, we followed what felt natural based on our previous experiences. We were (still are) creating services, spreading them around to accommodate different business needs and features. Kubernetes is our golden cage, and creating new services / apps never felt so easy. Going back to our previous experiences, while creating a new app / service you feel the urge to `expose` it somehow: to your private network, to some namespace, to some context. Using the service type `LoadBalancer` while on AWS, spinning up ELB's, felt like total magic! At first you have a couple of services, talking to each other, exposing some public API, and it was good: you were provisioning your ELB's, you could relate to the old concept of a DNS entry for a specific service. Old school I know, but it felt good.
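For context, this is roughly what such a service definition looks like - a minimal sketch, with the name and ports being hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api          # hypothetical service name
spec:
  type: LoadBalancer    # on AWS this provisions an ELB for you automatically
  selector:
    app: my-api         # routes to pods carrying this label
  ports:
    - port: 80          # port the ELB listens on
      targetPort: 8080  # port your pods actually serve on
```

One `kubectl apply` and a few minutes later you have a real, DNS-addressable load balancer in front of your pods - that is the `magic` part.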

Time goes by and the 2 services are 15 now; they talk to each other, they integrate. Creating so many ELB's or Services becomes a messy and error prone business! Even within the Kubernetes namespace context, dealing with so many different service entries is challenging - especially when you follow point to point integration. It was obvious that we needed some kind of proxy. A step further was needed.

Personally I always thought that the proposed Ingress API / solution from Kubernetes - despite the `good` intentions - was too much, overcomplicated for my over simplifying mind in its never ending battle to keep things simple. Also, I never felt good with the idea of having everything under the same ingress hose. So despite having examples of things like Traefik or Nginx as the reference Ingress controllers, personally I thought that I was increasing my platform's configuration entropy and complexity. Maybe a bit spoiled by all the nice things abstracted by Kubernetes after all this time, adding some serious config for doing something that simple never made it into my book. So yes to Ingresses and Proxies, but most probably multiple and independent Ingresses within the same cluster and not one.
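To illustrate the kind of config I mean, here is a sketch of a typical Ingress resource from that era of Kubernetes (the host, paths and service names are hypothetical) - and note this is only half the story, since you still have to deploy and operate the controller that interprets it:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # ties these rules to one specific controller
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            backend:
              serviceName: orders        # hypothetical backing service
              servicePort: 80
```

Every new service means another rule in this shared object (or another Ingress object), and the controller's own behavior is tuned through yet more annotations - this is the configuration entropy I wanted to avoid.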

On the other hand, internal proxies within the Kubernetes network were always something OK to do, but Kubernetes with its DNS service was always easing the pain - at the end of the day everything is just a label in a `service.yml` definition. Internal service names change or move easily.
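This is what I mean by `everything is just a label` - a minimal internal service sketch (names hypothetical), where the DNS name and the backing pods are decoupled:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: billing             # reachable in-cluster as billing.<namespace>.svc.cluster.local
spec:
  selector:
    app: billing-backend    # a label; point it at a different deployment and the DNS name stays stable
  ports:
    - port: 80
      targetPort: 8080
```

Swapping the implementation behind `billing` is a one-line change to the selector; no consumer of the DNS name needs to know.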

One night I ended up reading this article by R.Li (CEO of Datawire) and I was like `yes!! I totally agree with you`. This is how I decided to give Ambassador a go. What I really liked was the fact that Ambassador, which is a Kubernetes wrapper around the Envoy proxy (a bit of an oversimplified definition, apologies), takes away all the config that you would have to do using the Kubernetes Ingress resources, glues almost instantly onto your existing deployment - being aware of your existing services - and then leaves you only the task of configuring its proxying rules and proxying behavior.

So in 1 place, you get to spin up a horizontally scaled L7 proxy - no news here, you scale Ambassador the same way you scale your pods - and you can either feed it your proxying config directly or enhance your existing service definitions with metadata that Ambassador will detect (since it talks to and understands the Kubernetes API). Currently I prefer the first: adding all the configs directly to the Ambassador deployment instead of adding labels and metadata to our existing services. In a way, our Services and the pods behind them do not have a hint that they are being proxied, and this gives us great freedom to play around and change configs in 1 place without risking errors or mistakes to existing pieces of the puzzle!
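For a taste of what that centralized config looks like, here is a sketch of an Ambassador `Mapping` delivered via the `getambassador.io/config` annotation on the Ambassador service itself - this reflects the early `ambassador/v0` format, and the route and service names are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: orders_mapping
      prefix: /orders/     # public path prefix the proxy matches on
      service: orders      # plain in-cluster service name; the target needs no changes
spec:
  type: LoadBalancer
  selector:
    service: ambassador
  ports:
    - port: 80
      targetPort: 80
```

All routing lives here; the `orders` service itself is untouched, which is exactly the freedom described above.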