Previously, when I brought my site back online, I briefly mentioned the simple setup I threw together: Caddy running on a tiny GCE VM with a few scripts. Since then I’ve had plenty of time to experience the awesomeness that is managing services with Kubernetes at work, while developing Kubernetes’s testing infrastructure (which we run on GKE).

So I decided, of course, that it was only natural to migrate my own service(s) to Kubernetes for maximum dog-fooding. :kubernetes: ↔ :dog:

2) I also needed to configure RBAC for kube-lego, which doesn’t currently ship with RBAC configured out of the box. Again, this just involved applying a config update based on the comments at jetstack/kube-lego#99 with kubectl apply -f k8s/kube-lego.yaml. The config below probably gives kube-lego a lot more access than it needs, but I wasn’t particularly concerned about this, since this is a toy “cluster” for my personal site and the service is already managing my TLS certificates. :shrug:
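For a rough idea of what that kind of over-broad grant looks like, here’s a minimal sketch of binding cluster-admin to kube-lego’s service account. The names here (the kube-lego namespace and the default service account) are illustrative assumptions, not necessarily my exact config:

```yaml
# Illustrative only: bind the cluster-admin role to kube-lego's service account.
# This is far more access than kube-lego actually needs, but it makes the
# RBAC permission errors go away.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-lego
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-lego
```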

If you haven’t given Kubernetes a try but you’re already comfortable with Docker,
you should. Kubernetes makes managing services easy and portable.
The same kubectl commands I use to debug services for the Kubernetes project’s
infrastructure on GKE work just as well on my toy cluster at home. :smile:

If my site were a serious production service instead of a toy learning experience, I would seriously look toward GKE instead of a one-node “cluster” running on a DIY “server” sitting by my desk at home. Still, setting up a toy cluster with kubeadm was a great experience for experimenting with Kubernetes. I can recommend kubeadm for similar experiments: it’s quite simple to use once you have all the prerequisites installed and configured, and the docs are quite good. However, it won’t solve many of the things you’ll want for a production cluster.
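For context, the kubeadm bootstrap is roughly the following, per the kubeadm docs. Treat this as an outline rather than my exact commands; flags and versions vary:

```shell
# On the node, as root: initialize the control plane.
kubeadm init

# Make kubectl work for your regular user (paths per the kubeadm docs).
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Then install a pod network add-on (I used Calico) before running workloads.
```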

If you really just want to play with it first (and not host anything), check out minikube.

Addendum:

I also used Calico for my overlay network, but I haven’t really exercised it yet, so I can’t comment on it.

Kubernetes secrets are awesome. My simple Go service can just read in the GitHub webhook secret as an environment variable injected into the container without worrying about how the secret is loaded and stored.

To get a one-node cluster working you need to remove the master taint. This is a terrible idea for a production cluster, but great for tinkering and effectively using the kubelet as your PID 1.
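Concretely, that’s a one-liner (assuming the taint key used by the kubeadm releases of this era, node-role.kubernetes.io/master):

```shell
# Allow regular pods to schedule on the master node (fine for a toy cluster,
# terrible for production). The trailing "-" removes the taint.
kubectl taint nodes --all node-role.kubernetes.io/master-
```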

UPDATE: My site is on Netlify now, but I still run my own Kubernetes cluster to host other small projects. Hosting it on a toy Kubernetes cluster worked well, except when the power went out at my apartment … I’d like my site to be online even then, hence Netlify :upside_down_face: