Monthly Archives: January 2018

Having a master in a Kubernetes cluster is all very well and good, but if that master goes down the entire cluster cannot schedule new work. Pods will continue to run, but new ones cannot be scheduled and any pods that die will not get rescheduled.

Having multiple masters provides more resiliency: the remaining masters can pick up the work when one goes down. However, as I found out, setting up multi-master was quite problematic. The guide here only provided some help, so after trashing my own and my company’s test clusters, I have expanded on the linked guide.

First, add the subnet details for the new zone into your cluster definition (CIDR and subnet id), and make sure you name it something you can remember. For simplicity, I called mine eu-west-2c. If you have a definition for utility subnets (and you will if you use a bastion), make sure you also define a utility subnet for the new AZ.

kops edit cluster --state s3://bucket
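As a rough sketch, and assuming a private topology, the new subnet entries in the cluster spec might end up looking something like this (the CIDRs and subnet ids below are placeholders, not real values):

subnets:
- cidr: 172.20.96.0/19
  id: subnet-0abc123
  name: eu-west-2c
  type: Private
  zone: eu-west-2c
- cidr: 172.20.8.0/22
  id: subnet-0def456
  name: utility-eu-west-2c
  type: Utility
  zone: eu-west-2c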

Now, create your master instance groups. You need an odd number to enable quorum and avoid split-brain (I’m not saying prevent, and there are edge cases where this could happen even with quorum). I’m going to add eu-west-2b and eu-west-2c; AWS recently introduced the third London zone, so I’m going to use that.
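If you’re creating the instance groups with kops, something along these lines should do it (the instance group names, subnet names and state bucket are examples, so adjust them to your own cluster):

kops create instancegroup master-eu-west-2b --subnet eu-west-2b --role Master --state s3://bucket
kops create instancegroup master-eu-west-2c --subnet eu-west-2c --role Master --state s3://bucket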

Find the etcd and etcd-events pods and add them to this script. Change “clustername” to the name of your cluster, then run it. Confirm both member lists now include the two members (in my case, etcd-a and etcd-b).
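The script itself isn’t reproduced here, but the gist of it is something like the sketch below. The pod name, cluster name and peer port are assumptions based on a default kops install, so check them against your own setup, and repeat the member add inside the etcd-events pod with the etcd-events name, peer URL and port:

#!/bin/bash
# Sketch: register the new member with the existing (main) etcd cluster
ETCD_POD="etcd-server-ip-10-0-0-1"   # placeholder; find it with: kubectl -n kube-system get pods | grep etcd
CLUSTER="clustername"                # change to the name of your cluster

kubectl -n kube-system exec "$ETCD_POD" -- \
  etcdctl member add etcd-b "http://etcd-b.internal.${CLUSTER}:2380"

# Confirm both members (etcd-a and etcd-b) now appear
kubectl -n kube-system exec "$ETCD_POD" -- etcdctl member list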

(NOTE: the cluster will break at this point due to the missing second cluster member)

Wait for the master to show as initialised. Find the instance id of the master and put it into this script. Change the AWSSWITCHES to match any switches you need to provide to the awscli. For me, I specify my profile and region.

The script will run and output the status of the instance until it shows “ok”.
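The original script isn’t embedded here, but a minimal sketch of the idea, assuming the aws cli is set up, looks like this (INSTANCE_ID and AWSSWITCHES are placeholders):

#!/bin/bash
# Poll the new master's EC2 instance status until the checks pass
INSTANCE_ID="i-0123456789abcdef0"                     # the master's instance id
AWSSWITCHES="--profile myprofile --region eu-west-2"  # any extra awscli switches you need
while true; do
  STATUS=$(aws $AWSSWITCHES ec2 describe-instance-status \
    --instance-ids "$INSTANCE_ID" \
    --query 'InstanceStatuses[0].InstanceStatus.Status' --output text)
  echo "$STATUS"
  [ "$STATUS" = "ok" ] && break
  sleep 10
done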

Edit /etc/kubernetes/manifests/etcd.manifest and /etc/kubernetes/manifests/etcd-events.manifest:
Change the ETCD_INITIAL_CLUSTER_STATE value from new to existing.
Under ETCD_INITIAL_CLUSTER, remove the third master definition.
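A quick sanity-check of both files before moving on (just a convenience, not part of the original guide; the -A1 also shows the value if it sits on the following line):

grep -A1 ETCD_INITIAL_CLUSTER_STATE /etc/kubernetes/manifests/etcd.manifest /etc/kubernetes/manifests/etcd-events.manifest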

Stop the etcd docker containers

docker stop $(docker ps | grep "etcd" | awk '{print $1}')

Run this a few times until docker returns an error saying it needs at least one container name (meaning there are no more etcd containers left to stop).
There are two volumes mounted under /mnt/master-vol-xxxxxxxx: one contains /var/etcd/data-events/member/ and the other contains /var/etcd/data/member/, but which is which varies because the volume ids differ.
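A quick way to see which mounted volume is which, assuming they are mounted under /mnt as described above (the ids on your master will differ):

ls -d /mnt/master-vol-*/var/etcd/data*/member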

Wait for the master to show as initialised. Find the instance id of the master and put it into this script. Change the AWSSWITCHES to match any switches you need to provide to the awscli. For me, I specify my profile and region.

The script will run and output the status of the instance until it shows “ok”.

Edit /etc/kubernetes/manifests/etcd.manifest and /etc/kubernetes/manifests/etcd-events.manifest:
Change the ETCD_INITIAL_CLUSTER_STATE value from new to existing.

We DON’T need to remove the third master definition this time, since this is the third master.

Stop the etcd docker containers

docker stop $(docker ps | grep "etcd" | awk '{print $1}')

Run this a few times until docker returns an error saying it needs at least one container name (meaning there are no more etcd containers left to stop).
There are two volumes mounted under /mnt/master-vol-xxxxxxxx: one contains /var/etcd/data-events/member/ and the other contains /var/etcd/data/member/, but which is which varies because the volume ids differ.

Kubernetes is an awesome piece of kit: you can set applications to run within the cluster, make them visible only to apps within the cluster, and/or expose them to applications outside the cluster.

As part of my tinkering, I wanted to set up a Docker Registry to store my own images without having to make them public via Docker Hub. Doing this proved a bit more complicated than expected, since by default it requires SSL, which normally means purchasing and installing a certificate.

Enter Let’s Encrypt, which lets you get SSL certificates for free; and by using their API, you can have the certificates renewed regularly. Kubernetes has the kube-lego project, which handles this regular renewal. So here, I’ll go through enabling this for an application (in this case a docker registry, but it can be anything).

First, let’s ignore the lego project and set up the application so that it is accessible normally. As mentioned above, this is the docker registry.

I’m tying the registry storage to a PV claim, though you can modify this to tie it to S3 instead, etc.
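The manifest I used isn’t embedded here, but a minimal sketch of what it could look like is below. The names, storage size and LoadBalancer service type are my assumptions (the service port of 9000 matches the example further down), so adapt it to your own setup:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: storage
          mountPath: /var/lib/registry
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: registry-storage
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  type: LoadBalancer
  selector:
    app: registry
  ports:
  - port: 9000
    targetPort: 5000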

Once you’ve applied this, verify your config is correct by ensuring you have an external endpoint for the service (use kubectl describe service registry | grep "LoadBalancer Ingress"). On AWS, this will be an ELB; on other clouds, you might get an IP. If you get an ELB, CNAME a friendly name to it. If you get an IP, create an A record for it. I’m going to use registry.blenderfox.com for this test.
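On AWS, for example, the CNAME could be created with route53 along these lines (the hosted zone id and ELB hostname are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "registry.blenderfox.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "my-elb-1234567890.eu-west-2.elb.amazonaws.com"}]
    }
  }]
}'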

Verify by doing this. Bear in mind it can take a while before DNS records update, so be patient.

host ${SERVICE_DNS}

So if I had set the service to be registry.blenderfox.com, I would do

host registry.blenderfox.com

If done correctly, this should resolve to the ELB name, which in turn resolves to the ELB’s IP addresses.

Next, try to tag a docker image in the format registry-host:port/imagename, for example registry.blenderfox.com:9000/my-image.

Note we are not using a port this time as there is now support for SSL.
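For illustration, with the registry name used above (the image name is just an example), the tag and push now look like this:

docker tag my-image registry.blenderfox.com/my-image
docker push registry.blenderfox.com/my-image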

BOOM! Success.

The tls section indicates the host to request the cert for, and the backend section indicates which backend to pass the request on to. The body-size config is at the nginx level, so if you don’t change it, you can only upload a maximum of 64MB even if the backend service (the docker registry in this case) supports more. I have it set here to “1g” so I can upload up to 1GB (some docker images can be pretty large).
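The ingress itself isn’t embedded here, but a rough reconstruction of what it looks like follows. The annotation names assume the nginx ingress controller plus kube-lego of that era, and the secret name is a placeholder, so check them against the versions you run:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: registry
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"              # tells kube-lego to request/renew the certificate
    ingress.kubernetes.io/proxy-body-size: "1g" # raise nginx's upload limit for large image layers
spec:
  tls:
  - hosts:
    - registry.blenderfox.com
    secretName: registry-tls
  rules:
  - host: registry.blenderfox.com
    http:
      paths:
      - path: /
        backend:
          serviceName: registry
          servicePort: 9000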


From the Kubernetes blog, the next version of Kubernetes has been released. And one feature has definitely caught my eye:

Windows Support (beta)

Kubernetes was originally developed for Linux systems, but as our users are realizing the benefits of container orchestration at scale, we are seeing demand for Kubernetes to run Windows workloads. Work to support Windows Server in Kubernetes began in earnest about 12 months ago. SIG-Windows has now promoted this feature to beta status, which means that we can evaluate it for usage.

So Windows users can now hook Windows boxes into their cluster, which leads to an interesting case of mixed-OS clusters. Strictly speaking, that’s already possible now, with a mix of Linux distributions able to run Kubernetes.


Tried a different route today. Still sore from the hour run yesterday. This new route turns out to be just under 5k, though I’m not sure that’s right, since my Fitbit seemed to lose communication with my phone and didn’t track the route properly. Guess I’ll try again tomorrow, maybe.

Still, I got two achievements on the run: two PRs on segments (which were tracked properly).