My adventures setting up a local Kubernetes control plane and cluster.

The name Kubernetes originates from Greek, meaning helmsman or pilot, and is the root of governor and cybernetic.

Kubernetes is a portable, extensible open-source platform for managing containerised workloads and services that facilitates both declarative configuration and automation.

It’s a platform for managing containers.

A running Kubernetes cluster contains node agents (kubelet) and a cluster control plane (aka master), with cluster state backed by a distributed storage system (etcd).

While all the big cloud providers now have k8s offerings, it’s still feasible to get all this set up on your own kit (VMs or bare metal), thanks to distributions like Agile Stacks and Rancher. I found the Rancher documentation outstanding, so in this guide I’ll be going with Rancher.

Terminology

Each computing resource in a Kubernetes cluster is called a node. Nodes can be either bare metal or VMs. Kubernetes classifies nodes into three types: etcd nodes, control plane nodes, and worker nodes.
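Once the cluster is up, you can see how each node has been classified. A quick sketch, assuming kubectl is installed and pointed at the cluster (node names here are illustrative):

```shell
# List nodes and the roles they were provisioned with
kubectl get nodes

# Example output (illustrative):
# NAME      STATUS   ROLES                      AGE   VERSION
# node-1    Ready    controlplane,etcd,worker   10m   v1.11.6
```

A single machine can hold multiple roles, which is common in small home-lab clusters.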

Machine Configuration

Static IPs

On a Red Hat based distro, head into /etc/sysconfig/network-scripts and edit the ifcfg script that relates to your network interface (in my case enp0s3):

$ sudo vim /etc/sysconfig/network-scripts/ifcfg-enp0s3

By default, interfaces are set up to use DHCP. Set BOOTPROTO to none for a static IP. My script ended up looking like this:
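The original file isn't reproduced here, but a typical static-IP ifcfg script looks something like the following (the addresses are illustrative; substitute your own network's values):

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp0s3
# Illustrative static configuration -- adjust addresses for your network
TYPE=Ethernet
NAME=enp0s3
DEVICE=enp0s3
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.115
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.1
```

After saving, restart networking (e.g. `sudo systemctl restart network`) for the change to take effect.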

If pulling images doesn’t work, and you’re using a dodgy self-signed Docker registry (like me), you’ll need to whitelist it, as by default Docker does not permit insecure communication (such as untrusted SSL certs):

As root create /etc/docker/daemon.json, and pop the following into it:
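A minimal daemon.json that whitelists an insecure registry, using Docker's `insecure-registries` setting (the registry address is a hypothetical placeholder; use your own):

```shell
# Write the daemon config as root, then restart Docker to pick it up
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["registry.example.local:5000"]
}
EOF
sudo systemctl restart docker
```

Docker will then allow plain HTTP or untrusted-cert HTTPS when talking to the listed registries only; everything else still requires a valid certificate.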

Returning to the Rancher UI, you’ll see that the new cluster is in the Provisioning state. Be patient; this took about 10 minutes on my local setup.

This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready. Pulling image [rancher/hyperkube:v1.11.6-rancher1] on host [192.168.1.115]

You’ll know when RKE has finished provisioning: there will be dozens of containers running to support the k8s ecosystem, including controllers, schedulers, a network fabric, ingress controllers and much more.
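One way to eyeball this from a node is to list what Docker is running; the exact container names and image versions will vary with your RKE and Kubernetes release:

```shell
# List the containers RKE has started on this node
docker ps --format 'table {{.Names}}\t{{.Image}}'

# Rough health check: count Kubernetes-related containers
docker ps --format '{{.Names}}' | grep -Ec 'kube|etcd|rancher'
```

On a healthy worker node you should see the kubelet, kube-proxy, and network-fabric containers among them; control plane and etcd nodes run additional components.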