
CoreOS Blog


Running Kubernetes Example on CoreOS, Part 2

July 30, 2014

• By Kelsey Hightower

Edit: The most up-to-date Kubernetes + CoreOS guide can be found on the Kubernetes GitHub project.

In the previous post I outlined how to set up a single node Kubernetes cluster manually. This was a great way to get started and try out the basic Kubernetes examples. Now it's time to take it up a notch.

In this post I will demonstrate how to get Kubernetes installed on a three node CoreOS cluster running on VMware Fusion. The goal is to set up an environment that closely mimics a cluster of bare-metal machines supporting the Kubernetes networking model.

Before we jump into the tutorial we’ll take a moment to understand what Kubernetes is all about. I tend to think of Kubernetes as a project that builds on the idea that the datacenter is the computer, and Kubernetes provides an API to manage it all.

The deployment workflow provided by Kubernetes allows us to think about the cluster as a whole and less about each individual machine. In the Kubernetes model we don't deploy containers to specific hosts; instead, we describe the containers we want running, and Kubernetes figures out how and where to run them. This declarative style of managing infrastructure opens the door for large-scale deployments and self-healing infrastructure. Yeah, welcome to the future.

But how does it all work?

Kubernetes introduces the concept of a Pod, which is a group of dependent containers that share networking and a filesystem. Pods are great for applications that benefit from colocating services, such as a caching server or log manager. Pods are created within a Kubernetes cluster through specification files that are pushed to the Kubernetes API. Kubernetes then schedules Pod creation on a machine in the cluster.

At this point the Kubelet service running on the machine kicks in and starts the related containers via the Docker API. It’s the Kubelet’s responsibility to keep the scheduled containers up and running until the Pod is either deleted or scheduled onto another machine.
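To make this concrete, here is a minimal sketch of what such a specification file might look like, using the early v1beta1 API from this era; the id, image, and labels are illustrative, and the schema has changed considerably in later Kubernetes releases:

```json
{
  "id": "redis-master",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "redis-master",
      "containers": [
        {
          "name": "master",
          "image": "dockerfile/redis",
          "ports": [{ "containerPort": 6379 }]
        }
      ]
    }
  },
  "labels": { "name": "redis-master" }
}
```

Pushing a file like this to the apiserver is what triggers the scheduling and Kubelet behavior described above.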

Next, Kubernetes provides a form of service discovery based on labels. Each collection of Pods can have an arbitrary number of labels, which can be used to locate them. For example, to locate the Redis service running in production you could use the following labels to perform a label query:

service=redis,environment=production

Finally, Kubernetes ships with a TCP proxy that runs on every host. The TCP proxy is responsible for mapping service ports to label queries. The idea is that consumers of a service could contact any machine in a Kubernetes cluster on a service port and the request is load balanced across Pods responsible for that service.

The service proxy works even if the target Pods run on other hosts. It should also be noted that every Pod in a Kubernetes cluster has its own IP address and can communicate with other Pods within the cluster.

The Environment

The examples in this post were tested with VMware Fusion 6 on OS X 10.9.4, but should also work with other virtualization products.

The Network

The Kubernetes networking model aims to provide each Pod (container set) with its own IP address and supports cross-host communication. To make this work, create a custom VMware network, vmnet2, dedicated to containers. The vmnet2 network should have DHCP and NAT disabled to mimic a basic switch.
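On Fusion, one way to do this is to edit the networking preferences file and define the new network with its services switched off. Treat the keys below as a sketch; the exact entries vary between Fusion versions, so verify them against your own file:

```
# /Library/Preferences/VMware Fusion/networking
# (edit with Fusion shut down, then restart Fusion)
answer VNET_2_DHCP no
answer VNET_2_NAT no
answer VNET_2_VIRTUAL_ADAPTER no
```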

The second network interface for each VM will be connected to the vmnet2 network and will later become a member of the cbr0 bridge used by Docker.

Each VM plays a specific role in the Kubernetes cluster: either a Master or a Minion in Kubernetes terms. The Master runs the apiserver, controller manager, and optionally the kubelet and proxy servers. Minions, on the other hand, run only the kubelet and proxy servers. While you can have more than one Minion, there can only be a single Master per cluster; this is a Kubernetes limitation that will be removed in the near future.

Next, install CoreOS on each of the VMs. For this environment I followed the ISO installation method and performed a local install. At this point you should have a clean set of VMs and be ready to install Kubernetes.

Installing Kubernetes

The recommended way of setting up Kubernetes on CoreOS is via Cloud-Config. The cloud-config files used for this tutorial can be found on GitHub.

There is one cloud-config file for each VM:

master.yml

node1.yml

node2.yml

Using cloud-config files we can automate 100% of the Kubernetes installation and setup process. There are a number of items being set up and configured in each cloud-config file:

Set the hostname

Configure static networking for the primary interface

Set up the cbr0 bridge and assign it a subnet from the 10.244.0.0/16 range

Configure iptables to NAT non-container traffic through the primary interface

Configure Docker to use the cbr0 bridge and disable adding rules to iptables

Download and install the Kubernetes binaries

Install the systemd units for each Kubernetes service

Configure etcd to use a static set of peers
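The shape of those files is roughly as follows. This is an illustrative excerpt only; the hostname, bridge address, and unit contents here are assumptions, so refer to master.yml in the GitHub repo for the real configuration:

```yaml
#cloud-config
hostname: master
write_files:
  # Create the container bridge via systemd-networkd
  - path: /etc/systemd/network/cbr0.netdev
    content: |
      [NetDev]
      Name=cbr0
      Kind=bridge
  - path: /etc/systemd/network/cbr0.network
    content: |
      [Match]
      Name=cbr0

      [Network]
      Address=10.244.1.1/24
coreos:
  units:
    # Point Docker at cbr0 and stop it from managing iptables
    - name: docker.service
      command: start
      content: |
        [Service]
        ExecStart=/usr/bin/docker -d --bridge=cbr0 --iptables=false
```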

Customize the Cloud-Config Files

The cloud-config files should not be used as is; the following changes must be made:

Change the static host IP address throughout the cloud-config file.

Change the SSH authorized key.

Not excited about updating a bunch of cloud-config files? I figured as much, so I wrote kubeconfig to help you out. Instead of editing the cloud-configs manually you can use kubeconfig to generate the cloud-config files specifically for your environment.

Create Configuration Drives

While we could use the cloud-config files generated by kubeconfig directly, it’s easier to use config-drives to expose our cloud-configs to the VMs. Again we can use kubeconfig to automate this process for us.

kubeconfig -c config.yml -iso

The result of running the above command is a config-drive for all three VMs:

master.iso

node1.iso

node2.iso

Attach the related config-drive to each VM and boot them.

At this point you should be able to log in as the core user on each of your VMs. It will take a few minutes for the Kubernetes binaries to download and the related services to start.
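Logging in is plain SSH as the core user; the address below is a stand-in for whichever static IP you assigned to the Master in its cloud-config:

```shell
ssh core@172.16.12.10
```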

Use the systemd journal to monitor progress of the VMs after they boot up:

journalctl -f

You can check the status of each Kubernetes component with systemctl.
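For example, on the Master you can query each unit individually. The unit names below are assumptions based on the services described earlier; check `systemctl list-units` on your VMs for the exact names:

```shell
sudo systemctl status apiserver
sudo systemctl status controller-manager
sudo systemctl status kubelet
sudo systemctl status proxy
```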

You are now ready to start running the examples hosted on the Kubernetes GitHub repo.

Running Kubernetes Examples

You have two choices for running the examples: you can log on to the Master and run the kubecfg command from there, or you can download the kubecfg command-line tool to your local machine and execute commands remotely.
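Either way the invocations look like the following. The flags are from the 2014-era kubecfg tool and the master address is a placeholder, so adjust both for your setup:

```shell
# on the Master
kubecfg list /pods

# remotely, pointing at the Master's apiserver
kubecfg -h http://172.16.12.10:8080 list /pods
```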

Conclusion

This post demonstrated how to set up a CoreOS cluster running Kubernetes on virtual hardware that mimics a bare-metal infrastructure. Kubernetes consists of many moving parts, but as you can see, the installation and setup of everything can be fully automated when using CoreOS with Cloud-Config.

Kubernetes is still in the early stages and things are rapidly evolving. The good news is you’ll have the ability to help shape the direction of the Kubernetes project by kicking the tires and joining the community. CoreOS will be here to support you by providing the best platform for running Linux containers and hosting projects like Kubernetes.