Kubernetes Playground

Step 1 of 8

Introducing kubectl

Here we have a Kubernetes cluster with two nodes: one master node and one worker node. Both nodes have kubectl installed.

kubectl is the command line tool that we use to communicate with a Kubernetes cluster. You can view Kubernetes resources by running kubectl get <resource-type>. Let's take a look at the nodes in our cluster:

kubectl get nodes

You can see we have two nodes, one master and one worker.

Let's look at the components of the cluster:

kubectl get componentstatus

This shows us that etcd, the scheduler, and the controller-manager are all working correctly.

Let's see if there are any pods:

kubectl get pods

We haven't deployed any pods yet, so that's expected.

In fact, Kubernetes is already running some pods under the hood. A Kubernetes cluster can be split into different namespaces, so you can give each team its own namespace and they won't clash with one another.

If we look at Kubernetes' internal namespace, kube-system, by adding -n kube-system, we can see that Kubernetes is actually running several of its own components as pods. That's one of the neat things about Kubernetes: once you've bootstrapped a few core components, it can run the rest of its components itself.

kubectl get pods -n kube-system -o wide

Deploying pods

Generally you define the specification of Kubernetes objects in YAML files called manifests. Let's take a look at pod.yml:

cat pod.yml

The apiVersion and the kind define the type of object we want to create. We give it the name "devops". And then we have a spec for the containers to run inside the pod.

Here we run a single container, tfogo/devops, pulled from the default Docker Hub registry. It's a simple nginx container that serves some content on port 8080, so we expose port 8080 on the pod.
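The file isn't reproduced here, but a minimal pod.yml matching the description would look something like this (the container name is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: devops
spec:
  containers:
  - name: devops            # container name is an assumption
    image: tfogo/devops     # pulled from Docker Hub by default
    ports:
    - containerPort: 8080   # the port nginx serves on
```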

You can submit manifests to the Kubernetes API by using the kubectl apply command:

kubectl apply -f pod.yml

This will submit the manifest to the API. Kubernetes will then schedule this pod onto a healthy node in the cluster. Let's see how our pod is doing:

kubectl get pods

We can see detailed information about a specific resource by using kubectl describe. Let's take a look at our devops pod:

kubectl describe pod devops

The pattern here is kubectl describe <resource-type> <resource-name>. Here we see detailed information about the pod: where it's scheduled, its IP, its status, events in its history, and so on. Let's try curling its IP address on port 8080:

curl <pod-ip>:8080

You should see it output "Smooth DevOps".

You can also easily get the IP address from:

kubectl get pod devops -o wide

Notice how the Pod IP is accessible from any node in the cluster!

Requesting Resources

You can let Kubernetes know how much CPU and RAM your containers need, and you can also limit how much CPU and RAM they are allowed to consume. You do this by specifying a resources section in the pod manifest. Take a look at pod-with-resources.yml:

cat pod-with-resources.yml

Here we request a minimum of half a CPU core and 128MiB of RAM, and set an upper limit of one core and 256MiB of RAM. Since this manifest defines a pod with a new name, applying it creates a second pod alongside the original rather than updating it:
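As a sketch, the resources section described above would look something like this (values inferred from the text; Kubernetes expresses half a core as 500m, and mebibytes as Mi):

```yaml
resources:
  requests:
    cpu: "500m"      # half a core
    memory: "128Mi"
  limits:
    cpu: "1"         # one full core
    memory: "256Mi"
```

Requests are what the scheduler uses to place the pod on a node with enough capacity; limits are enforced at runtime.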

kubectl apply -f pod-with-resources.yml

You can see the new pod by listing all pods:

kubectl get pods

You can see more information about the new pod by running:

kubectl describe pod devops-with-resources

Deleting Pods

You can delete resources with the kubectl delete command:

kubectl delete pod devops-with-resources

Liveness Probes

Kubernetes watches pods to check if they are running or not. You can define exactly how Kubernetes determines if a pod is healthy by defining a liveness probe. See pod-with-liveness.yml for an example:

cat pod-with-liveness.yml

Here we define a liveness probe that makes an HTTP request to the /healthz endpoint. If it returns a success status (2xx or 3xx), Kubernetes considers the container healthy; if the probe fails repeatedly, Kubernetes restarts the container.
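A sketch of what the livenessProbe section in pod-with-liveness.yml might contain (the timing values are assumptions, not from the original file):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5   # wait before the first probe (assumed value)
  periodSeconds: 10        # probe interval (assumed value)
```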

Let's run this pod:

kubectl apply -f pod-with-liveness.yml

We can check on the pod, and find its IP, using:

kubectl get pods -o wide

And we can see what the /healthz endpoint returns:

curl <pod-ip>:8080/healthz

You can also define a readiness probe. Whereas a liveness probe determines whether a process is running correctly, a readiness probe tests whether the process is ready to start receiving traffic. For example, a pod may need to do a lot of setup after it starts before it is able to accept workloads.

This is useful so Services know which pods to send traffic to.

Labels and Annotations

When you have a lot of Pods in your cluster you want a way to be able to organize, sort, and identify your Pods. So Kubernetes allows you to add Labels to your resources. These are key/value pairs which are specifically used to identify resources. For example: version=2.0.3, env=production, app=frontend.

Labels are used by some resources to match and point to other resources. For example, Services point to a group of Pods that have a certain set of labels.

Annotations are also key/value pairs, but they are meant for non-identifying information. They are a place to store additional metadata about Kubernetes objects. Annotations are mostly used to store data that is useful for other tools and libraries but are sometimes used by Kubernetes itself.

If you're unsure about the difference between labels and annotations: labels are selectors, and annotations are all other metadata.

Take a look at pod-with-labels.yml:

cat pod-with-labels.yml

We add a couple of labels and an annotation to the Pod.
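A sketch of the metadata section in pod-with-labels.yml (the specific label and annotation keys and values are assumptions):

```yaml
metadata:
  name: devops-with-labels
  labels:                          # identifying key/value pairs, usable in selectors
    app: devops
    env: production
  annotations:                     # non-identifying metadata for tools and libraries
    example.com/owner: "devops-team"
```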

You can schedule this pod by running:

kubectl apply -f pod-with-labels.yml

And you can view this new pod (with its labels) by running:

kubectl get pods --show-labels

And delete the pod by running:

kubectl delete pod devops-with-labels

Deployments

Now you know how to deploy single Pods, but in the real world you want to deploy a set of Pods and have an easy way to scale and update that set. This is where the Deployment resource comes in.

To watch the pods change in real time, run this in a second terminal:

watch kubectl get pods -o wide

Let's look at the definition of a deployment:

cat deployment.yml

Here in the spec you'll notice we set replicas: 3 which will tell Kubernetes we want to maintain 3 replicas of this Pod. Let's apply this resource:
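The file isn't shown here, but a deployment.yml matching the description would look roughly like this (the label key/value and container name are assumptions):

```yaml
apiVersion: apps/v2
kind: Deployment
metadata:
  name: devops
spec:
  replicas: 3                # maintain 3 replicas of the pod template below
  selector:
    matchLabels:
      app: devops            # must match the template's labels
  template:
    metadata:
      labels:
        app: devops
    spec:
      containers:
      - name: devops
        image: tfogo/devops
        ports:
        - containerPort: 8080
```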

kubectl apply -f deployment.yml

We should see the pods spinning up. Scaling a deployment is easy: change the number of replicas and apply the manifest again.

When you make a change to the pods in a deployment, the deployment performs a rolling update, gradually replacing old pods with new ones. Deployments let you control exactly how the rolling update proceeds; for example, you can tell it to replace only one pod at a time.

Let's update the version of the container we're deploying to tfogo/devops:v2:

kubectl apply -f deployment-v2.yml

You can check the status of a deployment with this command:

kubectl rollout status deployment devops

Now let's check what version 2 of the pod does. Find the IP of one of the pods and curl it on port 8080:

kubectl get pods -o wide

curl <pod-ip>:8080

You can see the history of rollouts too:

kubectl rollout history deployment devops

If you want to roll back a rollout, you can always re-apply an old manifest, or use this handy command:

kubectl rollout undo deployment devops --to-revision=1

Services

So we have a pod with an IP that we can access from within the cluster. But one of the main points of Kubernetes is that a node can go down with all the pods on it, and Kubernetes will quickly bring those pods up again on another node (provided they're managed by a controller such as a Deployment).

This means that a pod's IP can change. And if you have multiple pods of the same application, how do you route traffic to that application?

This is where the Service object comes in. The Service is a layer in front of a group of pods. By default a service will create a "Cluster IP" which is a virtual IP that can be routed to within the cluster. Let's look at how to define a service:

cat service.yml

You define a selector, which is a set of labels: pods matching those labels are fronted by the service. You also define a port, the port you access on the cluster IP, and a targetPort, the port the pod is listening on.
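A sketch of what service.yml might contain. The selector label and the service port are assumptions (port 80 is inferred from the later curl to the cluster IP with no port given); targetPort 8080 matches the pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: devops
spec:
  selector:
    app: devops        # fronts all pods carrying this label
  ports:
  - port: 80           # port on the cluster IP (assumed)
    targetPort: 8080   # port the pod listens on
```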

This is a case of Kubernetes objects being loosely coupled. A service doesn't own pods. If you delete a service, the pods will remain. This makes it easier to carry out operational tasks such as moving pods to a different service.

Create the service by applying the yaml file:

kubectl apply -f service.yml

You can see the services here:

kubectl get services

Notice that we have two services: the one we just made, and the built-in kubernetes service, which fronts the kube-apiserver.

We can use the Cluster IP to access this service.

curl <cluster-ip>

Node Ports and Load Balancers

There are several different types of Services. We just saw the default: ClusterIP. But services can expose applications in more ways. If we look at service-node-port.yml we've added type: NodePort:

cat service-node-port.yml

Now when we apply this, the service will be assigned a port from the node-port range (30000–32767 by default). When you connect to any node in the cluster on this port, traffic is forwarded to the service.
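A sketch of service-node-port.yml, assuming the same selector and ports as the ClusterIP service above (both are assumptions carried over):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: devops
spec:
  type: NodePort       # the only change: expose the service on every node
  selector:
    app: devops
  ports:
  - port: 80
    targetPort: 8080
```

You can also pin the node port explicitly with a nodePort field under ports, as long as it falls within the allowed range.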

kubectl apply -f service-node-port.yml

kubectl get services -o wide

kubectl describe services devops

The NodePort line in the describe output shows the port on which the service is exposed on every node.

curl <node-ip>:<node-port>

On Katacoda, you can click the "+" icon to view a port on whichever host you want. Try that out now.

Load Balancers

By using type: LoadBalancer, Kubernetes will provision a cloud load balancer and point it to the Pods. This is useful for publishing your service externally.
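The only change from the NodePort service is the type field; a sketch, again assuming the selector and ports used above:

```yaml
spec:
  type: LoadBalancer   # asks the cloud provider for an external load balancer
  selector:
    app: devops
  ports:
  - port: 80
    targetPort: 8080
```

On a cloud provider, the load balancer's external IP appears in the service's EXTERNAL-IP column once provisioning completes.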

This won't work in this sandbox, however, since it can't provision a cloud load balancer.