Config files are located in ./resources. Please take time to review them.

You should already be familiar with Kubernetes Deployments by now, but this lab should help add some additional features and functionality to basic deployments.

First, let's make sure the environment is ready. Wait until the node Status is "Ready". To exit the below command use CTRL+C.

watch kubectl get nodes

Also, you should see "enabled" next to Dashboard, Heapster and Metrics-Server when you run this command:

minikube addons list

In this first section we will cover Resource Requests. Requests specify the minimum amount of CPU and memory that must be available on a node for a pod to be scheduled there. If no node has enough unreserved capacity to satisfy the request, the pod cannot be scheduled.
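Apply the two request deployments (their file names match the cleanup commands later in this section): deployment-cpu-requested.yaml and deployment-mem-requested.yaml in /root/resources. Each manifest presumably carries a requests stanza along these lines; the exact values shown here are assumptions for illustration, not the lab's actual numbers:

```yaml
# Hypothetical container spec from deployment-cpu-requested.yaml.
# The real request values in the lab files may differ.
resources:
  requests:
    cpu: "2"        # ask the scheduler to reserve 2 full CPUs
    memory: "128Mi" # and 128 MiB of memory
```

If no node in the cluster has that much unreserved capacity, the pod stays Pending and kubectl describe pod shows a FailedScheduling event.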

After applying the request deployments, you can see that both pods fail to schedule (they stay Pending) due to insufficient resources.

The takeaway here is that when thinking about setting the Resource Requests for your pods, make sure that they are realistic and will be able to be met. If there aren't any nodes that can meet the requests, the pods won't run.

Run the commands below to clean up this section, then proceed to Step 2.

kubectl delete -f /root/resources/deployment-cpu-requested.yaml

kubectl delete -f /root/resources/deployment-mem-requested.yaml

Resource Limits

In this section we will cover Resource Limits.

While Requests set the expected resources to be available to run, the Limits set the maximum amount of CPU or Memory that a pod will be able to consume.

In the CPU Limits deployment we set a limit of .5 CPU, but the process inside the pod will try to use a full CPU:

resources:
  limits:
    cpu: "500m"

In the Mem Limits deployment we set a limit of 100M, but the pod will try to use 150M:

resources:
  limits:
    memory: "100M"

Run the following deployments, then let's see how Kubernetes enforces these rules when the limits are exceeded.

kubectl apply -f /root/resources/deployment-cpu-limited.yaml

kubectl apply -f /root/resources/deployment-mem-limited.yaml

Check on the status of the pods. Initially you will see that the memory pod is ok, but soon it will show "OOMKilled" (Out Of Memory Killed). Use CTRL+C to exit.

watch kubectl get pods

Let's see the resources being used by the pods:

kubectl top pod

The CPU pod should show it is using 500m (millicores, or .5 of 1 CPU) even though the process running inside wants 1 whole CPU. The memory pod doesn't even have time to show metrics because it is constantly being OOMKilled and restarted.

Readiness Probes

Readiness Probes are for when an application might need to load large data or configuration files during startup, or depend on external services after startup. In such cases, you don’t want to kill the application, but you don’t want to send it requests either.

We added the following Readiness config:

tcpSocket:
  port: 8080
initialDelaySeconds: 10
periodSeconds: 10

Run two deployments: one has a Readiness endpoint that cannot be reached, and the other is reachable.
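The failing variant presumably differs only in the probe port. A sketch of what its probe might look like; the actual port number in the lab file is an assumption here:

```yaml
# Hypothetical readiness probe for the "fail" deployment:
# nothing listens on this port, so the check gets "Connection refused".
readinessProbe:
  tcpSocket:
    port: 8081  # placeholder wrong port; the container actually serves on 8080
  initialDelaySeconds: 10
  periodSeconds: 10
```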

Notice that the "fail" pod never becomes Ready because the Readiness check is getting "Connection refused" from the endpoint we defined. The pod stays in a not-ready state, receiving no traffic, while the probe keeps retrying the readiness port.

In this case, we intentionally defined an incorrect port, but it could fail for many reasons and you will need to investigate the container/pod to see why. It may be because this is an update to your deployment and the new image is not listening on the same port as the check was configured for.

The successful pod was reachable by the Readiness check and is now online.

Liveness Probes

Liveness probes differ from Readiness checks in that they are run continuously through the life of the Pod. Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.

We added the following to the deployment for the liveness probe. This can also be a TCP call:
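The probe config itself is not reproduced here, but given the kube-probe access-log line shown below it is most likely an httpGet probe. A minimal sketch; the path, port, and timings are assumptions, not the lab's exact values:

```yaml
# Hypothetical HTTP liveness probe; the actual values in the lab file may differ.
livenessProbe:
  httpGet:
    path: /   # kubelet sends GET / and expects a 2xx/3xx response
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```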

You can see that the "success" pod is working. If you check the logs of the successful pod, you will see log messages like "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.15" "-" , indicating that the kube-probe is checking it on a regular interval to make sure it is reachable and working as intended.

The failing pod is repeatedly restarted: the liveness probe cannot reach it, so each restart fails the check again. In a real scenario, a container may become unhealthy over time; the liveness probe will catch that, restart the container, and return it to a healthy state.

Rolling Updates

In this last section, we will take a look at Rolling Updates.

There are a number of update strategies including Rolling, Recreate, Blue/Green (via labels) and Canary.

We will cover the most common which is a Rolling Update. This allows you to control the number or percentage of pods that are scaled beyond what the deployment originally called for, the number of unavailable pods at any given time, and the length of the rollout.

When a deployment is updated, Kubernetes creates a new ReplicaSet and deploys the new pods in it. It then removes pods from the old ReplicaSet based on the options we chose in the strategy spec.

Note that in our plan we can only update 1 pod at a time, and we must wait 10 seconds after a new pod comes online before proceeding. This will need to be adjusted for larger deployments to match the acceptable number of pods offline and the overall time the rollout takes.
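The constraints just described would correspond to a strategy spec roughly like the following. The field values are inferred from the description; the lab's actual file may differ:

```yaml
# Hypothetical rollout settings matching "1 pod at a time, wait 10 seconds".
# Both keys sit at the spec level of the Deployment.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # at most 1 extra pod above the desired replica count
    maxUnavailable: 0  # never take a pod offline before its replacement is Ready
minReadySeconds: 10    # a new pod must stay Ready 10s before the rollout continues
```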

Next, let's use a YAML file to update the image and ConfigMap for the deployment. This is the preferred method as this is trackable via SCM.
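For example, bumping the image tag in the deployment manifest and re-applying the file is what triggers the rolling update. The container name and image tag below are placeholders, not the lab's actual values:

```yaml
# Hypothetical fragment of the deployment manifest; only the image line changes.
spec:
  template:
    spec:
      containers:
      - name: web          # placeholder container name
        image: nginx:1.17  # tag bumped from the previously deployed version
```

After editing, run kubectl apply -f against the file and Kubernetes rolls the pods over to a new ReplicaSet according to the strategy spec.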

Help

Katacoda offers an interactive learning environment for developers. This course uses a command line and a pre-configured sandboxed environment. Below are useful commands when working with the environment.

cd <directory>

Change directory

ls

List directory contents

echo 'contents' > <file>

Write contents to a file

cat <file>

Output contents of file

Vim

In certain exercises you will be required to edit files or text. The best approach is to use Vim. Vim has two modes: one for entering commands (Command Mode) and one for entering text (Insert Mode). You switch between these modes depending on what you want to do. The basic commands are: