Best Practices for Kubernetes Pods

Written by Stefan Thorpe

Cloud computing is one of the most active and significant fields in modern computer science. For both businesses and individuals, it enables the kind of interconnectedness and productivity gains that can completely transform the way we work and live. However, as anyone who uses cloud services professionally can attest, simply being on the cloud isn’t enough. To utilize this technology to its full potential, businesses need to carefully consider the exact setup they use.

Compatibility is one of the biggest challenges that any dynamic IT system faces. In situations where new products, hardware, and software are regularly being introduced into the ecosystem, all it takes is one incompatible component to completely disrupt the workflow. For a long time, the standard solution to this problem was to use virtual machines.

Except now there’s a more modern solution. Containers are, in essence, lightweight mini-VMs that package an application together with its dependencies. Docker is probably the best-known container platform for Linux, and Microsoft has been expanding Windows container support on Azure as well. Kubernetes, or K8s, is Google’s answer to container orchestration: it runs on every major cloud platform as well as on-premise and hybrid infrastructure, and is compatible with every OS (to some degree). Hence it is becoming the go-to choice for container orchestration. A Kubernetes Pod is a group of one or more containers deployed together on a single host, sharing network and storage resources so that they can work together more efficiently.
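To make the concept concrete, here is a minimal sketch of a Pod manifest; the names and image are illustrative placeholders, not part of the original article.

```yaml
# A minimal Pod manifest; names and image are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx:1.25      # any container image your application needs
      ports:
        - containerPort: 80  # port the container listens on
```

Applying this with `kubectl apply -f pod.yaml` schedules the Pod’s containers together on a single node, where they share a network namespace and can reach each other over localhost.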

This is a very powerful concept in container management and orchestration. However, as with any technology, it’s the implementation that is the crucial factor in determining its value to your business. By adhering to the following best practices, you can utilize the massive potential behind Kubernetes to its fullest effect.

Keep the Image You Use Small

Before you start looking around for base images, you should have a good idea of what you need to get out of your final setup in terms of functionality. If you only have a vague idea, try to refine it as much as possible before you begin searching for base images to use. This will allow you to analyze potential packages in detail and make sure that they contain what you need, with as little excess as possible.

If the app you need is only 15 MB in size, it would be a waste of resources to use an image with a 600 MB library. Of course, you will have to contend with some excess in most situations. However, the smaller the image you can use, the faster your container will build, the less space it will require, and, often, the smaller its attack surface, which enhances your overall security.

Use a Single Container Unless…

For the most part, it makes sense for a Pod to run as an abstraction over one container only. However, you can run multiple containers in a single Pod to tightly couple helper processes to your primary application container, such as a sidecar for log monitoring.

Running multiple containers in a Pod is also a viable approach when using a service mesh to connect, manage, and secure microservices, as the mesh’s proxy containers intercept all communication between the individual microservice components.
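A sketch of the log-monitoring sidecar pattern described above, with hypothetical names: the main container writes logs to a shared volume, and a helper container tails them.

```yaml
# Hypothetical example: a main container plus a log-monitoring sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
    - name: app-logs
      emptyDir: {}          # shared scratch space, lives as long as the Pod
  containers:
    - name: app
      image: my-app:1.0     # placeholder image name
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-tailer
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
```

Both containers are scheduled onto the same node and share the volume; this tight coupling is exactly what Pods are designed for.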

Double Check Your Base Images

Many of the most common mistakes people make when using Kubernetes occur when they are selecting the base image to build their container from. For example, if you only glance at the base image and see that it appears to contain the package you want but don’t look any further, you are setting yourself up for a potential disaster. You could find yourself with the wrong version of the package you need which will throw up numerous compatibility and functionality issues. Worse still, the image could contain malware, spyware, or dreaded ransomware. Any malicious content on a corporate network is cause for serious concern.

As with any other piece of software you install on your network, you should first use programs, such as CoreOS’ Clair or Banyan Collector (though GKE and Docker have scanning built into their dashboards already), to run a static analysis on your container and check for any vulnerabilities.

Use a Non-Root User Inside Each Container

Installing or updating packages within your container requires root privileges. However, once those updates are complete, you need to make sure that you switch to a non-root user. This is an important security consideration: if an intruder gains access to a container that is running as root, they will have all the control and permissions they need to wreak havoc.

Worse still, they could escape from the container and begin to interfere directly with the host machine. Running as a non-root user means that, should an intruder gain access to your container, the amount of damage they can do will be limited. They would need to perform a second, much harder exploit to gain root access.
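One way to enforce this at the Pod level is with a securityContext that refuses to run the container as root; the Pod and image names below are placeholders.

```yaml
# Illustrative Pod spec that refuses to run its container as root.
apiVersion: v1
kind: Pod
metadata:
  name: non-root-example
spec:
  securityContext:
    runAsNonRoot: true   # kubelet rejects the container if it would run as UID 0
    runAsUser: 1000      # run the container process as this non-root UID
  containers:
    - name: app
      image: my-app:1.0  # placeholder image name
```

With `runAsNonRoot: true` set, even an image that defaults to root will fail validation rather than start with full privileges.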

Use Namespaces and Labels

Distinctly define and structure Namespaces in Kubernetes clusters and label your Pods for long-term maintainability. Each Namespace is essentially a virtual cluster inside your Kube cluster, logically isolated from the others. Namespaces are a significant help when it comes to organization, security, and operations. Creating and using Namespaces segments your Services into more manageable bite-sized chunks, which can improve performance since the Kube API has smaller sets of objects to handle.

Labels help organize and select (or deselect) subsets of objects as necessary. Outlining a good label policy (one that clearly defines the use case of your object—Pods in this instance) and being disciplined about sticking to it will save you time, energy, and a lot of headaches in the long run.
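For example, a Namespace and a consistently labeled Pod might look like the following; all names here are hypothetical, chosen only to illustrate a label policy.

```yaml
# Hypothetical Namespace and labeled Pod illustrating a simple label policy.
apiVersion: v1
kind: Namespace
metadata:
  name: team-billing
---
apiVersion: v1
kind: Pod
metadata:
  name: invoice-worker
  namespace: team-billing
  labels:
    app: invoice-worker         # what the Pod runs
    tier: backend               # where it sits in the stack
    env: staging                # which environment it belongs to
spec:
  containers:
    - name: worker
      image: invoice-worker:2.3  # placeholder image name
```

With labels in place, selecting subsets becomes trivial, e.g. `kubectl get pods -n team-billing -l tier=backend,env=staging`.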

Services and Pods

Services are Kubernetes’ answer to how a Pod running on one cluster node can communicate with a Pod running on another, without the sender having to track the receiver’s ephemeral Pod network IP address.

Kubernetes’ Services establish a well-defined endpoint for Pods that remains stable even if a Pod is resurrected or relocated to another node. Services also enable load balancing across a set of server Pods, so that client Pods can operate durably and independently.
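A sketch of a Service fronting a set of labeled Pods; the name, label, and ports are assumptions for illustration.

```yaml
# Illustrative Service: routes traffic to any Pod labeled app=backend,
# giving clients one stable endpoint regardless of individual Pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend       # traffic is load-balanced across matching Pods
  ports:
    - port: 80         # port clients connect to
      targetPort: 8080 # port the Pods actually listen on
```

Because the Service matches Pods by label rather than by address, Pods can come and go without clients ever noticing.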

Always start your Services before starting your Pods. This is because Kubernetes injects environment variables into each container it starts, identifying the Services that were running at that moment; a Pod started before its Service won’t see those variables. However, when writing code that talks to a Service, prefer its DNS name instead, as it’s a much more flexible way of reaching your Services. Lastly, don’t define a hostPort for a Pod unless you really need to, as it limits the places the Pod can be scheduled.
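For instance, assuming a Service named `backend` exists in the `default` Namespace (a hypothetical name), a client Pod can reach it by DNS name rather than by environment variables or IP:

```yaml
# Illustrative client Pod that resolves the Service by DNS name.
apiVersion: v1
kind: Pod
metadata:
  name: backend-client
spec:
  containers:
    - name: client
      image: curlimages/curl:8.8.0
      # The short name "backend" also works within the same Namespace;
      # the fully qualified form shown here works from any Namespace.
      command: ["curl", "-s", "http://backend.default.svc.cluster.local/"]
  restartPolicy: Never
```

The DNS name stays valid no matter when the client starts, which is why it is preferable to the Service environment variables.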

Familiarize Yourself with Kube Components

Kubernetes contains a multitude of components that can be used to enhance the performance, reliability, and security of your setup. You should take the time to read through the extensively helpful Kubernetes documentation and familiarize yourself with these components. Naturally, you are unlikely to use all of them, but there are some which will be useful for any setup.

For example, the Kube scheduler can take charge of scheduling your Pods so that they run on the most appropriate node. When the scheduler detects a new Pod, it checks whether that Pod has already been assigned to a node. If not, the scheduler automatically assigns it to a suitable one.
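You can also give the scheduler hints. As one sketch (the label and image names are hypothetical), a nodeSelector restricts a Pod to nodes carrying a particular label:

```yaml
# Illustrative scheduling hint: only nodes labeled disktype=ssd qualify.
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-app
spec:
  nodeSelector:
    disktype: ssd        # scheduler only considers nodes with this label
  containers:
    - name: app
      image: my-app:1.0  # placeholder image name
```

Note that, much like the hostPort caveat above, narrowing where a Pod may run should be done only when the workload genuinely requires it.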

Once you begin to understand what these various components can do, you can configure your system to behave in a more automated, secure, and efficient way.

When used correctly, and in line with best practices, Kubernetes Pods can completely transform the way your business approaches and utilizes virtualized infrastructure. By bundling containers together within Pods, it is much easier to have different processes and services working in concert to achieve unified results.

Caylent offers DevOps-as-a-Service to high growth companies looking for help with microservices, containers, cloud infrastructure, and CI/CD deployments. Our managed and consulting services are a more cost-effective option than hiring in-house and we scale as your team and company grow. Check out some of the use cases and learn how we work with clients by visiting our DevOps-as-a-Service offering.