Migrating and Managing Containers with Kubernetes – Part 2

Don’t forget Part 1, which covers the basics of Docker containers and the core purpose of Kubernetes.

Understanding Kubernetes Capabilities

Kubernetes gives IT teams a wide range of opportunities to streamline their container setups, but doing so requires a clear understanding of the tools and capabilities offered within the open source platform. It’s vital to understand both the overarching concepts within Kubernetes and the nuanced functions it makes possible.

Key concepts to understand include:

1. Pods

A pod is the smallest deployable unit in Kubernetes. It is a group of one or more Docker containers designed to operate with shared storage and a shared network configuration. A pod carries rules that determine how its containers work in concert with one another and ensure they remain interconnected and coupled as they are moved throughout the configuration. Generally speaking, pods are best suited to tightly integrated application stacks whose components need to run and scale together.
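As a sketch of the idea, a minimal pod manifest might group two containers around a shared volume so one serves traffic while the other reads its logs. All names and images here are illustrative, not a prescribed setup:

```yaml
# Hypothetical pod: a web server and a log-tailing sidecar
# sharing storage (an emptyDir volume) and a network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
  volumes:
  - name: shared-logs
    emptyDir: {}
```

Because both containers belong to one pod, Kubernetes always schedules them onto the same node and moves them as a single unit.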

2. Nodes

A node is one level up from a pod. It can be a virtual or physical machine, for example. Regardless of the specific infrastructure setup, a node provides all of the services and components needed to run pods. As such, a node is where you can manage the status, addresses, storage capacity and metadata of the pods scheduled onto it.

While pods are created by Kubernetes itself to manage groups of related containers automatically, nodes need to be provisioned externally. For example, Google Compute Engine can be used to create the virtual machines that serve as nodes, providing overarching governance and management over a number of related, but not interconnected, pods.

3. Services

Services, as defined in the Kubernetes framework, don’t sit inside the pod-and-node hierarchy. Instead, a service represents a logical way of connecting groups of containers, in a manner similar to pods. The key distinction is that pods are ephemeral: they can be destroyed and recreated based on the operational demands of the system, while a service persists.

As pods are created, moved and restructured, a gap can open between how the back-end system manages the configuration and how the front end views the collection of containers. Services fill this gap by creating a logical abstraction over interconnected containers: a service defines a set of pods, typically via labels, and presents them through a single, stable endpoint. In essence, as pods go through moves, adds and changes, they remain reachable through the service, so the pods themselves don’t have to be linked together in the back end to be coordinated effectively.
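A minimal service manifest illustrates the label-based selection described above. The names and ports are illustrative; the key point is that the selector, not any fixed pod identity, determines which pods receive traffic:

```yaml
# Hypothetical service: routes traffic to every pod labeled app=web,
# no matter how often those pods are destroyed and recreated.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # matches pods by label, not by name or address
  ports:
  - port: 80          # port the service exposes
    targetPort: 8080  # port the matched pods listen on
```

Any pod carrying the `app: web` label, including replacements created after a failure, is automatically added to the service's endpoints.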

4. Master instances

Kubernetes allows users to establish master control planes that create policies and procedures for operations within a cluster of machines. Nodes are organized into related clusters and managed within master instances, creating a clear hierarchy that runs from the control plane all the way down to pods and individual containers.

These four concepts make up the primary architecture of Kubernetes. Organizations can seamlessly manage network and microservice strategies within this framework, assigning resource availability based on how various containers, pods and nodes fit within the larger picture.

The above hierarchy defines much of how Kubernetes functions, but a variety of other tools and capabilities are worth mentioning:

Application deployment tools: The cluster, node/service and pod configuration of Kubernetes allows for easy scaling in application deployment. In the Google Kubernetes Engine system, users need only package the app into a Docker image, upload the image to the registry, create the container and deploy it to a cluster. From there, it’s simply a matter of exposing the app to the internet, scaling the container setup and deploying.
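The "deploy to a cluster and scale" step typically takes the form of a Deployment manifest. This is a hedged sketch; the image path and names are hypothetical placeholders, not a real project:

```yaml
# Hypothetical deployment: runs three replicas of a containerized app
# previously pushed to an image registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                  # scale by changing this number
  selector:
    matchLabels:
      app: hello
  template:                    # pod template stamped out per replica
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/my-project/hello-app:1.0  # hypothetical image path
        ports:
        - containerPort: 8080
```

Exposing the app to the internet is then a matter of putting a service of type LoadBalancer in front of the pods this deployment creates.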

Jenkins: Jenkins is a continuous integration and delivery automation server that uses a master-and-agent architecture. It lets users quickly build an app in a lab setting, test it and push it directly into a container registry, moving from dev to test to production efficiently enough to support continuous delivery.

Monitoring: Google provides dedicated Stackdriver Monitoring tools that sit above Kubernetes Engine clusters to provide performance metrics and reporting within a Kubernetes configuration. This includes custom metrics and autoscaling based on performance data.
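Autoscaling on performance data is usually wired up with a HorizontalPodAutoscaler. A sketch under assumed names (the target deployment `hello-app` is a placeholder) might look like:

```yaml
# Hypothetical autoscaler: keeps average CPU utilization near 70%
# by resizing the hello-app deployment between 2 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```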

Helm Package Management: As Kubernetes configurations are hierarchical in nature, strong packaging and documentation go a long way toward helping organizations optimize and manage them. Helm is the dedicated Kubernetes package manager; it bundles related resources into versioned "charts" that make it much easier to define, install and upgrade Kubernetes applications.
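Each Helm chart is described by a small metadata file. A minimal sketch, assuming a hypothetical chart that packages the manifests for a single app:

```yaml
# Chart.yaml for a hypothetical chart named "hello-app".
# The chart directory would also hold templated manifests under templates/.
apiVersion: v2
name: hello-app
description: Packages the deployment, service and autoscaler for one app
version: 0.1.0       # version of the chart itself
appVersion: "1.0"    # version of the application being deployed
```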

Securing Kubernetes: Google Kubernetes Engine provides protections at multiple layers in the stack, from the container image to the runtime, cluster and API service. Protections can be layered across the configuration using Cloud Identity and Access Management, role-based access control and similar methods to safeguard data.
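Role-based access control, for instance, is expressed as Kubernetes objects in their own right. A hedged example with illustrative names (the `production` namespace and `pod-reader` role are assumptions):

```yaml
# Hypothetical RBAC role: grants read-only access to pods
# within a single namespace, and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
- apiGroups: [""]                      # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]      # read-only operations
```

A separate RoleBinding object then attaches this role to specific users, groups or service accounts.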

Getting the Most From Kubernetes

Google provides a wide range of tools, pre-built systems and capabilities that promote more advanced monitoring, management and security for Kubernetes. But the container management solution still creates a wide range of challenges that companies must address. In many cases, these difficulties stem from the flexibility and freedom that containerized architectures create.

The Google Cloud represents a prime environment to host Kubernetes systems. Major benefits of running Kubernetes in the Google Cloud include:

Support for hyperscale computing: The Google Kubernetes Engine empowers organizations to create hyperscale configurations while still tightly segmenting assets by department and user group. The result is scalability that is far more manageable than what typical cloud setups allow.

Integration with Google Identity and Access Management: By default, Kubernetes will use passwords, usernames and similar basic authentication methods. Using Google Kubernetes Engine lets you integrate with Google Cloud Identity and Access Management (IAM) to enact robust authentication features and leverage MasterAuth capabilities.

Determinism for hybrid and multi-cloud environments: Google Kubernetes Engine gives IT teams tools to configure multi-cloud environments capable of supporting a wide range of deployment types in concert with one another.

Performance through robust management: The Google Cloud Platform is deeply optimized for Kubernetes systems, so much so that Google devotes considerable in-house engineering resources to optimize and oversee Kubernetes Engine configurations, creating a considerable performance and reliability boost.

About Dito

As a Google Cloud Premier Partner, Dito is uniquely positioned to work across all of Google Cloud’s product and service lines, simplifying access to and serving as a single vendor for all of your Google Cloud needs and additional partner solutions.