DATA CENTERS

8 Things to Know About the Container Stack

Get up to speed on the rapidly evolving world of containers.

As the container development model goes mainstream, the container stack itself is evolving. Now that businesses see the value in containers, development and business focus is moving away from the engine and toward adding more sophisticated capabilities that deliver more direct benefit to the business. Indeed, in just a few short years, containers have moved from a technological “wild west” of competing technologies, fractured communities, and no governance to a more genteel, commoditized IT package, complete with standards driven by the cross-vendor Open Container Initiative.

At the most basic level, Linux containers allow companies to package and isolate applications with their entire runtime environment -- all the files the application needs to run. This makes it easy to move the containerized application among environments (dev, test, production, etc.) while retaining full functionality. Linux containers also help reduce conflicts between development and operations teams by separating areas of responsibility: developers can focus on their apps, and operations can focus on the infrastructure.
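As a concrete (and deliberately simple) sketch, a container image definition bundles an application together with its runtime environment. The base image, file names, and command below are illustrative, not a prescription:

```dockerfile
# Base image supplies the OS userland and the language runtime
FROM python:3-slim

WORKDIR /app

# Copy in the application and everything it needs to run
COPY requirements.txt app.py ./
RUN pip install -r requirements.txt

# The same image -- and therefore the same behavior -- moves
# unchanged through dev, test, and production
CMD ["python", "app.py"]
```

Building this (for example, with `docker build`) produces a single image that behaves identically wherever it runs, which is what makes moving among environments painless.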

Moreover, because Linux containers are based on open source technology, organizations get the latest and greatest advancements as soon as they’re available. Container technologies such as CRI-O, Kubernetes, and Docker help your team simplify, speed up, and orchestrate application development and deployment.

In fact, containers are at an evolutionary point in IT where they are ceasing to be the innovation; instead, they are becoming the platform for innovation. From composite applications and microservices to rapid application development and various permutations of DevOps, enterprise IT is now at another inflection point: Where do we go from here?

Here are eight things organizations and developers need to know now about the container stack and how it’s changing.

(Image: Ratchat/Shutterstock)

The elements

To run containers, you need an entire enterprise container infrastructure: Linux, a container runtime, and orchestration. The stack often comprises LDK--Linux, Docker, and Kubernetes--but that’s starting to evolve as container technology advances. Docker is not the only general-purpose container runtime in town; new ones seem to pop up daily. Specialized container runtimes are also being developed that let users push the boundaries--for example, running containers within isolated virtual machines.

(Image: Red Ivory/Shutterstock)

Why Kubernetes is king

Container technology enables companies to more effectively and efficiently develop, test, deploy and maintain applications--the competitive lifeblood of any organization today. But containers, by nature, have a lot of moving parts, and most companies have a lot of containers at this point. Kubernetes orchestration--the open-source system for automating deployment, scaling, and management of containerized applications--enables companies to get their arms around containers. There are other options out there, most notably, Mesos and Swarm, but Kubernetes has become not just a de facto standard, but a standard upon which other standards are being built.
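To illustrate what “getting your arms around containers” looks like in practice, here is a minimal, hypothetical Kubernetes Deployment; the names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three copies running,
  selector:                    # restarting or rescheduling them as needed
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Once this is applied (for example, with `kubectl apply`), Kubernetes continuously reconciles the cluster’s actual state to this desired state -- that reconciliation loop is the “orchestration” the article describes.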

(Image: welcomia/Shutterstock)

The role of CRI and CRI-O

Last year, the Kubernetes project introduced its Container Runtime Interface (CRI), a plugin interface that gives the kubelet -- the cluster node agent that creates pods and starts containers -- the ability to use different OCI-compliant container runtimes without needing to recompile Kubernetes. Building on that work, the CRI-O project, originally known as OCID, provides a lightweight runtime for Kubernetes. CRI-O enables developers to run containers directly from Kubernetes; as long as the container image complies with the OCI standard, CRI-O can run it. This eliminates the need for a separate runtime, reducing complexity and increasing reliability.
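As a sketch of how this wiring looks on a cluster node, the kubelet can be pointed at CRI-O’s CRI endpoint. The drop-in file path and socket location below are typical but vary by distribution, and the flag names reflect the Kubernetes releases current as of this writing:

```ini
# /etc/systemd/system/kubelet.service.d/10-crio.conf  (path is illustrative)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/crio/crio.sock"
```

No Kubernetes recompilation is involved: the kubelet simply speaks CRI over that socket, and CRI-O answers.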

(Image: MaLija/Shutterstock)

Container standards

If there was ever a sign that containers are no longer bleeding-edge, it’s the level of standardization we are seeing with the technology. That’s a good thing for wider and more effective adoption of containers in the enterprise, but it’s important for companies and development teams to be able to put those standards in context to make the right decisions about the container stack moving forward. Standards to keep an eye on now include the Open Container Initiative (OCI) Image Specification (version 1.0.1 of which was just released); the OCI Runtime Specification (also version 1.0.1); the Kubernetes Container Runtime Interface (CRI); the Container Network Interface (CNI); and the Docker Container Network Model (CNM).
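To give one of these standards some shape: a CNI network configuration is just a small JSON document handed to a networking plugin. The network name, bridge name, and subnet here are illustrative:

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

Any CNI-conformant runtime can consume a configuration in this format and delegate interface setup to the named plugin -- which is precisely the interoperability these standards are meant to buy.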

(Image: Comanicui Dan/Shutterstock)

The role of various open source projects

Kubernetes, which was developed by Google engineers based on an internal platform, is arguably one of the most active open-source projects in the container world right now. However, while Kubernetes may be one of the most visible open-source container projects, there are dozens of others that are moving the container stack forward. For example, Origin is the upstream project for OpenShift, which enables organizations to develop, deploy and manage containers. Organizations and individuals alike can more fully leverage containers -- and the expertise and experience of those who have been there, done that -- by becoming active contributors to one or more of these communities.

(Image: FreePhotos/Pixabay)

Public cloud integration

As companies expand their use and application of the container stack, it makes sense to leverage the public cloud. For example, many AWS services are now accessible through the Service Broker interface built into OpenShift Container Platform. This makes it easier for AWS users to configure and deploy services from within OpenShift and provides a single, scalable application definition, based on Kubernetes, for enterprise needs.

(Image: Krisda Ponchaipulltawee/Shutterstock)

“Boring” can be a good thing

IT pros today don’t think about the Linux kernel. Why? It’s boring. That is, it just does what it’s supposed to do, when and how it’s supposed to do it. The real innovation is happening above the kernel level. The same is true today for the container runtime interface. Indeed, when you think about the container stack today, the value is no longer at the runtime level. The runtime has become a boring commodity, which means organizations don’t have to worry about it or devote resources to it. Instead, they can focus on how to innovate through the development of the container ecosystem and the use of containers in their own organizations.

(Image: despoticlick/Pixabay)

Application development redefined

The potential for Linux containers goes much deeper than redefining development or even operations. They are also fueling the redefinition of the application itself. Thanks to containers, the monolithic application stack as we know it can be broken into dozens or even hundreds of tiny, single-purpose applications that, when woven together, perform the same function as the traditional application. Each of these individual pieces can be rewritten, reused, and managed far more efficiently than a monolithic application, delivering a truly composite application built entirely of microservices that can readily scale to meet demand. While a microservices architecture does not dictate the use of containers, most organizations moving to microservices will find containers to be a more agreeable way to implement their applications.
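A hypothetical Docker Compose file makes the decomposition tangible -- a few single-purpose services standing in for one monolith. The service and image names are placeholders:

```yaml
version: "3"
services:
  frontend:                          # the user-facing slice of the former monolith
    image: example.com/frontend:1.0
    ports:
      - "8080:80"
  orders:                            # each backend piece is its own small,
    image: example.com/orders:1.0    # independently deployable service
  inventory:
    image: example.com/inventory:1.0
```

Because each piece is independent, one hot service can be scaled (for example, `docker-compose up --scale inventory=3`) or rewritten without touching the others.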

(Image: Kitiphong Pho/Shutterstock)
