Red Hat Linux Containers: Not Just Recycled Ideas

Red Hat and its partner, Docker, bring DevOps characteristics to Linux containers, making them lighter-weight vehicles than virtual machines for cloud workloads.

Some people accuse Red Hat of dusting off an old idea, Linux containers, and presenting them as if they were something new. I'll acknowledge that Sun Microsystems offered containers under Solaris years ago, and that the concept isn't new. But Docker and Red Hat together have brought new packaging attributes to containers, making them an alternative that's likely to exist alongside virtual machines for moving workloads into the cloud.

And containers promise to fit more seamlessly into a DevOps world than virtual machines do. Containers can provide an automated way for a workload's components to receive patches and updates -- without a system administrator's intervention. A workload sent out to the cloud a month ago may have had the Heartbleed vulnerability. When the same workload is sent in a container today, it's been fixed, even though a system administrator did nothing to correct it. The update was supplied to an open-source code module by the party responsible for it, and the updated version was automatically retrieved and integrated into the workload as it was containerized.
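The mechanics behind that kind of automatic update can be sketched with a minimal container recipe. This is an illustrative assumption, not Red Hat's actual workflow -- the image name `fedora:20` and the `my-app` binary are hypothetical -- but it shows the principle: rebuilding the image after upstream patches land pulls the fixed packages in with no change to the recipe itself.

```shell
# Hypothetical sketch: write a minimal Dockerfile for a workload.
# Rebuilding this image after the distribution repositories are patched
# produces a container with the fixed OpenSSL, with no admin action.
cat > Dockerfile <<'EOF'
FROM fedora:20
# Pulls whatever OpenSSL version is current in the repos at build time --
# a post-Heartbleed build gets the patched package automatically.
RUN yum -y update openssl && yum clean all
COPY my-app /usr/local/bin/my-app
CMD ["/usr/local/bin/my-app"]
EOF
# On a Docker host, rebuild to pick up the fix:
#   docker build -t my-app .
```

The recipe stays constant; the freshness of the packages it pulls is what changes between builds.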

That's one reason why Paul Cormier, Red Hat's president of products and technologies, called containers at the Red Hat Summit this week an emerging technology "that will drive the future." He didn't specifically mention workload security; rather, he cited the increased mobility a workload gains when it's packaged inside a container. In theory at least, a containerized application can be sent to different clouds, with the container interface navigating the differences. The container checks with the host server to make sure it's running the Linux kernel that the application needs. The rest of the operating system is resident in the container itself.

Is that really much of an advantage? Aren't CPUs powerful enough and networks big enough to move the whole operating system with the application, the way virtual machines do? VMware is betting heavily on the efficacy of moving an ESX Server workload from the enterprise to a like environment in the cloud, the vCloud Hybrid Service. No need to worry about which Linux kernel is on the cloud server. The virtual machine has a complete operating system included with it.

But that's one of the points in favor of containers, in my opinion. Sun used to boast about how many applications could run under one version of Solaris. In effect, all the containerized applications on a Linux cloud host share the host's Linux kernel and provide the rest of the Linux user-mode libraries themselves. That makes each container a smaller, less-demanding workload on the host and allows more workloads per host.
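The kernel-sharing point is easy to verify firsthand. A minimal sketch, assuming a Linux machine with Docker installed (the `fedora:20` image is an illustrative choice):

```shell
# Containers share the host's kernel rather than bringing their own.
# Print the host's kernel release:
uname -r
# On a Docker host, the same command run inside any container reports the
# identical kernel release, because the container has no kernel of its own:
#   docker run fedora:20 uname -r
# Only user space (libraries and binaries) lives inside the container image,
# which is why containers are so much lighter than full virtual machines.
```

A VM, by contrast, would report whatever kernel its own guest OS booted.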

Determining how many workloads per host is an inexact science. It will depend on how much of the operating system each workload originator decided to include in the container. But if a disciplined approach was taken and only the needed libraries were included, then a host server that can run 10 large VMs would be able to handle 100 containerized applications of similar caliber, said Red Hat CTO Brian Stevens Wednesday in a keynote at the Red Hat Summit.

It's the 10X efficiency factor, if Stevens is correct, that's going to command attention among Linux developers, enterprise system administrators, and cloud service providers. Red Hat Enterprise Linux is already a frequent choice in the cloud. It's not necessarily the first choice for development, where Ubuntu, Debian, and SUSE may be used as often as Red Hat. When it comes to running production systems, however, Red Hat rules.

Red Hat has produced a version of Red Hat Enterprise Linux, dubbed Atomic Host, geared specifically to run Linux containers. Do we need another version of RHEL? Will containers really catch on? Will Red Hat succeed in injecting vigor into its OpenShift platform for developers through this container expertise?

We shall see. But the idea of containers addresses several issues that virtualization could not solve by itself. In the future, containers may be a second way to move workloads into the cloud when certain operating characteristics are sought, such as speed of delivery to the cloud, speed of initiation, and concentration of workloads using the same kernel on one host.


Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive ...

Thanks to Krishnan Subramanian, director of OpenShift strategy at Red Hat, for his comment, which appears for the time being to close out this debate over the value of containers and their future. Stay tuned. This is sure to be a topic of future discussion.

Disclaimer: I am a Red Hat employee but this comment is based on my understanding of the technology and industry. It cannot be construed as a statement from my employer.

There seem to be a lot of misconceptions about containers. Leaving aside FUD about Red Hat joining Cloud Foundry, I want to address some real points about containers.

1) Red Hat never said containers are new. In fact, it is well known that containers have been around for a long time. Red Hat itself has been using containers inside OpenShift for a long time. What is different now is that Red Hat is embracing Docker for its containers rather than doing something on its own. Why reinvent the wheel when Red Hat can work with Docker to do containers right? Also, unlike others in the industry, Red Hat is not new to open source. OSS is part of Red Hat's DNA. That is why it joined the OpenStack Foundation and contributed a large number of resources to the project, and it is doing something similar with Docker.

2) Charlie has already explained containers well. I don't want to repeat that here. In short, containers are about efficiency and scale -- not just developer efficiency but ops efficiency too.

3) The innovation Red Hat has been highlighting is not about Docker per se but about how to make containers aware of other containers and how orchestration can be done more effectively. That is new, and Red Hat has done a great job of bringing it to the OSS community.

4) There seems to be a complete misunderstanding of the underlying technology in one of the comments. The comment dismisses Project Atomic as unnecessary on one hand and praises CoreOS as revolutionary on the other. Both CoreOS and Project Atomic are doing similar things: acting as an underlying host for containers. CoreOS is doing a great job, and Project Atomic is doing something similar, starting from well-trusted Red Hat Enterprise Linux as its base. Calling it a stripped-down version of RHEL is wrong.

Renat Khasanshryn, CEO of Altoros, in Sunnyvale, Calif., and Minsk, Belarus, sent these comments, which I'm adding to the stream. Altoros, among other things, consults on Cloud Foundry use, so Renat pays attention to PaaS. He wrote:

"In 10 years, containers, not VMs, will sit atop the Host OS for 50%+ of cloud-native workloads. Docker is the most "trendy," but not the most advanced, container technology out there. First of all, many thanks to Red Hat and Docker for a great job popularizing containers among developers. Developers rule in this world, and they will get what they want (containers).

<Will Red Hat succeed in injecting vigor into its OpenShift platform for developers through this container expertise?> Answer: The momentum surrounding Cloud Foundry will likely result in its domination of the open PaaS category. I believe OpenShift will either join Cloud Foundry or take advantage of any weakness in the Cloud Foundry ecosystem to carve out a space of its own. Stay tuned in the next two weeks for a blog post about why OpenShift should now join the Cloud Foundry Foundation, against the predictions of some industry pundits.

<Do we need another version of RHEL?> Answer: Not really. Instead, we need a solid cloud-native operating system -- one that is lightweight, without the junk that comes with a "one-thing-fits-all" design. CoreOS and OSv give us a great peek at what a cloud-native OS will offer end users.

<Will containers really catch on?> Answer: I am very bullish on the future of containers and believe that containers, not VMs/hypervisors, will dominate cloud-native workloads in as little as 10 years from now.

By 2007, VMware had lost the battle for the title of No. 1 provider of multi-tenancy solutions for the hosting market. A few little-known companies emerged as winners. Those few companies hold the most advanced container tech out there.

From 2000 until today, container packaging technologies, including Docker, have faced enormous problems getting changes accepted by the upstream Linux kernel communities.

Containers' future is bright; however, it is not without challenges.

I believe containers will ultimately win over Type 2 hypervisors for cloud-native workloads; however, they face some stormy winds:

Short-term, adoption of container-based products, including OpenShift and Cloud Foundry, will continue to suffer because the Big 3 public cloud players have no incentive to replace the combination of hypervisor plus host OS with containers.

Today's cloud leaders pursue lock-in strategies by taking advantage of the lack of portability of VM-native workloads. Container-based workloads and PaaS only speed up the race to the bottom of cloud pricing, while not providing any meaningful lock-in for cloud providers.

Can someone tell me what SuperNAP Exec. VP of Data Center Technology Mark Thiele means when he says below, "The biggest risk to RHAT is more likely that an alternative h/w abstraction solution will arise before containers have gained a big enough foothold to be considered the de facto solution..." What form would such hardware take? A supersized CPU, many cores, with an embedded, skeleton operating system that automatically treated every application loaded on it as a container?

I like IBM's Ric Telford's analogy. From my point of view, if you want to be light on your feet and free to move at short notice, then furnished apartments -- containers -- are for you. Also, Lew Tucker's telling comment, "sometimes having a familiar environment, even if it means bringing your own OS, may involve more work, but it's at least the work that you know," neatly sums up several pro-virtualization arguments, but he's not trying to decide the issue for the virtual machine status quo. He was quite familiar with containers as head of cloud computing for Sun, prior to becoming CTO of cloud for Cisco, and he concludes there are probably containers somewhere in the cloud's future. Well said. And Joe Emison at BuildFax continues to burn through pro-container comments with his own insight and doubts about their usefulness. Thanks, Joe. You've all made this the best-informed debate I've seen on the subject.

It doesn't really matter if containers are a new idea -- what is important is that they bring a concept to the cloud that helps reduce the complexity of workload migration. I think this is just another example of the maturation of the cloud and the march toward flexible infrastructure-as-code. To me it is sort of like going to a furnished apartment vs. an unfurnished one (stay with me on this analogy!). An unfurnished apartment meets some of your needs, but you need a moving van of stuff (the VM) to meet the rest. A furnished apartment addresses more of your needs, so you just need a few boxes and suitcases (the container). Both have a purpose, but now you have choice and flexibility...

Re: Great discussion with no obvious answer to RHat Container good or Container bad

Mark-- I think that, to some extent, your question has already been answered. At least in the public cloud computing space, VMs have been the standard, and I don't see that changing. Outsourcing the problems around hardware virtualization to vendors like Amazon has been fine, and many organizations have been fine with the costs.

So in order for Red Hat to win here, we have to accept that the private cloud world is necessarily going to be different from the public cloud world in a really fundamental way--namely, that you'll have to deal with another abstraction layer of containers in order to get your workloads running on your private cloud.

This seems unlikely to me. I really doubt that we're going to have dramatically different deployment configurations for private vs. public cloud. After all, I think many organizations are going to be using both (public for dev/test agility, private for regulated workloads and better cost control for enterprises) and/or organizations are going to be hiring developers who have grown up in the public cloud.

Why do we think that Red Hat will be successful in imposing an unnecessary learning curve on developers? I don't think they will. Unless containers can become de facto in the public cloud world, the main abstraction layer will be the VM, and everyone will use tools that minimize time-to-develop for public and private clouds alike on VMs. (And to the TeaPartyCitizen who implies that somehow costs are cheaper with containers on a single box: all you need to do is look at developers' salaries to realize that you're being penny-wise and pound-foolish)

Eric Schmidt used to say in reference to Java: "in software we often solve problems by creating a new layer of abstraction which then allows for innovation both above and below the line." In Java, this was the JVM. We are seeing this happen in cloud computing, and we get to ask the question: for cloud-based application development, what's the platform?

It's clear that building on cloud services for compute, storage, networking, etc. accelerates application development, since we don't have to worry about building to the underlying infrastructure. Whether to build to a virtual machine, container, or PaaS model, however, isn't quite so clear cut. Coming from Sun, I saw many of the advantages of Solaris containers and then was somewhat surprised to see the success of the virtual machine model at AWS.

Lesson learned: sometimes having a familiar environment, even if it means bringing your own OS, may involve more work, but it's at least the work that you know. Now that containers are coming back, our choices in cloud computing are expanding again. Both models, as well as PaaS, will survive, as they address somewhat different problems.

The interesting shift will come as we continue to see the emergence of new models, platforms, and services to address the development and operational needs of large scale cloud-native web apps.
