Containers are rapidly gaining popularity among businesses of all sizes, from small companies to large enterprises. In this video chat, technical experts define the value of containers and discuss how businesses are using them to accelerate deployment and application movement across different cloud environments.

Watch the Chat

Here’s what the panelists had to say

Nik: So, I’ve been seeing a lot about containers as everyone else has, especially the speed of adoption of containers this year. So Ruslan, can you get us started and just try and define our terms – what is a container and why are they such a big deal right now?

Ruslan: Very good question actually. It’s very important to understand the meaning of this word.

So from our point of view, there are two main layers. First is the container virtualization layer, which provides isolation, security and everything related to this – it’s like a virtual machine, but much slimmer. The second important layer is the application packaging standard, like Docker for example, introduced this year. These days people understand these two different meanings of containers. As you know, there is a big buzz around Docker containers and Rocket containers, and people do not realize that there are several layers of container virtualization.

Nik: Can you tell us how a container differs from a virtual machine for instance?

Bill: Yeah, a lot of people like to compare containers to virtual machines. I like to think of a container more as a fully virtualized Linux process: it’s very lightweight, but it provides you VM-like isolation, so a container running on your host ideally shouldn’t be able to interfere with other containers on that host. That’s very much a VM-like capability, and that’s the difference.

Nik: So, Bill, where is the operating system? Is the operating system within the container, or does the container sit on top of the operating system? Do you run a container in a virtual machine, or does it run on bare metal? I put a lot out right there.

Bill: Very good question, that gets right to the point. One way of thinking about the container is that it does contain a piece of the operating system – just what your application needs. It shares a kernel with other containers. In a VM you have a complete operating system in your entire environment, sitting on your server; with containers, you have a server with a host operating system and multiple containers sitting on top of that host operating system. And again, those containers share the services of the underlying operating system, but they contain all of their user-space dependencies, encapsulated right there in the container.

Ruslan: May I add something to this? It’s a really good question how to host containers – inside virtual machines or on top of bare metal hardware. Today there are two main approaches. In some cases it’s enough to host them in virtual machines, and that’s what many companies do today. The main reason for that is actually security. Some implementations of containers do not provide really good security, so many companies use virtual machines to get a better layer of isolation, and it works nicely for many workloads. But of course the best option is to run containers on top of bare metal hardware, because in this case container technology delivers on its original promise of density. It’s much harder, of course, as you need to know how to configure such containers, but it’s possible to host them on top of bare metal hardware. It’s very important to understand.

Nik: So Kyle, the container sits on top of the operating system and uses a thin portion of the operating system, so that you can have many containers on top of your OS. Would there be any reason to run a container within a virtual machine, instead of running on top of the bare metal?

Kyle: And the answer is absolutely. Because let’s face it, while containers are absolutely fantastic, they are not a panacea, despite what some people tend to think. Let’s put it this way: it’s going to be a long time before we switch over to building everything in 100% containers. If you think about the existing ways of packaging software we’ve already got – something like OAuth, a virtual machine or the DataPower clients – those are going to be virtual machines for some time to come.

And so, what we often see is that we need a mix of containers and virtual machines. And when you’re going to have that kind of mix, it just makes sense to host it on a virtualization environment like VMware or KVM, so that you can choose, based on your different needs – for density, security and the other things that Ruslan was talking about – where you are going to host your containers. What I’ve actually seen is that there is a real benefit to being able to host your containers in a virtual machine, so that you can provide these other services as part of that underlying infrastructure.

Because let’s face it – we are not yet at the point where we can provide the same levels of, let’s say, network security on containers that we can on infrastructures like VMware.

Ruslan: Absolutely, good point. Maybe one more thought to add – another case where virtual machines are good for containers: for example, when you want to burst to a public cloud, you can order a big virtual machine and spin up containers inside it. There is a mix on the market right now, and it’s very important to understand that the option to host containers on bare metal hardware actually exists today, and in some cases it’s really useful.

Kyle: And to get to the point that Ruslan made, which I think is absolutely critical: one of the big selling points of containers is the improvement in density that you get over virtual machines.

One of the things that I’ve commonly heard my customers complain about with virtual machines is having to use multiple virtual machines to host a single system of any complexity. With containers we have many more options for how to place the different parts of that system in a smaller number of virtual machines; the density goes way, way higher and you get much better use of your virtualization infrastructure and your hardware just by taking advantage of it.

Bill: That’s a really good point. I’ve heard something a little bit different. What I’m hearing is that containers are really useful for standing up environments quickly for development purposes – you can spin environments up or spin them down. You can do that on bare metal, you can do that on a VM. You can still realize a lot of the benefits of containers running them on top of a VM, so you can give your developers the experience of being able to quickly spin up and spin down. With containers, I think you can start realizing a lot of the continuous integration and continuous delivery capabilities that they enable. It’s a tool in your toolbox, right? And then when you land in your final production environment, do it on a VM for all the security and operational aspects that everyone knows and loves today. It’s not an either/or, all-or-nothing kind of proposition; there’s lots of flexibility.

Nik: Would there be certain workloads that would be more advantageous to run in that sort of blended container-on-VM setup versus containers on bare metal? To me at least (I come from an infrastructure background) it sounds like a resource hog – the more operating system junk you put in, the less RAM and the less CPU you’re going to have. I’m thinking there might be workloads where you need the flexibility of having the virtual machines, and other workloads, say a system of record, that you would want to have on bare metal with a container.

Ruslan: Let me add something about a production-ready solution with good security on top of bare metal hardware. We have cloud hosting service partners that provide Jelastic around the world, in about 40 countries today, and they have hosted it in production for the last 4 years. There have been no security issues – it’s stable and it generates a lot of money for them. It’s much more efficient compared to virtual machines. For them it’s really beneficial, because it drives down total cost of ownership and increases return on investment. It’s a proven solution. Of course, some of them are using virtual machines for different kinds of reasons. There is a mix on the market today, and we know what kind of solution is better.

Kyle: Right. You just get different options when you mix them that you don’t get with one or the other alone. And I think that’s one of the benefits of being able to combine the two. With VMs, one of the things that you can do on a lot of the standard VMware infrastructures is separate your network traffic at the VLAN level.

You can do a lot of networking tricks with containers, but there are just more options when you combine the two than when you use either one alone.

Ruslan: In the end, companies are actually looking for simpler solutions. They don’t want to deal with different companies – for example, buying a license for the virtual machine layer from one company and a license for containers from another – and then having to learn to manage two different products. You want to operate only one layer, like a turnkey software solution. For end customers it is better to have only one layer, but that doesn’t really work in many cases. These days a mix makes sense in many cases, but I believe that in the future, in approximately 5 years, we’ll see more and more containers on top of bare metal hardware, from my point of view.

Nik: It’s just the nature of the industry. Resources get cheaper and faster and denser as we go. RAM isn’t as much of a concern as it was 5 years ago for anything. Those networking tricks, Kyle – that’s one of the things I’ve seen a lot of discussion around, because I’m still trying to figure out what that looks like. Can you turn that into a container?

I know it has to live on a Linux operating system, there’s a version coming out for Windows eventually, for right now it’s primarily Linux, so when you have a container setup – does it have its own IP stack or does that run on the Linux distribution? How does that work?

Kyle: At this point we are primarily talking about Docker. One of the key things Ruslan was getting across is that there are multiple container technologies, and Docker is one of them. You mentioned that Microsoft is working with Docker extensively; Microsoft also announced their own container technology that runs inside of their virtual machine technology, Hyper-V itself. The way this works is that the container hosts essentially just a section of the IP stack – it’s not the full IP stack, and it shares a lot of those services from the host OS – but it gives you enough so that, for instance, it can surface ports out. The thing is that within the Docker® container, what it looks like to the applications and processes running there is that they are in their own full copy of the operating system, when in fact they are just using a thin layer of the operating system provided by the underlying host. So they may think, for instance, that they’re operating at their own IP address, maybe 192.168.1.1, when in fact they’re sharing the IP address of the host operating system. The same goes for ports: an Apache server running in a container may think it’s running on port 80, but in fact that port may be remapped, because the container is sharing the same set of IP addresses that the host is providing.

And what I have to do to keep my containers separate from each other on that host is remap ports at the container level, to say – okay, you get 80 and you get 81, and you get 82 and you get 83 – even though inside, you all think you are running on 80.
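The remapping Kyle describes can be sketched as a simple allocation table. This is a hypothetical illustration of the idea only – not Docker’s actual allocation logic – and the function and container names are invented for the example:

```python
# Sketch of host-port remapping: every container believes it listens
# on port 80, while the host assigns each one a distinct external port.
# Hypothetical illustration - not Docker's real implementation.

def allocate_host_ports(containers, container_port=80, first_host_port=80):
    """Map each container name to (internal port, unique host port)."""
    return {
        name: (container_port, first_host_port + offset)
        for offset, name in enumerate(containers)
    }

ports = allocate_host_ports(["web-a", "web-b", "web-c", "web-d"])
for name, (inside, outside) in ports.items():
    # Inside the container the app sees port 80; outside it is 80..83.
    print(f"{name}: container port {inside} -> host port {outside}")
```

In real Docker terms this corresponds to publishing ports with something like `docker run -p 81:80 …`, where the number after the colon is the port the application listens on inside the container.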

Nik: Bill, what are some of the advantages for developers, and again – do developers have to be as concerned about what Linux distribution they are running on or is the application inside the container portable from one version of Linux to another if it’s in the container?

Bill: Yes, I think that’s one of the beautiful things about containers and Docker. I kind of look at Docker as having tamed container technology – they’ve put in place a lot of useful tools for working with containers. One of the nice things about containers is that if you have a Linux host that supports Docker, it doesn’t matter if it’s Ubuntu or Red Hat or some other distribution: if it’s Linux and it supports Docker, the container is going to run on it, no question. It’s a nice separation between your application layer and your underlying operating system and infrastructure layer; the container, and your API into the host, form a nice boundary that helps isolate changes. You can take your container development, stand it up on your laptop, move it into a SoftLayer cloud or Amazon Web Services, and that container is going to work the same wherever you drop it.
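The portability Bill describes comes from declaring the application’s user-space dependencies alongside the application itself, typically in a Dockerfile. A minimal sketch, where the base image, package and file names are illustrative assumptions:

```dockerfile
# Illustrative Dockerfile: the image carries its own user-space
# dependencies (a Python runtime plus an app script), so it runs the
# same on any Linux host with Docker - Ubuntu, Red Hat or otherwise.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
COPY app.py /opt/app/app.py
EXPOSE 80
CMD ["python", "/opt/app/app.py"]
```

Built once with `docker build`, the resulting image behaves the same on a laptop, in SoftLayer or on AWS, because the kernel is the only thing it borrows from the host.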

Kyle: And the great thing is, it doesn’t have to be the same operating system on the inside and the outside. Inside, I could have developed my application on top of Ubuntu because that’s what I’m familiar with.

But as Bill just said, when I run it inside a Pure Application System, or I run it inside of Bluemix, that may be running inside of Red Hat and I’ll never know the difference because I’m bringing along just enough of the operating system to get me that API, while the rest of the services are being provided to me via the container framework itself.

Nik: So Ruslan, is that really the reason? Bill mentioned this taming of containers that Docker has managed to achieve. Is that really why they are exploding? I’ve seen people saying this is the fastest adoption of any technology in 15 years, but containers aren’t new – containers have been around for 10 years. Why is it suddenly today that we are seeing this explosion?

Ruslan: The Docker team was really smart and they changed the dimension of container usage. Earlier, containers were used mostly by businesses like hosting service providers to get more benefit from utilization and density. Docker shifted the focus of container technology – they introduced a new way to package applications and stacks and deliver them across different clouds. They also introduced this technology to enterprises, because before that, enterprises had not trusted containers. Today, almost every enterprise is looking for a container solution. They just changed the focus of this technology and added something on top of it.

They actually did a really good job for the ecosystem: it significantly simplifies workloads and provides much better workload mobility.

Nik: Docker is certainly getting big hugs from a lot of very high-end companies that don’t necessarily like each other – when Google, IBM, Apple and Microsoft are all hugging the same people saying thank you, thank you – you know they’ve been doing something right.

So Kyle, how is IBM integrating containers, specifically Docker® Containers into things like Bluemix and Pure Application Systems?

Kyle: We’ve got a couple of different options we can talk about right now. The first is with Pure Application System: in the new 2.1 firmware we just introduced support for Docker. That uses the model I was already talking about – hosting Docker® containers inside a VM that then runs on the Pure Application infrastructure. So you can mix and match: in the same pattern, I can have support for VMs that may already be created, something like the DataPower clients, and alongside them another VM hosting a bunch of Docker® containers that I can put my web server on.

The other one, which we also announced support for at InterConnect and which is now in public beta, is on Bluemix – we introduced support for Docker® containers on Bluemix. There you can mix and match technologies: running VMs on Bluemix, running your Docker containers on Bluemix, or using the Cloud Foundry environment that Bluemix has provided since it was announced last year. You can mix and match whichever you want. If you wanted to, for instance, run inside a Docker® container a Java or Ruby program that calls a service provided by one of the other parts of Bluemix, by a runtime hosted in Bluemix, or by, let’s say, a VM that you’ve also got hosted on Bluemix – you can combine any of those and get the communication you want between the different parts, just depending on how the software you want to call is available.

Nik: Bill, what are we seeing some of our customers actually doing? Are there benefits that we are seeing from the end user? This all sounds like a lot of inside baseball. What does our customer really get out of the adoption of containers?

Bill: Within the customer set that I deal with, which is a very narrow subset of all of IBM’s customers, we see a lot of people kicking the tyres. What they are looking to use containers, and Docker® technology in particular, for is realizing some of the continuous integration and continuous delivery promises of Docker. From my perspective, that, rather than density of workloads, seems to be the interest – a simpler way to deliver applications into production and do it consistently and repeatably, so that the stuff you test is identical to the stuff you put into production. There is no need to rebuild it; you can just re-layer it with the Dockerfile and you’re off and running. So that’s the level of interest I’m seeing. It’s more on the development side, and of course the operations guys have an eye on this, because at some point they will want to put the Docker® containers – and the patterns and orchestration templates assembled with Docker® containers – into production. And they will want to do so with the traditional lifecycle management capabilities that we support in products like Pure: push-button high availability and disaster recovery, integrated logging and monitoring, patch management – things of that nature. I see a lot of tyre kicking, a whole lot of interest on the development side, and the ops guys are watching carefully.

Kyle: On Bluemix we have actually had a few customers go live with production applications using Docker® containers already, even though the service itself is still officially in beta. We’ve seen some customers really begin to move very quickly, again because of that wonderful portability and transportability that Bill was just talking about. You can take your app off your desktop, put it in an environment, and spin up 100 of them if you need to.

Nik: That’s a show of confidence, if we have enterprise customers that are taking a beta product into production, that’s a serious show of confidence.

Ruslan: Let me add something to Bill’s comments. I agree 100%. Most enterprises today are analyzing containers to simplify DevOps workloads, to speed up development processes and to make environments identical – so definitely this is one of the main benefits for enterprises today. It’s very important to understand. I see many customers and companies that have heard about Docker but don’t know how to use it. Not many of them understand why they need it. And there is actually a challenge with legacy applications, as for some workloads it does not work.

Nik: What are some of the challenges of containers? In the enterprise world, how does a piece of software that gets licensed by the core, or by some PVU-type metric, get licensed for a container?

Kyle: Those are really difficult questions to answer. The good news is that, at least while you are still hosting your containers inside a VM infrastructure, you can still do the same kinds of CPU calculations. You’re still consuming the same amount of CPU; it’s just that you’re spreading it over multiple instances, and often your software licenses won’t handle that. There are other situations where, I will agree, it’s something our customers have yet to work out, because the vendors simply have not caught up to that type of calculation. It’s a real challenge.

Another challenge is enterprise-level security, because while containers give you a lot of fantastic advantages, they also create some possible new attack vectors, especially when you look at the possibility of side-channel attacks and new types of denial-of-service attacks. These are things that container vendors like Docker are working very hard and very fast to address. To be honest, I don’t think all of the questions have been answered yet. We have yet to see the first real challenge to Docker from a security standpoint, and I think that’s probably something we are going to see pretty soon as well.

Bill: I want to second that. We have teams working on orchestrating Docker® containers with Docker Compose and scheduling them, but for the containers themselves, some of the next big areas I see are setting security policies on containers, with very fine-grained control. For example, which system calls can they make? They have access to 100+ system calls – which ones are off limits? Have a dial, if you will, for which ones are accessible or not, checking the arguments on those calls, things of that nature. VMs have been around for a long time, and there was a lot of concern when they were first introduced. Are they really safe? Can one VM interfere with another? We are going to go through the same sort of maturing cycle with Docker® containers. It will take us a while to get there, but we will get there – I’m confident of that.
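The syscall dial Bill envisions has since taken concrete shape in Linux seccomp profiles, which Docker can apply per container via `docker run --security-opt seccomp=profile.json`. A deliberately tiny fragment of such a profile is shown below – a sketch of the idea, not a usable policy, since a real profile must allow many more calls:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Any system call not explicitly allowed fails with an error, which is exactly the "which ones are off limits" control Bill describes.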

Nik: There will also be architectural decisions that need to be made about availability. For instance, if you’re running several containers on a virtual machine and that virtual machine goes down, you’re going to lose all those containers, so they will need to be replicated on a separate virtual machine or, ideally, on different bare metal. There are going to be a lot of issues that the industry will have to work through. This is moving so quickly that I’m still stunned at the adoption rate.

Kyle: And if you look at technologies like Docker Swarm and some of the other distribution technologies like that, I think you will see those kinds of questions answered very quickly as adoption gains far more traction in the industry.

Louis: In conclusion, I’d like to ask you to finish the sentence: 2015 is the year of the container because…

Nik: …containers are providing things that people in the industry have been wanting for years – that kind of versatility and portability between systems. We’ve all been craving it, we’ve all predicted it and this is the first time we’ve actually seen it happening.

Kyle: …it’s perfect for microservices.

Ruslan: …everybody is talking about it, and everybody wants to implement containers. It’s a hot topic, and Docker did a really good job on the marketing side. It’s a nice solution for containers today.

Bill: …#IBMPureApplication #IBMBluemix #Docker

In conclusion, we’d like to mention that this is the era of container technology, and you’d be hard-pressed to find a developer who hasn’t heard about Docker yet. Jelastic is ahead of the curve on this.

We want to say thank you to the IBM team for the opportunity to participate in this informative video chat.

If you would like to see how Jelastic implements container technology, get your free 2-week trial now.