Francesco, thanks for your comment. There's no question that moving to a container-centric model will have an impact on licensing. You raise lots of good questions, but these are questions for which there is no answer (yet).

As for support, I believe VMware offers support for any supported OS instance, but I don't know how containers on that OS instance would be handled. I would guess that the OS vendor would need to outline the support boundaries...? Another good question that has not yet, to my knowledge, been addressed clearly.

I am not an expert, but I think licensing considerations will influence this debate as well: many technologies, including hypervisor add-ons and OSs, are licensed per-VM nowadays. Being able to bypass per-VM limits by spinning up lots of containers will probably make some organizations choose Docker or other container technologies as a way to reduce licensing costs, at least until vendors modify their EULAs and licensing models to account for it. But will it be easy enough to meter the number of running containers in an organization? Maybe, in the future, licensing models based on actual resource usage, even for private clouds, could emerge again to handle these scenarios?

Another point is support: if you're running a SLES container within a RHEL VM running in vSphere (I don't know if that's possible, but let's assume so), who is going to help you when something goes wrong?

ReturnoftheMus, I appreciate the continued discussion. As container technologies continue to mature (Docker is still pre-1.0, for example, and you mentioned the relative newness of some of the kernel features that support containers), the use cases for containers will undoubtedly increase. On that point I think you and I both agree. As you rightly point out, containers are already becoming a key part of PaaS platforms (the nascent Solum project in OpenStack, for example, will heavily leverage Docker containers). As for whether the VMs vs. containers debate is premature, I can honestly say that the amount of interest I've seen in replacing VMs with containers justifies the article. Personally, I think a lot of it is hype, and what I wanted to do with this article was bring the discussion and the focus back to reality. Let's focus on where it makes sense to use containers, and continue using mature VM technologies and infrastructures where that makes sense for our businesses. Thanks!

There is no doubt Docker has emerged as a leading light in the promotion of container technology for the enterprise; however, I felt the title of your blog post raised a somewhat premature debate.
As we know, some of the world's major CSPs favour containers over VMs, especially at the PaaS and SaaS layers, enabling them to achieve economies of scale that they otherwise wouldn't have.
I'd also stress that even though container unification was started back in 2011, it was only in 2013 that we got that unification, in the 3.12 kernel release, which is way beyond where most enterprise kernels sit today.
Security has largely been addressed with the introduction of the user namespace; however, the distros have only just started enabling it.
My overall point is that now that we have this unification, the door is open to many more Docker-type applications, with endless possibilities.

Thanks for the clear post on this topic, Scott. This is definitely something that has been cropping up, and an area where I'm interested to learn more, so this brief overview is very well timed for me.

The security aspect you mention is going to be a key concern for many users, I suspect, so that vetting process is going to be key if we're to see adoption similar to VMs. Good stuff.

Hi David, thanks for taking the time to comment. I don't know that there are any "hard and fast" rules for when customers should deploy VMs versus deploying containers. There are a number of factors that should be considered, though, including application support (most container technologies are strongly focused on Linux and Linux applications), operational readiness (staff training to implement and support containers, tooling that supports containers, organizational readiness to consume and support open source projects, etc.), and other requirements (security might be one; I don't know that containers have been as fully vetted as VMs from a security perspective). Further, as organizations move more heavily into private cloud deployments using cloud management platforms (CMPs) such as OpenStack, CloudStack, vCAC, OpenNebula, Eucalyptus, and others, the ability/readiness of the CMP to support containers is another question customers must answer. What of scale? Does the customer really need the enhanced scalability that containers can offer? Customers must evaluate all these factors before making a decision. I would stress again that this is not an "either/or" situation, but rather an "and" decision. There's no reason customers can't deploy both VMs and containers, as best suits each situation.

Brian, you are correct that some container technologies (Docker is one of them) use techniques like layered file systems to dramatically reduce both the space required for the containers as well as the time required to launch a container. However, not all container technologies do this. Using "straight" LXC, for example, won't gain you the benefits of a layered file system and the reductions in space requirements that result from it. Further, the rise of high-performance inline deduplication technology is making the formerly onerous space requirements for VMs far more palatable. This is why both VMs and containers have a place in the toolbelt of a modern cloud architect; it allows cloud architects to use the right tool for the job at hand. Thanks for commenting!
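The layered file system idea described above can be sketched with a toy model: read-only layers shared by every container, plus a thin writable layer per container (copy-on-write). This is purely illustrative; the names (`base_layer`, `make_container`, and the file contents) are invented for the sketch and are not Docker APIs.

```python
from collections import ChainMap

# Read-only layers, shared by all containers (like Docker image layers):
base_layer = {"/etc/os-release": "Ubuntu 12.04", "/bin/app": "v1.0"}
app_layer = {"/bin/app": "v1.1"}  # an update layer that shadows the base

def make_container():
    """Each container adds only an empty writable layer; the read-only
    layers underneath are shared, so startup is fast and cheap."""
    writable = {}
    view = ChainMap(writable, app_layer, base_layer)  # lookups fall through top-down
    return writable, view

w1, c1 = make_container()
w2, c2 = make_container()

# Reads resolve to the highest layer containing the file:
print(c1["/bin/app"])         # v1.1 (app_layer shadows base_layer)
print(c1["/etc/os-release"])  # Ubuntu 12.04 (falls through to the base layer)

# Writes land only in the writing container's own layer:
w1["/tmp/scratch"] = "container-1 data"
print("/tmp/scratch" in c2)   # False -- other containers are unaffected
```

Plain LXC without a union/layered backing store, as noted above, launches from a full root file system instead of a shared stack of layers, which is where the extra space cost comes from.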

ReturnoftheMus, technically you are correct. Docker itself, consisting of both a daemon as well as some userspace tools, is not a container. However, given that Docker is expressly designed to work with container technologies in the Linux kernel (cgroups, namespaces, etc.), I feel it is reasonable to refer to Docker as a container technology. This simplifies the discussion when talking about Docker and allows the conversation to move forward to how we can use Docker to support applications and operations in modern data centers. Thanks for your comment!

Containers are good because they create an environment that makes a distinction between read-only and writable memory. This is useful if, for example, 1,000 copies of a 5GB image needed to be run: the space requirement could still be only 5GB (this would also be nice if the image needed to be held in memory), whereas VMs would require 1,000 x 5GB of space. However, there would be instances where space is of little concern and VMs are given preference, and if this is not the case at present, then maybe some future requirement makes it the case -- I guess the best security would be flexibility.
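The space arithmetic in that scenario can be made concrete. The ~10MB-per-container writable layer below is an assumed figure for illustration only; real per-container deltas depend entirely on the workload.

```python
GB = 1  # work in units of gigabytes
image_size = 5 * GB  # one 5GB template image
n = 1000             # number of instances

# Full-clone VMs: every instance copies the whole image.
vm_space = n * image_size

# Containers: one shared read-only copy, plus each container's small
# writable layer (assumed ~10MB = 0.01GB of changes per container).
container_space = image_size + n * 0.01 * GB

print(vm_space)         # 5000 GB
print(container_space)  # 15.0 GB
```

Even with generous per-container writes, the shared-image model stays orders of magnitude smaller, which is the economics driving the preference noted in the comment above.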
