The virtualization blog by Joep Piscaer

Docker for proprietary enterprise software, really?

Three weeks after DockerCon, and well into my summer vacation, I’ve had some time to digest the show. I want to touch on the disconnect between traditional applications and running them inside containers, and on how Docker is addressing it. From this perspective, I’ll highlight some of the announcements made during the show.

Docker is for Developers

Docker is generally aimed at the software development cycle: minimizing the gap between development and operations by offering a simplified, unified workflow and application packaging format, standardizing the hand-over between the two.

This is visible in the portfolio of products Docker Inc. is working on: Docker for Windows, Mac, AWS and Azure are all tools for the developer end of the cycle: making it easy for them to work with the Docker toolchain.

Docker For Operations

The second noticeable trend at DockerCon was orchestration. As an infrastructure guy, I’ve looked at Docker and containerization with amusement. I thought:

Are these guys kidding? Don’t they know how hard it is to manage infrastructure at scale? Have they even ever tried network or storage operations?

And still, there’s some lingering uncertainty about how to map my experience with storage, networking and virtualization to the Docker world. But it started to make sense during the first-day keynote, where Solomon Hykes (CTO and founder of the company) talked about orchestration, or how to make infrastructure as simple as possible.

With the newly announced SwarmKit, Docker is suddenly a solid player in the orchestration space (where Kubernetes, Mesos, Mesosphere’s DC/OS, Fleet and others compete), not to mention against the more traditional orchestration engines for VM-based environments (vCenter, System Center VMM, etc.). While there’s still much to be improved in the storage space, the new clustering features in 1.12 are big strides in orchestration/clustering, security and networking. I’m curious to see the downstream Docker enterprise products (Docker Datacenter, Docker Cloud) inherit the 1.12 platform features!
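To give a feel for how little ceremony the new built-in orchestration requires, here’s a sketch of standing up a small swarm-mode cluster with the 1.12 CLI. The IP address and token are placeholders for illustration:

```shell
# Sketch: a small swarm-mode cluster on Docker 1.12.
# The address 192.168.1.10 and the join token are placeholders.

# On the first node, initialize swarm mode (new in 1.12):
docker swarm init --advertise-addr 192.168.1.10

# On each additional node, join using the token that 'swarm init' printed:
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on a manager, run a replicated service; the built-in
# orchestrator schedules the replicas across the cluster and
# reschedules them if a node fails:
docker service create --name web --replicas 3 -p 80:80 nginx

# Inspect the desired vs. actual state of the service:
docker service ls
```

Compare that to bootstrapping a Kubernetes or Mesos cluster at the time, and you can see why this announcement changed the conversation.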

I want to call out a set of features in 1.12 that caught my eye: fully automated certificate management, key rotation, PKI and end-to-end TLS. For those with any production experience with the VMware suites (vCloud and vRealize), you know what a pain it is to manage products where security and cryptography weren’t thought out well.
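As a sketch of what “fully automated” means in practice (assuming a Docker 1.12 host): initializing a swarm creates the CA and issues per-node TLS certificates with no manual PKI work, and rotation happens on a tunable schedule.

```shell
# Sketch, on a Docker 1.12 host.
# 'swarm init' automatically creates a cluster CA and issues a TLS
# certificate to every node; all manager/worker traffic is mutually
# authenticated and encrypted out of the box.
docker swarm init

# Node certificates are rotated automatically; the expiry window is
# tunable (the default is 90 days):
docker swarm update --cert-expiry 48h
```

Anyone who has hand-fed certificates into vCenter components will appreciate how much operational pain this removes.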

And finally, for the enterprise?

So Docker really made good progress towards being considered not just an easy platform for developers; it is beginning to have some real merit for production deployment. That is: using the Docker products to run production applications.

So on the one hand: running the Docker platform version 1.12 is massively different from running, say, Docker Datacenter or Docker Cloud. These are both more complete solutions that include the Universal Control Plane for central management, clustering and orchestration. I’m very curious to see the new 1.12 platform features become available in Datacenter and Cloud, and the options this will open up for running traditional enterprise applications on Docker, especially now that Microsoft is porting Docker to run on Windows (which is insanely cool in my book; check out this Channel9 post).

Maturing the platform will make the subject of ‘Docker in the Enterprise’ less relevant, just as the discussion of ‘vSphere in the Enterprise’ isn’t relevant any more. If you want to learn more about actually running Docker as part of your enterprise IT strategy, I recommend reading Ian Miell’s post, A checklist for Docker in the Enterprise.

So while Docker is obviously advocating that running traditional applications inside the Docker platform (in addition to your existing hypervisor platform) is a great idea, because you get all those portability advantages, I think this currently only works for applications developed in-house, or more generally: when you have access to the source code and can refactor it into a micro-service-oriented architecture, or the 3rd-party vendor refactors it for you.

This is not consistent with my view of a traditional enterprise application, which I believe is more often than not supplied by a 3rd-party vendor in binary form (i.e. closed-source, proprietary software). With that assumption, running a binary application inside a container is useless, since you have no way to refactor the application from a traditional app into a micro-service-oriented application.

So really, adding the Docker layer for traditional enterprise applications where there’s little chance of refactoring (which is, again, in my mind, the vast majority of traditional enterprise applications) is pretty much useless.
For these kinds of closed-source, proprietary applications, a Virtual Machine makes more sense for now, since hypervisor platforms cater better to these workloads with VM- or OS-based high availability, (Storage) vMotion, DRS, snapshotting, backups, and more.

So Joep, what’s your point?

Well, I’d like to think it’s pretty obvious: Docker makes sense for pretty much any open-source application, especially those being refactored into a micro-service-oriented architecture. It doesn’t make sense for closed-source applications, which most traditional enterprise applications are.

I want to leave you with some closing thoughts on two distinct developments in the Docker ecosystem that will hopefully make the second part of my statement untrue in the future:

I hope that Microsoft’s efforts to port Docker to Windows will help shift Windows applications towards the micro-service architecture.

I hope the new Distributed Application Bundle (DAB) packaging format will, at least partly, replace current software distribution formats (like ISOs, MSIs and more), and that software vendors will deliver their software as .dab files to deploy inside (Windows) Docker containers.
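For context on what such a vendor deliverable would look like: a .dab file, as introduced experimentally at DockerCon, is a JSON document describing a stack of services with images pinned by digest, typically generated by the experimental `docker-compose bundle` command. The sketch below is illustrative; the service name and truncated digest are placeholders, and the format was experimental and subject to change:

```json
{
  "Version": "0.1",
  "Services": {
    "web": {
      "Image": "nginx@sha256:...",
      "Networks": ["default"],
      "Ports": [
        { "Protocol": "tcp", "Port": 80 }
      ]
    }
  }
}
```

A vendor shipping something like this, instead of an MSI or ISO, would hand operations a declarative, digest-pinned description of the whole application that the swarm orchestrator can deploy directly.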