Four predictions for OpenStack in 2016

“Will OpenStack be successful? Yes. The better question to ask is how successful?”

That’s a memorable quote from Alan Waite of Gartner, speaking on the topic of software-defined compute back in August at Gartner’s Catalyst event in San Diego. I cover it more extensively in my Catalyst recap blog post here.

I love this quote because I think it’s a realistic viewpoint on a much-maligned technology. Don’t get me wrong: OpenStack is complex. It’s hard to install and often takes a lot of configuration and customization. Those are inherent byproducts of an open source project with OpenStack’s number of contributors and frequency of updates. However, the technology is poised to be as important to evolving data centers as VMware was in the previous decade.

After talking with our OpenStack customers, as well as industry experts, here are our top four predictions for OpenStack in 2016.

Prediction 1: In 2016, 30% of companies will deploy OpenStack in production

A 2015 Red Hat survey found 16% of organizations are using OpenStack in production. That jibes with my anecdotal evidence of production growth. At last year’s OpenStack Summit in Atlanta, I observed no more than two or three case studies about OpenStack in production. At the 2015 event in Vancouver, I estimate there were a dozen. There’s good momentum as companies get serious about OpenStack clouds, and we believe the number of production deployments will roughly double. Large enterprises will lead the way; they have the most resources to allocate to the platform.

Prediction 2: Docker becomes the No. 2 “hypervisor” for OpenStack with 12% of organizations using it in production

According to this year’s OpenStack Superuser survey, Docker is the fourth most used “hypervisor” in OpenStack (see below). Given container and microservices momentum, we expect this to jump all the way to the second spot with around 12% of organizations using it. We also predict bare metal will decline a bit as Magnum (for container management) gains steam and provides similar performance with added benefits. Bottom line: The data center of the future is an and, not an or, meaning companies will use VMs and containers. OpenStack provides a great orchestration layer to tie it all together, as evidenced by FICO’s architecture.

Prediction 3: Talent, not technology, is the No. 1 inhibitor to OpenStack success

OpenStack comprises 16 major projects and more than 20 million lines of code. It’s big. It’s complex. And it’s hard to implement. Just ask Mirantis, a company thriving on OpenStack implementations. To be involved part-time with OpenStack is akin to not being involved at all. Specialized experts are needed to take full advantage of everything it offers. In 2016, unmet demand for OpenStack professionals will hold back widespread adoption in the same way the shortage of data scientists holds back widespread adoption of Hadoop today.

Prediction 4: More than 25% of production OpenStack clouds will stall due to underlying storage issues

OpenStack was started, in part, as a storage platform: Rackspace contributed Swift, an open source alternative to AWS S3, alongside NASA’s Nova compute project. Since then it’s grown to be a full infrastructure-as-a-service (IaaS) cloud platform. The irony? Storage is now the most complex component. Swift, Cinder, and Manila all offer different interfaces. The siloed, protocol-driven nature of the “old world” is just getting coded as APIs in the “new world” of cloud. I wrote about the problem a few months ago here. We predict storage will be the No. 1 component that prevents OpenStack success, tripping up at least 25% of deployments.
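To make the interface fragmentation concrete, here is a rough sketch of how one tenant provisions each storage type from the command line, assuming the standard python-swiftclient, python-cinderclient, and python-manilaclient CLIs with credentials already sourced; the container, volume, and share names are hypothetical, and exact flags vary by client version.

```shell
# Object storage (Swift): upload a file into a container
swift upload backups db-dump.tar.gz

# Block storage (Cinder): create a 10 GB volume
cinder create 10 --name app-data

# File storage (Manila): create a 1 GB NFS share
manila create NFS 1 --name shared-config
```

Three services, three clients, and three vocabularies (containers, volumes, shares): the old protocol silos re-expressed as cloud APIs.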

Why you should align your storage with OpenStack in 2016

OpenStack is a cloud platform that combines the three fundamental parts of the data center: storage, compute, and networking. As far as development is concerned, compute is furthest along. Networking took a big leap forward with Neutron in 2015. Both have relatively mature models for software-defined infrastructure orchestrated by OpenStack. Yet despite its origins, storage has traditionally lagged behind, still relying on slow, hardware-defined solutions. Of the three, storage is the weakest link, and it threatens to stall the whole OpenStack ecosystem from achieving maturity. When it takes you five seconds to spin up a compute instance, five seconds to dynamically plumb the network, but five hours to get storage aligned, something needs to change.

Since OpenStack does not provide storage itself, only orchestration of storage, it needs separate hardware or software platforms to function. The problem is that until recently, storage was never designed for an environment like OpenStack. OpenStack requires storage that is as cloud-ready as the platform itself. This is where software-defined storage excels. It gets us back to the original vision: an AWS-like environment that can be deployed and managed by an organization.

As 2016 ushers in more production use of OpenStack, more Docker use with OpenStack, and more talented DevOps teams managing OpenStack, the time is perfect to implement a software-defined storage solution that aligns with all three.

Rob Whiteley

Rob Whiteley is the VP of Marketing at Hedvig. He joins Hedvig from Riverbed and Forrester Research where he held a series of marketing and product leadership roles. Rob graduated from Tufts University with a BS in Computer Engineering.