So what’s the common theme through all of these moves? Ric Lewis, HPE senior vice president and general manager of the company’s Software Defined and Cloud Group, says it’s simple: HPE wants to own the next generation of infrastructure management. And a big part of that is a whole new type of infrastructure HPE has dubbed “composable.”

The big picture

HPE’s overall strategy has been narrowed to focus on three core areas, Lewis says. One is hybrid IT: HPE wants to help customers build private clouds on next-generation infrastructure that integrates with public cloud resources. A second broad focus area is what Lewis calls the “Intelligent Edge,” which encompasses technologies related to the Internet of Things. Finally, the third pillar revolves around services and helping customers successfully execute projects in the first two areas.

One perception that HPE struggles with is the notion that it doesn’t have a public IaaS cloud to compete with the likes of Amazon Web Services, Microsoft Azure and Google Cloud Platform. Lewis says that despite public IaaS cloud getting the lion’s share of attention, “that misses how important private cloud is.” Private cloud and on-premises infrastructure make up the market that HPE wants to own, which includes helping customers manage their use of public IaaS cloud resources. This strategy will not be easy, though, as a host of other legacy enterprise infrastructure vendors are vying for the same prize, including Dell EMC, Cisco and IBM.

There is opportunity, though. A recent Worldwide Infrastructure Forecast by IDC estimates that through 2020, public cloud infrastructure is set to grow at a 15% compound annual growth rate, while private cloud is forecast to grow at 11%. This compares to traditional IT growing at only 2%. If companies like HPE and others can offer compelling options, there is a market for enterprises to upgrade their on-premises infrastructure.

The first next-wave: HCI

On the immediate horizon for next-generation enterprise data center design is hyperconverged infrastructure. HCI or integrated systems typically deliver a package of pre-integrated server, network and storage components in a single engineered offering, as opposed to customers buying those components separately and configuring them themselves. HCI systems are typically sold as software that controls infrastructure resources or as hardware-software combinations, and they’re most typically used for virtual desktop infrastructure or as a type of VM vending machine that offers users virtual or even bare metal infrastructure, says Gartner research director Paul Delory.
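
Delory’s “VM vending machine” description boils down to provisioning from a fixed catalog: users pick from a handful of pre-configured sizes rather than specifying arbitrary resource mixes. A minimal sketch of that model (the catalog names and fields are illustrative, not any vendor’s actual API):

```python
# Fixed catalog: HCI-style provisioning offers pre-configured templates,
# not arbitrary resource combinations. All names here are hypothetical.
CATALOG = {
    "small":  {"vcpus": 2, "memory_gb": 8,  "disk_gb": 100},
    "medium": {"vcpus": 4, "memory_gb": 16, "disk_gb": 250},
    "large":  {"vcpus": 8, "memory_gb": 64, "disk_gb": 500},
}

def provision(template: str, bare_metal: bool = False) -> dict:
    """'Vend' a VM (or bare-metal node) cut to a fixed template size."""
    if template not in CATALOG:
        raise ValueError(f"unknown template: {template}")
    return {"type": "bare-metal" if bare_metal else "vm", **CATALOG[template]}

vdi_desktop = provision("small")                # typical VDI use case
db_node = provision("large", bare_metal=True)   # bare-metal option
```

The key design point is that the templates, not the user, define the resource shape — which is exactly the constraint composable infrastructure (discussed below) removes.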

HPE’s product in this market is named the Hyper Converged 380, and it has been out for a couple of years. Most analysts see it as trailing offerings from market leaders Nutanix, SimpliVity and Dell EMC (VxRail). HPE significantly upgraded its position in the market last year when it acquired SimpliVity, immediately making the company one of the premier HCI vendors.

HCI is an estimated $1.5 billion annual revenue run rate market and is in a phase of “rapid maturation,” says Forrester Research analyst Richard Fichera. As the market matures, he expects use cases for HCI will become more diverse and more widely known. A recent survey by Forrester found that up to 50% of HCI customers were running databases on their HCI platform, and up to one-third were running enterprise applications such as collaboration, enterprise resource planning or HR and finance apps on HCI.

Lewis says the plan is to integrate the HC380 and SimpliVity product lines, with the first step being to certify SimpliVity to run on HPE’s DL380 servers this year. Going forward, HPE plans to offer SimpliVity exclusively on HPE hardware, while continuing to support existing SimpliVity customers who run the system on non-HPE hardware, Lewis says.

The future: Composable infrastructure

HCI is a viable market today, but Lewis and HPE are already looking beyond that to the future. What Lewis is really excited about is HPE Synergy: It’s what he calls a composable infrastructure system that HPE launched in December 2015. Composable infrastructure has three components: fluid pools of compute, storage and fabric (network) capacity that can be provisioned as needed; software-defined intelligence that controls them; and a single API to access them.

Synergy is sold in a 21-frame unit that supports 12 modules per frame. The idea is that Synergy is made up of compute, network and storage, and developers can request a virtual machine with any combined amount of those resources. When the workload is done running, those infrastructure resources are returned to the “pool” for other users to access. One workload could be a compute-heavy application with a lot of CPU power; another could be memory-heavy for read-write operations.
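
The compose-run-release lifecycle described above can be modeled as carving workload-sized slices out of shared capacity and returning them when the job finishes. A simplified sketch of that idea (an illustrative model, not HPE’s implementation):

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """A fluid pool of compute, memory and storage capacity (illustrative)."""
    cpus: int
    memory_gb: int
    storage_tb: int

    def compose(self, cpus: int, memory_gb: int, storage_tb: int) -> dict:
        """Carve a workload-sized slice out of the shared pool."""
        if cpus > self.cpus or memory_gb > self.memory_gb or storage_tb > self.storage_tb:
            raise RuntimeError("insufficient capacity in pool")
        self.cpus -= cpus
        self.memory_gb -= memory_gb
        self.storage_tb -= storage_tb
        return {"cpus": cpus, "memory_gb": memory_gb, "storage_tb": storage_tb}

    def release(self, allocation: dict) -> None:
        """Return a finished workload's resources to the pool."""
        self.cpus += allocation["cpus"]
        self.memory_gb += allocation["memory_gb"]
        self.storage_tb += allocation["storage_tb"]

# A compute-heavy batch job and a memory-heavy database share one pool.
pool = ResourcePool(cpus=96, memory_gb=1024, storage_tb=100)
batch = pool.compose(cpus=64, memory_gb=128, storage_tb=10)
db = pool.compose(cpus=8, memory_gb=512, storage_tb=40)
pool.release(batch)  # batch job done; its capacity flows back to the pool
```

Unlike the fixed HCI templates, each request here specifies its own mix of resources — the essential difference between the two approaches.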

This is a different idea from an HCI system, which typically provisions pre-configured VM or bare metal infrastructure templates. Composable is more similar to the idea of a public cloud, where resource capacity is simply requested and provisioned from shared capacity. The difference between Synergy and public IaaS cloud is that Synergy sits on customers’ premises. Like cloud, HPE offers Synergy in an “opex” buying model in which customers pay for the product based on how much they use it; or they can buy it outright as a capital expense.

“Composable systems will likely have much of their success initially with larger enterprises who have complex software environments,” Fichera says. “If all you want is an efficient way to provision VMs, it probably makes sense to buy HCI. If you have a big, complex enterprise software system and are looking for ways to streamline constantly-changing infrastructure resources, that’s probably where composable will make more headway.”

HPE is currently alone in the market in offering composable infrastructure, Fichera says. Cisco had a composable product with the M Series of its UCS servers, but it no longer actively sells those because of a lack of demand, the company said. Fichera says it remains to be seen whether Synergy will fare better. HPE does have paying customers willing to talk about their use of Synergy.

Synergy in practice

The HudsonAlpha Institute for Biotechnology in Huntsville, Ala., houses hundreds of researchers studying the latest in genomic medicine. This science specialty creates petabyte-scale data and requires batch compute processing jobs. HudsonAlpha CIO Peyton McNully says the long-term goal is for his IT department to transition from HPE C-7000 servers to Synergy, a process that is already underway.

“Ultimately, (Synergy gives us) more flexibility and better software definition through REST APIs,” McNully says. “Some days we run a hypervisor for certain workloads, then in the evening hours it’s back to bare metal, CentOS and Docker containers for batch processing jobs.” Enabling that flexibility can be “quite expensive” on standard infrastructure equipment, he says, but by using Synergy HudsonAlpha is able to compose the exact-sized infrastructure environment each workload needs.
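
McNully’s day/night pattern amounts to a scheduled re-composition: apply one profile during business hours and another overnight. A sketch of the scheduling logic and the kind of request body a composer API might accept — the profile names, fields and schema here are hypothetical, not HPE OneView’s actual API:

```python
import json

# Illustrative profile templates; the names and fields are hypothetical,
# not HPE's actual server-profile schema.
PROFILES = {
    "daytime": {"os": "esxi-hypervisor", "purpose": "private-cloud VMs"},
    "evening": {"os": "centos-baremetal", "purpose": "Docker batch genomics jobs"},
}

def pick_profile(hour: int) -> dict:
    """Hypervisor during business hours; bare-metal CentOS + Docker
    for overnight batch processing."""
    template = "daytime" if 8 <= hour < 18 else "evening"
    return PROFILES[template]

def compose_request(hour: int) -> str:
    """Build the JSON body a composer API might accept (hypothetical)."""
    return json.dumps({"action": "apply-profile", "profile": pick_profile(hour)})

day_profile = pick_profile(10)    # hypervisor for daytime private clouds
night_profile = pick_profile(22)  # bare metal for evening batch jobs
```

The point of the sketch is that the same physical capacity serves both roles; only the applied profile changes with the clock.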

“There are 15,000 to 18,000 genomes that we’re analyzing each year and I’ve got a certain amount of compute to do that with,” McNully explains. “The opportunity to wring out every last drop of that compute in the evening hours, and then spin up private clouds during the day for different types of workloads, is a huge price and performance benefit.”

List pricing for the HC380 ranges from $26,000 to $100,000 for an all-flash configuration. For HPE Synergy, list pricing starts at $12,750 per compute block for one- to 36-server configurations, with discounts when buying in volume.

Copyright 2018 IDG Communications. ABN 14 001 592 650. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of IDG Communications is prohibited.