Could HPE’s ‘Composable Infrastructure’ Benefit CX?

Conferences such as last week’s HPE Discover 2015 in London typically begin with the theme that the pace of change for technology is so brisk that everything becomes new again in just a few short years.

It’s really wishful thinking, though: The message of an industry that wishes it had the luxury of fashion or pop music or politics, all of which can change on a whim for no discernible reason.

Hewlett Packard Enterprise (HPE) — one of the two companies produced by November’s split of Hewlett-Packard Co. — needs a new approach to selling servers, its core product. It is well aware that servers no longer attract much interest, even in articles such as this one.

But HPE also knows that modern data centers are increasingly defined by the software they run. So it is developing a conceptual model for servers that works more like the cloud: one in which applications carve out physical computing space exclusively for themselves from a giant pool of resources.

What Are We Talking About?

This move might matter to you because it brings into the spotlight — for many organizations, for the very first time — the question of whether complete data center architectures should be entirely replaced with software-oriented designs, geared particularly around the workloads that data centers manage.

“On the one hand, cloud, mobile, big data and analytics give you the tools to accelerate speed and time-to-value, to create dramatically new experiences and even new markets,” stated Peter Ryan, the new HPE’s senior vice president and managing director for the EMEA region (Europe/Middle East/Africa), during a keynote speech at HPE Discover 2015. “On the other hand, most organizations are being built with rigid, inflexible IT infrastructures that are costly to maintain and that make it difficult if not impossible to implement new ideas quickly.”

Some data center architects and other experts would say Ryan is exaggerating, at least a bit. OpenStack (in which predecessor company HP already made a considerable investment, and in which HPE will continue to invest) has already enabled organizations to deploy servers in a much more fluid fashion than before and has introduced them to the significant benefits of hybrid cloud architectures.

But HPE wants to move one step further — a step whose eventual benefits are not yet obvious and which some may still wish to debate. It’s a move to hard-wire the procurement services for cloud infrastructures into hardware — moreover, into HPE-branded hardware.

“Composability is an architecture where you bring the hardware and the software together,” said Chris Cosgrave, HPE’s Chief Worldwide Strategist, during a session for customers at Discover. “It’s software-defined everything, unlimited scalability and really the first time we’ve started to treat infrastructure purely as code.”

Why This Matters

Understanding why composability merits discussion in the context of the issues this publication typically covers, such as customer experience, requires some background.

Today, your organization may operate a massive global data center. That phrase doesn’t mean what it once did: a data center is now a group of servers connected by a shared network, and the Internet can be that network.

There’s a viable argument that the word “center” is no longer accurate.

For much of the last decade, the technology we’ve come to call “the cloud” has enabled businesses — perhaps yours included — to shift the boundaries dividing the private and public assets of your hybrid data center. Within the last three years, the same technology has enabled your data centers to craft virtual machines that are custom-fitted to particular workloads.

Consider a fluid combination of compute, storage, memory and bandwidth, where the quantities of all four of these resources are variables in a fluctuating formula. Each physical server, either on your premises or your cloud service provider’s, contributes certain amounts to the overall pools of these resources, like invitees to a pot-luck picnic.

A cloud operating system such as OpenStack gathers these resources together, regardless of their physical location on the planet, and enables them to be operated like a single computer.
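The pooling idea can be sketched in a few lines of Python. This is a toy model, not actual OpenStack code; the server figures, field names and the `carve` function are all invented for illustration:

```python
# Illustrative sketch only: how a cloud OS can treat the resources of many
# servers as one pooled computer, from which workloads claim what they need.
servers = [
    {"cpu": 32, "ram_gb": 256, "disk_tb": 4},   # an on-premises machine
    {"cpu": 64, "ram_gb": 512, "disk_tb": 8},   # a cloud provider's machine
]

# Pool the contributions, pot-luck style.
pool = {key: sum(s[key] for s in servers) for key in servers[0]}

def carve(pool, request):
    """Reserve resources for a workload if the pool can satisfy it."""
    if all(pool[key] >= amount for key, amount in request.items()):
        for key, amount in request.items():
            pool[key] -= amount
        return True
    return False

# A workload carves out exactly what it needs from the combined pool.
carve(pool, {"cpu": 16, "ram_gb": 64, "disk_tb": 1})
```

The point of the model: the workload never knows, or cares, which physical machine its slice of the pool came from.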

HPE’s composable infrastructure plan would accomplish much the same thing, but in a somewhat different way. First, it would move the software with which admins manage HPE’s new Synergy line of servers into hardware — or, more accurately, into firmware not unlike the kind that runs your smartphone.

Within that firmware, a new kind of provisioning agent called the Image Streamer would take over the job of assigning compute, memory, storage and networking resources in a cloud environment. This would create colossal pools of resources upon which you could still deploy OpenStack or VMware vSphere, although they may play lesser roles in this new scheme.

Synergy “supports both the traditional work that our developers do to maintain some of our existing software products,” Linesch continued, “but it also really helps them out, as they race toward re-architecting, or architecting, new products using more of a microservices, containerized kind of approach.”

Containerization would play a huge role in this new model. With an environment such as Docker, each software-based container is “shrink-wrapped” to fit a specific workload and supplemented with only the services that workload needs to run.

Typically, a simple text-based script instructs the Docker Compose tool to assemble a new container around that software and deliver the container to Docker Machine to run. With HPE Synergy, that same simple script triggers a process that divides the physical components of its servers (not some virtualized space created by them, which is the key difference) into logical partitions.
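The kind of text-based script in question might look like this minimal Docker Compose file. The service name, image and numbers here are purely illustrative, not taken from any HPE or Docker documentation:

```yaml
# docker-compose.yml -- an illustrative example, not a real deployment.
# `docker-compose up` reads this file, wraps the container around the one
# piece of software it names, and hands it to Docker Machine to run.
web:
  image: nginx:1.9        # the software the container is shrink-wrapped around
  ports:
    - "8080:80"           # only the connectivity this workload needs
  mem_limit: 512m         # a resource bound the scheduler can honor
```

Under HPE’s model, a declaration like this would reach past the virtualization layer and claim actual slices of server hardware instead.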

It’s as though servers were made of clay and from that clay, any component could be fashioned to order, run for the duration of the workload and then disposed of and returned to clay when it was done.

What This Changes

Simply because a product comes into existence does not mean the world’s center of gravity shifts instantaneously. No one knows this fact better than the successor company to HP, which had prototyped a working touchscreen tablet more than a decade before the iPad.

For Synergy to catch on as an ideal rather than just as a server brand, HPE will need to make some form of composability available in its existing server lines, including its long-standing ProLiant. This way, data centers could move to Synergy, or something like it, in more easily sustainable increments rather than giant leaps.

But the ideal would have to sell first. That ideal has to appeal to the IT department, to software developers and to the key business departments of the organization simultaneously.

The ideal is this: Modern software is no longer a handful of amalgams of tightly constrained business processes — the ERP, the HCM, the BPM, the CRM and the CMS. Rather, it has become a looser, interchangeable assembly of discrete business functions that evolve as the business evolves.

If this were a story about how HPE is going to sell organizations on a completely new line of servers, then it would end on the note that the task ahead of it is daunting and perhaps impossible. But since it is actually about your business, there is a brighter, more positive way to look at this:

Software whose architecture dates back ten or twenty years — or, in the case of some financial institutions, fifty years — is unsustainable in modern practice. It has to be replaced and your organization may make such a decision soon.

That’s not an impossible, mountain-moving decision; there are integration paths and ways to make this change work. And this is true for most businesses.

Once these organizations have begun this move to a modern world, the task of making hardware such as HPE’s servers adapt to that world will become much easier.