What is The Hyperconverged Tax?

Hyperconvergence scales by adding nodes, each of which provides compute, storage, and network. The problem is that very few data centers need more of all three resources at the same time. In most cases the data center consistently needs more of one resource and rarely more of the others. Those other two sit idle: paid for, but unused. They are essentially a tax on the resource the data center actually needs. Avoiding the hyperconverged tax should be a top priority for organizations considering one of these architectures.
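The tax can be made concrete with a little arithmetic. The following sketch uses entirely illustrative node specs and a hypothetical per-node price; it simply shows how a storage-heavy workload forces the purchase of compute and network capacity it never uses.

```python
# Illustrative sketch of the "hyperconverged tax". The node spec and
# price below are assumptions for the example, not vendor figures.

NODE = {"compute_cores": 32, "storage_tb": 20, "network_gbps": 25}
NODE_COST = 30_000  # assumed cost per hyperconverged node (USD)

def nodes_needed(demand):
    """Nodes required: the most demanded resource drives the count,
    because every node bundles all three resources."""
    return max(-(-demand[r] // NODE[r]) for r in NODE)  # ceiling division

def hyperconverged_tax(demand):
    """Spend on capacity the workload never asked for."""
    n = nodes_needed(demand)
    used_fraction = sum(min(1.0, demand[r] / (n * NODE[r])) for r in NODE) / len(NODE)
    return (1 - used_fraction) * n * NODE_COST

# A storage-heavy workload: 200 TB of data but only 40 cores of compute.
demand = {"compute_cores": 40, "storage_tb": 200, "network_gbps": 50}
print(nodes_needed(demand))         # 10 nodes, driven by storage alone
print(hyperconverged_tax(demand))   # dollars spent on stranded capacity
```

With these assumed numbers, storage demand alone forces ten nodes, leaving most of the purchased compute and network capacity stranded.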

Should You Avoid Hyperconvergence?

One way to avoid the hyperconverged tax is to stay with the status quo: continue to architect virtual infrastructures that keep the three resources (compute, network, and storage) separate.

Most virtual infrastructures manage two network paths: an east-west path for communications between servers and a north-south path between servers and storage.

The problem with the traditional architecture is that it is more complicated and doesn't fully leverage the capabilities of a virtualized infrastructure. Most virtual architectures can't consume all the available compute, even with high virtual-machine-to-physical-server ratios. Running some storage services, which the traditional architecture avoids, on that spare compute can make sense.

But because it does not distribute storage services, the traditional architecture tends to overwork the north-south network path, and storage I/O operations bottleneck at the centralized storage system.

Should You Grow Out of Hyperconvergence?

For most data centers, hyperconverged resource efficiency is not an issue in the early days of an implementation; it only becomes a problem as the environment scales. Another way to avoid the hyperconverged tax is to select a hyperconverged solution that can, as the environment scales, become a bare-metal, dedicated solution. These systems allow the addition of nodes made up mostly of one type of resource: compute only or storage only.

The problem with growing out of a hyperconverged solution is that many of the challenges remain. The traffic is still all east-west, so networking at scale remains a challenge. And there are still inefficiencies if the expansion of storage is capacity driven instead of performance driven. Since most hyperconverged architectures distribute data across all (or many) nodes in the cluster, placing one high-capacity node in a cluster of smaller-capacity nodes doesn't allow for balanced data distribution. The organization would need many high-capacity nodes before resource allocation balances out.
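The imbalance described above can be shown with a small sketch. The capacities are assumed numbers, and the model assumes the simplest case of even striping, where every node stores the same amount of data, so the smallest node caps how full each node can get.

```python
# Illustrative sketch (assumed capacities): when a cluster stripes data
# evenly across nodes, one large node cannot be filled beyond what its
# smaller peers allow.

def usable_capacity(node_capacities_tb):
    """Under even striping, every node holds the same amount of data,
    so usable capacity = smallest node * node count."""
    return min(node_capacities_tb) * len(node_capacities_tb)

small_cluster = [10, 10, 10, 10]        # four 10 TB nodes
with_big_node = small_cluster + [40]    # add one 40 TB node

print(sum(with_big_node))               # 80 TB raw
print(usable_capacity(with_big_node))   # only 50 TB usable
```

With these assumptions, 30 of the new node's 40 TB sit stranded until enough large nodes join the cluster to rebalance the distribution.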

How About Open-Converged?

There is an alternative: the Open-Converged Architecture. In our on-demand webinar we discuss the problems facing converged architectures and introduce attendees to the concept of open convergence. The problem with all the other forms of hyperconverged architecture, and even with legacy architectures, is how they handle the storage workload.

All virtual infrastructures have a compute load and an east-west traffic load. What makes hyperconvergence unique, and what eventually becomes its Achilles' heel, is that it adds storage I/O to the compute and network load.

Open convergence solves this problem by distributing the storage workload. Each node in the cluster has internal flash that it uses for all active data. All reads are local to the node on which the requesting virtual machine runs. Not only does the local flash provide a significant performance boost, it also eliminates a significant amount of both east-west and north-south network traffic. Internal server SSDs are also significantly less expensive than SSDs in a shared storage array. And open convergence requires no spine-leaf network redesigns.

Finally, data is also secure and protected. All writes go both to the internal flash in the node and to a centralized, durable data repository. This repository is fully protected and acts as the central recovery point should one or more hosts fail.
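The data path described above can be sketched in a few lines. This is a minimal model under the assumption of a write-through design (every write lands on local flash and in the durable repository before it is acknowledged); the class and function names are hypothetical, not any vendor's API.

```python
# Minimal sketch of the open-converged data path: dual writes for
# protection, local reads for performance. All names are illustrative.

class Node:
    def __init__(self):
        self.local_flash = {}      # fast tier for active data

class Repository:
    def __init__(self):
        self.durable_store = {}    # protected central copy

def write(node, repo, key, value):
    node.local_flash[key] = value      # local copy serves future reads
    repo.durable_store[key] = value    # durable copy survives host loss
    return "ack"                       # acknowledge after both copies land

def read(node, repo, key):
    # Reads are served from local flash; the repository is the fallback
    # after a host failure loses the local copy.
    return node.local_flash.get(key, repo.durable_store.get(key))

n, r = Node(), Repository()
write(n, r, "vm-disk-block-7", b"data")
n.local_flash.clear()                  # simulate a host failure
print(read(n, r, "vm-disk-block-7"))   # data recovered from repository
```

The point of the sketch is the separation of duties: the node's flash handles performance, the repository handles capacity and protection, so each can be sized independently.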

The result is that nodes are fully utilized for their compute and high-performance I/O potential. They carry no storage-capacity or data-protection responsibilities; the centralized storage repository handles those. You can add compute or storage independently and incrementally: no more hyperconverged tax.


Twelve years ago George Crump founded Storage Switzerland with one simple goal: to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and a heavily sought-after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS, SAN, virtualization, cloud, and enterprise flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection.