What agencies need to know about hyper-converged infrastructure

By Kurt Marko

Jun 22, 2016

Even in the fast-changing world of IT, there’s still truth to the axiom that everything old is new again. Styles of IT infrastructure and application design patterns are cyclical, especially when it comes to the trade-off between a few large, integrated and centralized systems versus dozens of distributed nodes.


In today’s parlance, it’s the dichotomy between scale-up and scale-out system architectures. In addition, the popularity of warehouse-scale cloud services such as Amazon Web Services, Microsoft Azure and Google Cloud has given rise to hyperscale distributed enterprise systems that mimic those services' scale-out design philosophy.

Mainframes and, more recently, so-called converged infrastructure systems -- think blade servers or pre-integrated rack-scale systems -- are classic examples of monolithic, centralized, scale-up designs that are able to handle a large number of workloads from many different applications. They have long appealed to inherently conservative IT organizations by being reliable, secure, supportable and relatively easy to manage.

However, they are expensive, inflexible, ill-suited to modern web and mobile app designs and nearly impossible to scale without an expensive capital investment and lengthy upgrade project.

With the rise of cloud services, the flaws in converged infrastructure have become glaring, leading many IT organizations to look for a more cloud-like way to deploy internal systems yet maintain central manageability and bulletproof enterprise reliability. Enter hyper-converged infrastructure (HCI), the industry’s response to the need for a scalable, cost-effective, adaptable and easily provisioned data center architecture.

HCI is a concept that’s still evolving from marketing buzzword to term of art, so there isn't yet a canonical definition. Essentially, it's a mix of standard servers and clever software that meshes a number of largely identical nodes into a unified pool of computing and storage resources.

Those Lego-like building blocks typically have the following attributes:

Very dense x86 servers with front-mounted hot-swap disks. Many designs incorporate multiple server nodes -- each with one or two CPU sockets -- in a single chassis sharing redundant power supplies and a common storage tray.

Local storage, either all flash (usually solid-state drives, or SSDs) or a mix of flash and hard disk drives, allocated among all the computing nodes in a chassis.

A server virtualization (virtual machine, or VM) and storage stack that allows carving each node into logical pieces of computing, RAM and storage suitable for various workloads.

A distributed resource allocation and management system that allows workloads to be easily, if not automatically, spread across multiple nodes in a hyper-converged cluster. The software stack is centrally managed, allowing dozens to hundreds of nodes to be configured, controlled and monitored from a single administrative console. As new nodes are added to an HCI cluster, they are automatically detected, and computing and storage resources are provisioned and made ready for workload deployments.

Integrated hardware and software sold by a single vendor as an appliance, typically with one to four nodes per box.

Virtual network overlays that can be used to automate network configuration and security for workloads. This approach is particularly useful as the infrastructure grows and workloads are deployed or migrate across dozens of nodes and between data centers.
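The pooling behavior described above can be illustrated with a toy sketch. This is not any vendor's API -- the `Node` and `Cluster` classes and all their fields are hypothetical -- but it shows the core idea: largely identical nodes contribute local compute and storage to a shared pool, and scaling out is simply adding another node.

```python
# Hypothetical sketch of HCI resource pooling; class and field names
# are illustrative, not drawn from any real product.
from dataclasses import dataclass, field


@dataclass
class Node:
    """One appliance node: local CPUs, RAM and disks."""
    name: str
    cpus: int
    ram_gb: int
    storage_tb: float


@dataclass
class Cluster:
    """Meshes nodes into a unified pool of compute and storage."""
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # In a real HCI stack, a new node is auto-detected and its
        # resources join the pool with no separate array to configure.
        self.nodes.append(node)

    @property
    def pooled_cpus(self) -> int:
        return sum(n.cpus for n in self.nodes)

    @property
    def pooled_storage_tb(self) -> float:
        return sum(n.storage_tb for n in self.nodes)


cluster = Cluster()
for i in range(3):
    cluster.add_node(Node(f"node-{i}", cpus=32, ram_gb=512, storage_tb=20.0))

print(cluster.pooled_cpus)        # 96
print(cluster.pooled_storage_tb)  # 60.0

# Scaling out is just racking another box; the pool grows in place.
cluster.add_node(Node("node-3", cpus=32, ram_gb=512, storage_tb=20.0))
print(cluster.pooled_storage_tb)  # 80.0
```

Contrast this with a scale-up design, where growing capacity means replacing or upgrading a single large system rather than appending a node.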

Given that HCI aggregates storage across multiple nodes into larger virtual pools, it's often used as an alternative to monolithic storage arrays on a storage-area network (SAN). Indeed, several of the leading HCI vendors got their start in storage by providing scale-out building blocks that could be joined into a virtual array for both primary storage (e.g., databases and shared filesystems) and backup/archiving.

About the Author

Kurt Marko is a technology consultant and writer based in Boise, Idaho.