The State of Storage Architectures

Storage architectures have evolved from legacy scale-up designs to scale-out architectures, and scale-out has in turn evolved into hyperconverged architectures. The data center is also evolving, of course, but at an even faster pace. The problem is that all this storage evolution has not always met the needs of the evolving data center.

The modern data center needs a storage architecture that can deliver performance guarantees while doing so as efficiently as possible, without creating more complexity.

Architecture 1 – Scale-up Storage

Scale-up storage architectures are the oldest of the three and are built around a fixed number of storage controllers, typically two for redundancy. Shelves filled with hard disk or flash drives are then attached to those controllers. The number of shelves a controller can support is determined by its computing power and IO capabilities.

Workload performance expectation is also a factor in how far the architecture can scale. If the workload is more capacity-driven than performance-driven, overloading the controller with lots of capacity won't matter. But if performance is the concern, then each addition of capacity consumes controller resources and affects how the connecting applications respond.
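To make that tradeoff concrete, here is a minimal sketch of how a fixed controller's IOPS budget dilutes as shelves are added. The numbers are illustrative assumptions, not vendor specifications:

```python
# Illustrative sketch: a scale-up controller pair has a fixed IOPS
# budget; every shelf of capacity added shares that same budget.
CONTROLLER_IOPS_BUDGET = 500_000   # assumed aggregate controller limit
SHELF_CAPACITY_TB = 100            # assumed usable capacity per shelf

def iops_per_tb(shelves: int) -> float:
    """Performance density falls as capacity scales under a fixed controller."""
    total_capacity_tb = shelves * SHELF_CAPACITY_TB
    return CONTROLLER_IOPS_BUDGET / total_capacity_tb

for shelves in (1, 4, 8):
    print(f"{shelves} shelves: {iops_per_tb(shelves):,.0f} IOPS per TB")
```

The point of the sketch: the controller's budget is constant, so performance density per terabyte falls in direct proportion to capacity growth.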

An advantage of scale-up solutions is that they are easy to understand; the design has been around almost since the dawn of the data center. The problem with scale-up architectures is that the organization typically has to overbuy on initial performance so that the system can satisfy future growth. That means that, for a period of time, the organization is paying for performance it does not need. Then, at some point, enough workloads are added that the system hits either a performance or a capacity limit, and IT buys a new system.

Scale-up architectures are ideal for solving a point problem, especially if the workload isn't rapidly demanding additional performance or capacity. They are simple, relatively cost-effective and, thanks to flash storage, far more scalable than they used to be.

Flash enables the scale-up architecture to offer a tremendous amount of performance for a considerable period of time. Also, because flash enables a low-impact application of deduplication and compression, these systems can often meet the organization’s capacity requirements.
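A quick sketch of the capacity math behind that claim. The data reduction ratio below is an assumption for illustration; real-world ratios vary widely by workload:

```python
# Sketch: how deduplication and compression stretch raw flash capacity.
RAW_FLASH_TB = 50
DATA_REDUCTION_RATIO = 4.0   # assumed combined dedup + compression ratio

effective_tb = RAW_FLASH_TB * DATA_REDUCTION_RATIO
print(f"{RAW_FLASH_TB} TB raw = {effective_tb:.0f} TB effective")
```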

Architecture 2 – Scale-out architectures

Scale-out architectures are designed to be the cure-all for scale-up's limits. These architectures are built from servers, called nodes; each node supplies processing power, and storage capacity is installed inside it. The nodes are clustered together and their storage is aggregated into a single pool. Adding a node to the cluster automatically gives the cluster more capacity and compute performance.

The nodes within a scale-out cluster are typically less powerful than the single controller of a scale-up architecture, but the aggregation of the nodes eventually delivers greater performance and capacity. The problem is that scale-out architectures typically need a quorum of nodes to start, often three.

For some data centers, or at least for initial projects, that three-node quorum may deliver more performance and capacity than the organization needs for a long time. In some cases the organization will never actually need to scale the scale-out architecture.

Another problem for scale-out storage architectures is that they typically need many parallel workloads to reach their potential; they often have limited IOPS capability per volume. For example, if the organization has one application that needs one million IOPS, then a scale-up system with the appropriate processing power is the better fit. But if the organization has five workloads that each need one hundred thousand IOPS, and expects to add five more similar workloads over the next few years, then a scale-out architecture is the better fit.
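The fit decision above can be sketched as a simple check. The ceilings are hypothetical numbers chosen only to illustrate the reasoning:

```python
# Hypothetical sizing check: scale-up suits one very hot volume; scale-out
# suits many moderately hot volumes. Both limits below are assumptions.
SCALE_OUT_PER_VOLUME_MAX = 250_000   # assumed per-volume ceiling on scale-out

def fits_scale_out(workload_iops: list) -> bool:
    """Scale-out fits when no single volume exceeds its per-volume ceiling."""
    return all(w <= SCALE_OUT_PER_VOLUME_MAX for w in workload_iops)

print(fits_scale_out([1_000_000]))     # one 1M-IOPS app: False
print(fits_scale_out([100_000] * 10))  # ten 100K-IOPS apps: True
```

The aggregate IOPS is the same in both cases; what differs is whether any single volume exceeds the per-volume limit.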

Scale-Out Variants – Scale Right and Hyperconverged

Scale-Right

To overcome some of the challenges of the scale-out architecture, some storage vendors allow their systems to be deployed as "scale-right architectures." These architectures allow the organization to start with a single node in a scale-up design and then scale out to many nodes when needed. This lets the organization start small for a particular project and expand the system as IT adds more workloads.

Hyperconverged Architectures

Hyperconverged architectures are a form of scale-out storage that, instead of requiring dedicated storage nodes, leverages the compute of an existing hypervisor cluster such as VMware, Hyper-V or a Linux-based hypervisor. The storage software is virtualized and loads as a virtual appliance within the hypervisor cluster.

To some extent, hyperconverged architectures scale automatically since compute, storage and networking components are all added at the same time. For initial implementations this may be ideal, but as the hyperconverged cluster scales problems arise.

First, most data centers don't scale all three vectors (compute, storage, networking) at the same pace. They buy more nodes to meet a capacity demand or to meet a compute demand, but rarely both at once. The result is that they overbuy on one of the resources.
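A sketch of that overbuy problem: hyperconverged nodes add compute and capacity in a fixed ratio, so meeting the larger demand strands the other resource. The per-node numbers are assumptions for illustration:

```python
import math

# Assumed resources delivered by each hyperconverged node.
NODE_CORES = 32
NODE_CAPACITY_TB = 20

def nodes_needed(cores_needed: int, capacity_needed_tb: int) -> int:
    # The larger of the two demands dictates the node count.
    return max(math.ceil(cores_needed / NODE_CORES),
               math.ceil(capacity_needed_tb / NODE_CAPACITY_TB))

# Capacity-driven growth: the cluster needs 200 TB but only 64 cores.
n = nodes_needed(64, 200)
stranded_cores = n * NODE_CORES - 64
print(n, "nodes,", stranded_cores, "cores overbought")
```

In this example the capacity demand forces ten nodes, leaving 256 cores the organization paid for but did not need.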

Second, as the environment scales there are limits on how IT can guarantee specific levels of performance to certain applications. In hyperconverged architectures everything is shared, so ensuring that one application gets a specific number of IOPS is difficult. The only way to make sure performance expectations are met is to build the environment so that all workloads get the same level of performance, essentially overbuying all three resources.

The Network Challenge of Scale-Out and Scale-Up

Both scale-up and scale-out architectures share a problem: the quality of the network that interconnects their nodes is critical. Often called east-west traffic, this server-to-server communication keeps all the nodes in sync and ensures that data protection levels are met. Hyperconverged architectures exacerbate the problem because the same network also carries compute and application traffic.

The All-Flash Challenge to Scale-Out Architectures

To save cost, most scale-out storage and hyperconverged systems interconnect nodes via basic IP communication. This low level of sophistication was acceptable when scale-out systems were hard disk-based: the latency of the hard disk overshadowed the latency of the network. But in a flash-based system there is no media latency for the network to hide behind. All-flash scale-out systems also tend to scale further, which means more nodes to communicate with, and they tend to support more performance-critical workloads.
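A rough latency sketch shows why the network can hide behind a hard disk but not behind flash. All latencies below are order-of-magnitude assumptions, not measurements:

```python
# Sketch: the network's share of total access latency, HDD vs flash.
NETWORK_US = 100    # assumed east-west round trip, microseconds
HDD_US = 5_000      # assumed hard disk access latency
FLASH_US = 100      # assumed flash access latency

for name, media_us in (("hard disk", HDD_US), ("flash", FLASH_US)):
    share = NETWORK_US / (NETWORK_US + media_us) * 100
    print(f"{name}: network is {share:.0f}% of total latency")
```

With a hard disk the network is a rounding error; with flash it can account for half of the total latency, which is why the interconnect becomes the bottleneck.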

Lastly, all-flash scale-out architectures tend to be most popular among service providers and very large enterprises. In both use cases there is an added demand for specific performance guarantees, also known as quality of service (QoS).

Architecture 3.0

In our next entry we will discuss the next all-flash architecture, architecture 3.0. This architecture will leverage a high-performance, deterministic network infrastructure to interconnect nodes; NVMe over Fabrics may be the only network type able to meet that requirement. The architecture will leverage this advanced interconnect to also provide the ultimate performance guarantee: virtual, private storage arrays that can be dedicated to one server or workload.

It will also combine the best aspects of scale-up and scale-out architectures. The system can start as a single-node, scale-up system and then add nodes to scale performance and capacity. And unlike legacy scale-up architectures, it will not have per-node or per-volume performance limitations.

StorageSwiss Take

Architecture is everything in all-flash. Flash has raised the media layer of the storage architecture from the slowest component in the data center (the hard disk drive) to the fastest. Everything surrounding the flash media, which is to say the architecture, is now critical to tapping into all the capabilities flash can provide. In the rest of this series we will go in depth to explain these architectural aspects.


Twelve years ago George Crump founded Storage Switzerland with one simple goal; to educate IT professionals about all aspects of data center storage. He is the primary contributor to Storage Switzerland and is a heavily sought after public speaker. With over 25 years of experience designing storage solutions for data centers across the US, he has seen the birth of such technologies as RAID, NAS and SAN, Virtualization, Cloud and Enterprise Flash. Prior to founding Storage Switzerland he was CTO at one of the nation's largest storage integrators where he was in charge of technology testing, integration and product selection.