Why Flash Storage Benchmark Testing Is Not Hype

Someday, your company might need the "oomph" of flash storage.

I've been briefed on or directly involved with a few lab tests in the past few months that demonstrated the performance of flash storage systems. Although the performance achieved is generally well beyond what most organizations need today, there is value in running these tests and sharing their results. Interestingly, not everyone agrees.

The most common complaint is that no one needs this kind of performance. That is not exactly accurate. Most might not need it, but some do -- right now. High-frequency trading (HFT) and high-performance computing (HPC) are two excellent examples.

Also, as virtual server and virtual desktop environments become denser, with more virtual machines packed onto fewer hosts, each host demands more storage I/O, and overall performance requirements climb. Finally, it is important to note that most of the multi-million-IOPS results are measured on sequential read I/O, not random read/write I/O. On random tests, performance often drops to the 500K IOPS range, a level we are starting to see required in some heavily virtualized environments.

The adoption of virtualized servers and desktops as well as clustered servers is a significant change in the way we measure or should measure IOPS. No longer are we looking to meet the demand of a single application with a dedicated storage device. We are now looking to meet the demand of dozens of hosts, all driving traffic to a single or clustered set of storage devices. The combined IOPS of the data center is now a critical factor.
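To make that aggregation concrete, here is a back-of-the-envelope sketch of how per-VM demand rolls up into the combined IOPS a shared storage system must absorb. Every figure below (host count, VM density, per-VM IOPS, peak multiplier) is an illustrative assumption, not a measurement from the tests discussed here:

```python
# Back-of-the-envelope aggregate IOPS for a virtualized data center.
# All input figures are illustrative assumptions, not measured values.

hosts = 24                # virtualization hosts sharing one storage system
vms_per_host = 40         # VM density keeps climbing as hosts get denser
avg_iops_per_vm = 50      # assumed steady-state per-VM storage demand
peak_multiplier = 4       # boot storms, backup windows, scans, etc.

# Steady-state demand is simply the product of the three factors above.
steady_state = hosts * vms_per_host * avg_iops_per_vm

# Peak demand applies a burst multiplier to the steady state.
peak = steady_state * peak_multiplier

print(f"Steady-state demand: {steady_state:,} IOPS")  # 48,000 IOPS
print(f"Peak demand:         {peak:,} IOPS")          # 192,000 IOPS
```

Even with these modest per-VM assumptions, a single shared array sees demand that would once have been attributed to a whole data center, which is why the combined number, not any one application's number, is what matters.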

We also now have storage systems capable of supporting a mixture of each of these workloads: virtualized servers and desktops as well as clustered applications and scale-up applications. In the past we had to allocate separate storage to each, so large, combined IOPS numbers were not needed. Now we can support mixed workloads on a single system, which simplifies storage management but requires storage performance.

There is also value in what we learn about storage system and infrastructure design from these tests. For example, in a recent test we learned the value of having multiple PCIe hubs to move data to the storage infrastructure. We also learned the advantage of using Gen 5 Fibre Channel (16 Gbps FC) instead of 8 Gbps. These lessons apply in any performance-constrained situation and make the case for investing in advanced servers and networks now instead of later.
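The Fibre Channel lesson comes down to simple link arithmetic. A minimal sketch, assuming the nominal per-direction data rates commonly quoted for the two generations (roughly 800 MB/s for 8GFC with 8b/10b encoding, roughly 1,600 MB/s for 16GFC with 64b/66b encoding) and a hypothetical 500 GB working set:

```python
# Rough single-link throughput comparison: 8G vs Gen 5 (16G) Fibre Channel.
# The ~800 and ~1600 MB/s figures are assumed nominal data rates per
# direction; real-world throughput depends on the fabric and workload.

rates_mb_s = {"8GFC": 800, "16GFC": 1600}
dataset_gb = 500  # hypothetical working set to move (illustrative)

for link, rate in rates_mb_s.items():
    seconds = dataset_gb * 1000 / rate  # GB -> MB, then divide by MB/s
    print(f"{link}: ~{seconds:.0f} s to move {dataset_gb} GB on one link")
```

The point is not the exact numbers but the ratio: a Gen 5 link moves the same data in half the time, so a storage system fast enough to saturate 8G links simply cannot show its full performance without the newer fabric.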

Finally, there is also the reality that eventually most data centers will need this performance. A few years ago, tests delivering 100,000 IOPS were ridiculed as unrealistic; now 100,000 IOPS is a common request for data centers.

You probably don't need 1 million IOPS today, but you may very well soon. The good news is that the work has already been done; you can apply that learning to today's infrastructure and know the technology will be ready for you in the future.

The key is truly understanding the performance requirements of the workloads themselves and how they interact with the storage infrastructure. As George points out, IOPS alone is a poor indicator of performance, particularly for file-based environments, which usually have very heavy metadata performance constraints; IOPS numbers don't take metadata performance into account. We at SwiftTest (www.swifttest.com) are addressing this directly with workload modeling and performance validation solutions that reflect your actual workloads. With this insight into your production applications, you can determine which storage systems and configurations are truly best for your environment before putting them into production.