Mixed Media: Why SSD vs. HDD Misses the Mark

Solid-state storage represents a tremendous growth opportunity for the data center, particularly now that scale-out infrastructure, Web-facing applications and social media are placing a premium on speed and agility rather than raw capacity.

But some still compare apples to oranges when it comes to storage infrastructure, which inevitably leads to the perennial argument of solid state vs. traditional media. The long-running narrative has been that solid-state drives (SSDs) and hard-disk drives (HDDs) will battle it out for the heart and soul of the data center, using either high-speed/high-performance or high-capacity/low-cost capabilities as their chief weapons.

However, as ExtremeTech’s Joel Hruska noted recently, the performance gaps that do exist between the two technologies depend very much on the volume and characteristics of the data loads they are handling. Lower volumes and fewer concurrent requests, for example, play to HDD’s strengths, but as activity increases, so does latency. And at a time when a few milliseconds can mean the loss of millions of dollars or thousands of potential customers, the stronger performance of solid state can quickly offset its higher up-front costs.
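The queuing effect behind Hruska's point can be sketched with a simple M/M/1 model: response time stays flat at low load but climbs sharply as the arrival rate approaches a device's service rate. The IOPS figures below are hypothetical round numbers for illustration, not measured drive specifications.

```python
# Illustrative M/M/1 queuing sketch: mean response time grows sharply as
# the request rate approaches a device's service capacity.

def avg_response_ms(arrival_rate, service_rate):
    """Mean response time in ms for an M/M/1 queue; None once saturated."""
    if arrival_rate >= service_rate:
        return None  # the queue grows without bound
    return 1000.0 / (service_rate - arrival_rate)

HDD_IOPS = 200      # hypothetical: random IOPS for a single spinning disk
SSD_IOPS = 50_000   # hypothetical: mid-range SATA SSD

for load in (50, 150, 190):  # requests per second
    hdd = avg_response_ms(load, HDD_IOPS)
    ssd = avg_response_ms(load, SSD_IOPS)
    print(f"{load:>4} req/s  HDD: {hdd:7.2f} ms   SSD: {ssd:7.4f} ms")
```

At 50 req/s the disk keeps up comfortably; at 190 req/s its average latency has ballooned while the SSD barely notices, which is the crossover the article describes.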

This contrast in operating styles is the primary reason traditional storage firms are starting to embrace solid state. EMC, for example, recently acquired start-up DSSD in a bid to tap into the growing demand for large-scale Flash arrays. The full strategy is something of a mystery given that DSSD is so new that it doesn’t even have a product in the channel yet and has kept details of its underlying technology very close to the vest. However, backers include Arista Networks’ Andy Bechtolsheim and Sun Microsystems’ Bill Moore, and expectations are high that the eventual system will feature both hardware and software management tools optimized for Big Data applications.

Still, many champions of solid state say plainly that their goal is the eventual replacement of hard disk in the data center. Pure Storage’s Vaughan Stewart acknowledges that too many writes can wear out Flash quickly, which is why many organizations use it as a high-speed cache. However, new software techniques like real-time data reduction are improving the write situation, and many applications are becoming Flash-optimized by removing code designed to anticipate the speed restrictions of disk.
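One way real-time data reduction eases the wear problem is inline deduplication: identical blocks are fingerprinted and written to the media only once. The sketch below is a toy illustration of the idea; the `DedupWriter` class, 4 KB block size, and SHA-256 fingerprinting are assumptions for the example, not a description of any vendor's implementation.

```python
# Minimal inline-deduplication sketch: only blocks with a previously
# unseen fingerprint are counted as physical flash writes.
import hashlib

class DedupWriter:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.index = {}        # fingerprint -> physical block number
        self.flash_writes = 0  # blocks that actually reached the media

    def write(self, data):
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.index:
                self.index[fp] = self.flash_writes
                self.flash_writes += 1  # only new blocks wear the flash

w = DedupWriter()
w.write(b"A" * 4096 * 8)  # eight identical blocks
w.write(b"B" * 4096 * 2)  # two identical blocks of a second pattern
print(w.flash_writes)     # 2 physical writes instead of 10
```

Ten logical blocks arrive but only two distinct ones are written, so the write amplification that shortens Flash life drops accordingly.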

And when it comes to market growth, there is no denying that solid state still has room to run, says Objective Analysis’ Jim Handy. Non-SATA solutions in particular—including SAS, PCIe, DIMM and Fibre Channel—are poised for 30 percent-plus growth over the next several years, at least when it comes to unit shipments. Price fluctuations in memory chips will influence profit margins, to be sure, but with the leading driver of future demand being the data center, it seems that scale-out enterprise applications will look increasingly toward Flash-based storage infrastructure, particularly in converged or modular settings.
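For a sense of scale, "30 percent-plus" annual unit growth compounds quickly. The starting base below is purely hypothetical; only the 30 percent rate comes from Handy's projection.

```python
# Hedged compounding sketch: the year-0 base figure is made up for
# illustration; 30 percent is the projected annual unit-growth rate.

def project_units(base, rate, years):
    """Unit shipments after `years` years of compound growth."""
    return base * (1 + rate) ** years

base = 1_000_000  # hypothetical year-0 unit shipments
for years in range(1, 5):
    print(years, round(project_units(base, 0.30, years)))
```

At that rate, shipments more than double in three years, which is why chip-price swings affect margins far more than they affect the demand trajectory.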

But if this is the case, does the fact that SSDs and HDDs target different workloads amount to a distinction without a difference? Solid state is not a direct replacement for hard disk, but is the nature of data environments changing to the point that HDDs won’t be able to keep up much longer?

Perhaps, but let’s not overlook the fact that much of the data load in the enterprise will continue to thrive on local, high-capacity storage infrastructure. By supporting a mixed storage environment, organizations will be better able to tailor storage capabilities to the job at hand.
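In practice, a mixed environment comes down to a placement policy: latency-sensitive or frequently accessed data lands on Flash, while cold bulk data stays on high-capacity disk. The routing function below is a toy sketch; the size and access-frequency thresholds are invented for illustration.

```python
# Toy tiering policy for a mixed storage environment. The thresholds
# (100 accesses/day, 64 KB) are hypothetical illustration values.

def choose_tier(size_bytes, accesses_per_day):
    """Route an object to the tier that suits its workload."""
    if accesses_per_day >= 100 or size_bytes <= 64 * 1024:
        return "flash"  # hot or small: pay for speed
    return "hdd"        # cold bulk data: pay for capacity

print(choose_tier(8 * 1024, 500))    # hot database page -> flash
print(choose_tier(10 * 1024**3, 2))  # rarely read archive -> hdd
```

The point is not the specific thresholds but that the decision is per-workload, which is exactly what a single-media data center cannot offer.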