Even for mainstream users, it's easy to tell the difference between a PC that has its OS installed on a solid-state drive and one running on a mechanical hard drive. With an SSD, the OS starts up faster, and apps and multi-tasking won't bring certain processes to an absolute crawl. With SSD pricing where it is right now, it's easy to justify including one in a brand-new build (even a modest one) for the obvious speed boost.

If we can see benefits as end-users, imagine the benefit that flash-based storage offers a company like Google in its data centers. Google appears to have been one of the first to put SSDs into production at a massive scale, and the research results it is now sharing cover six years of SSD use at one of its data centers. Looking over the results led to some expected and some unexpected findings.

One of the biggest discoveries is that SLC-based SSDs are not necessarily more reliable than MLC-based solid state drives. This is surprising, as SLC SSDs carry a price premium with the promise of higher durability (specifically in write operations) as one of their selling points. Even without this research, it's not hard to see why SLC was expected to be more durable: it stores less data per cell than MLC, so each cell isn't re-written quite as often. However, when we're talking about the data center, we're talking about drives that are utilized 24/7. A battered SSD is a battered SSD - errors are going to creep in sooner or later.

It's the way SSDs fail that's a concern. The research also showed that a drive put to use for four years has a 20 to 63 percent chance of developing at least one "uncorrectable error." That sounds scary, and in some ways, it could be. However, four years in the data center is not as kind to an SSD as four years in your desktop would be -- even for an enthusiast power user. As consumers, our chances of seeing an uncorrectable error are fairly low, and given the growing densities and dropping prices, we're all likely to upgrade to another SSD before our current drive ever causes such issues. However, if you're powering a web server or some other mission-critical installation with solid state storage, hardware refreshes should be scheduled at regular intervals.

Other results point to the RBER value (raw bit error rate) being a poor predictor of failure. It was found that there was no correlation between a drive's RBER and the number of uncorrectable errors it goes on to develop. It's noted that both RBER and the number of uncorrectable errors increase with P/E cycles, and that the rate of growth is linear, not exponential.
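For context, RBER is conventionally defined as the number of corrupted bits observed divided by the total number of bits read; these are errors the drive's ECC normally corrects internally, which is distinct from an uncorrectable error, where ECC fails. A minimal sketch of the metric (function name and figures are illustrative, not from the report):

```python
def raw_bit_error_rate(corrupted_bits: int, bits_read: int) -> float:
    """Raw bit error rate: corrupted bits per bit read.

    Counts errors corrected by the drive's ECC; an uncorrectable
    error is the separate case where ECC cannot recover the data.
    """
    if bits_read <= 0:
        raise ValueError("bits_read must be positive")
    return corrupted_bits / bits_read

# Example: 3 corrupted bits seen while reading 1 GiB (2**30 bytes)
rber = raw_bit_error_rate(3, 8 * 2**30)
print(f"{rber:.2e}")  # a rate on the order of 1e-10
```

The study's point is that even a drive with a consistently low value of this metric can still surprise you with an uncorrectable error, so RBER alone isn't a useful health indicator.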

The fact that SSDs are prone to errors at all might scare some away, but this research finds that flash-based drives are in fact much more reliable than spinning disks, with the report noting, "Comparing with traditional hard disk drives, flash drives have a significantly lower replacement rate in the field, however, they have a higher rate of uncorrectable errors."

It will come as a surprise to no one that there are trade-offs between SSDs and mechanical drives, but ultimately, the benefits SSDs offer can often far outweigh those of legacy HDD storage technologies. SSDs use much less power, take up less space, run cooler, and are an order of magnitude faster than spinning disks. It's clear that SSDs are here to stay in the data center and in fact will only become more prevalent. The Google-backed report can be found here.