Final Words

Our data centre survey exclusively covers Intel SSD failure rates because those are the drives that big businesses currently trust the most. Given the challenges in determining SSD reliability at a high level, we're not trying to take a magnifying glass to the market and single out which vendor sells the most reliable solid-state drives. But brand does matter.

Google's research team writes the following on hard drives: "Failure rates are known to be highly correlated with drive models, manufacturers, and vintages. Our results do not contradict this fact. Most age-related results are impacted by drive vintages."

The experiences reported by data centres imply that the same holds true for SSDs. One executive we spoke with off the record said that while he thought prices on OCZ's Vertex 2 were great, its reliability was awful. Late last year, his company was trying out some new gear and cracked open a case of 200 Vertex 2 Pros, only to find about 20 of them DOA. And he isn't the first person to pass on a story like that.

What Does This Really Mean For SSDs?

Let's put everything we've explored into some rational perspective. Here is what we know about hard drives from the two cited studies.

MTBF tells you nothing about reliability.

The annualized failure rate (AFR) is higher than what manufacturers claim.

Drives do not have a tendency to fail during the first year of use. Failure rates steadily increase with age.

SMART is not a reliable predictor of impending drive failure.

The failure rates of “enterprise” and “consumer” drives are similar.

The failure of one drive in an array increases the likelihood of another drive failure.

Temperature has a minor effect on failure rates.
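The first point in that list deserves a number. Under the constant-failure-rate assumption that underlies vendor MTBF figures, an MTBF can be converted into the annualized failure rate it implies; the gap between that figure and observed AFRs is what the studies highlight. A minimal sketch (the function name and the 1.2 million hour example are our own illustration, not a specific vendor's spec):

```python
import math

def afr_from_mtbf(mtbf_hours, hours_per_year=8760):
    """Annualized failure rate implied by an MTBF figure,
    assuming a constant (exponential) failure rate -- the
    same assumption behind the vendor-quoted number."""
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# A quoted 1.2 million hour MTBF implies well under 1% AFR,
# far below the multi-percent rates the field studies observed.
print(f"{afr_from_mtbf(1_200_000):.2%}")  # roughly 0.73%
```

The studies cited above report real-world AFRs several times higher than this kind of back-of-the-envelope figure, which is exactly why MTBF tells you so little.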

Thanks to Softlayer's deployment of more than 5000 drives, we know that some of those points also apply to SSDs. As we saw in the published studies, hard drive failure rates are affected by controllers, firmware, and interfaces (SAS versus SATA). If it's true that write endurance doesn't play a role in random drive failures and that vendors use comparable-quality NAND in MLC- and SLC-based products, then we'd expect enterprise-class SSDs to be no more reliable than consumer offerings.

Higher Reliability Through Fewer Devices

Of course, enterprise needs centre on both reliability and performance. In order to push the highest I/O throughput in storage-bound applications using hard drives, IT professionals deploy arrays of short-stroked 15 000 RPM disks in RAID. Scaling up sometimes requires cabinets of servers loaded with mechanical storage. Given the superior random I/O characteristics of SSDs, a handful of drives can not only simplify that configuration, but also cut power and cooling requirements.

Fewer devices installed means fewer devices to fail, too. Since one solid-state drive replaces multiple hard drives, consolidation ends up benefiting the business adopting flash-based storage. If the swap were a 1:1 ratio, that argument wouldn't work. But at 1:4 or more, you're really cutting into the number of disks that would eventually fail, and that point can't be over-emphasized.
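The consolidation argument is easy to quantify. If each device has some per-device annualized failure rate, the chance of at least one failure grows quickly with device count. A quick sketch with hypothetical numbers (a 16-drive array replaced at 4:1, both device types pegged at a 3% AFR purely for illustration):

```python
def p_any_failure(n_devices, afr):
    """Probability that at least one of n devices fails within a
    year, given a per-device annualized failure rate. Assumes
    independent failures -- optimistic, since the studies note
    that one failure in an array raises the odds of another."""
    return 1 - (1 - afr) ** n_devices

# Hypothetical: 16 hard drives versus the 4 SSDs replacing them.
hdds = p_any_failure(16, 0.03)
ssds = p_any_failure(4, 0.03)
print(f"{hdds:.1%} vs {ssds:.1%}")  # the 4-device pool fails far less often
```

Even with identical per-device reliability, the smaller pool sees roughly a third as many failure events per year, which is the point being made above.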

From there, it's really up to you to be smart about the way you deploy storage in order to get the most value from solid-state and hard drives. Of course you can't entirely replace mechanical disks with SSDs; they're too expensive. So rather than trying to protect data from some of the issues currently affecting SSDs by creating redundant (expensive) arrays, just make sure the information exists in multiple places. As Robin Harris at StorageMojo writes, "Forget RAID, just replicate the data three times." Data redundancy with SSDs doesn't have to be high-cost. If you're in a medium-sized or large business, you really only need one copy of performance-sensitive information on flash that's continuously backed up by hard drives.

The idea that you can spend less money and still get a substantial performance increase should be very attractive. And it's not a new concept; Google has been doing this for years with its hard drive-based servers. Translating it to a world with SSDs yields extremely high I/O, high reliability, and data redundancy at the low cost of simple cluster storage file replication.

Bringing It Back To The Desktop

Ahem, sorry. We went off on an enterprise tangent there. Blame it on the data centres we've been talking to. When it comes to enthusiasts, we really can't make the assumption that an SSD is more reliable than a hard drive. If anything, the recent flurry of recalls and firmware bugs should be proof enough that write endurance isn't our biggest enemy in the battle to demonstrate the maturity of solid-state technology.

At the end of the day, a piece of hardware is a piece of hardware, and it'll have its own idiosyncrasies, regardless of whether it plays host to any moving parts. Why is the absence of moving parts immaterial to SSDs' overall reliability story? We took the question to the folks at the Center for Magnetic Recording Research. Don't let that name fool you; CMRR does a lot of solid-state research, and it's the hub for much of the exhaustive storage research done worldwide.

Dr. Gordon Hughes, one of the principal creators of S.M.A.R.T. and Secure Erase, points out that both the solid-state and hard drive industries are pushing the boundaries of their respective technologies. And when they do that, they're not trying to create the most reliable products. As Dr. Steve Swanson, who researches NAND, adds, "It's not like manufacturers make drives as reliable as they possibly can. They make them as reliable as economically feasible." The market will only bear a certain cost for any given component. So although NAND vendors could continue selling 50 nm flash in order to promise higher write endurance than memory etched at 3x or 25 nm, going back to paying £5 per gigabyte doesn't sound like any fun either.

Perhaps the most frustrating part of this challenging exploration is knowing that each vendor selling hard drives and SSDs alike has the data we'd all like to see on reliability. They build millions of devices each year (IDC says 11 million SSDs were sold in 2009) and track every return. No doubt, failure rates depend on quality assurance, shipping, and ultimately how a customer uses the product, which is out of the manufacturer's control. But even under the best of conditions, annualized hard drive failure rates typically top out around 3% by the fifth year. Suffice it to say, the researchers at CMRR are adamant that today's SSDs aren't an order of magnitude more reliable than hard drives.

Wrapping Up

Reliability is a sensitive subject, and we've spent many hours on the phone with multiple vendors and their customers trying to conduct our own research based on the SSDs that are currently being used en masse. The only definitive conclusion we can reach right now is that you should take any claim of reliability from an SSD vendor with a grain of salt.

Giving credit where it's due, many of the IT managers we interviewed reiterated that Intel's SLC-based SSDs are the standard by which others are measured. But according to Dr. Hughes, there's nothing to suggest that Intel's products are significantly more reliable than the best hard drive solutions. We don't have failure rates beyond two years of use for SSDs, so it's possible that this story will change. Should you be deterred from adopting a solid-state solution? So long as you protect your data through regular backups, which is imperative regardless of your preferred storage technology, we don't see any reason to shy away from SSDs. To the contrary, we're running them in all of our test beds and most of our personal workstations. Rather, our purpose here is to call into question the idea that SSDs are definitely more reliable than hard drives, given the limited evidence backing such a claim today.

Hard drives are well-documented in massive studies because they've been around for so long. We'll undoubtedly learn more about SSDs as time goes on. We leave a standing invitation to Intel, OCZ, Micron, Crucial, Kingston, Corsair, Mushkin, SandForce, and Marvell to provide us with internal data demonstrating reliability rates for a more comprehensive investigation.

A special thanks goes out to Softlayer, RackSpace, NetApp, CMRR, Los Alamos National Labs, Pittsburgh Supercomputing Center, San Diego Supercomputer Center, ZT Systems, and all of the unnamed data centres who responded to our calls for information. Some of the data that we have cannot be published due to confidentiality, but we appreciate everyone who took the time to chat with us on the subject.

Reader Comments

The 'drive completely dead, data unrecoverable' failure mode is not the worst; I can restore yesterday's image and lose, at most, a day's data (acceptable for my usage; obviously, tailor backup frequency to what's acceptable to you).

The worst is what happened to my last SSD. For weeks I thought the problems I was seeing were software issues: the occasional crash, the odd SxS error in the event log, a game failing Steam file validation, an old email showing half garbled. Eventually, I managed to diagnose the problem.

Old, untouched files on the SSD were being corrupted at a very low rate (a few bytes per GB, I'd estimate). A file could be written and verified after writing, but days later might fail a checksum test when read, without any error notification, SMART or otherwise, to indicate that the data was anything other than perfect.

Now that was a problem. Who knows when the last backup image without any corruption was made? How can you even tell? The vast majority of files will be fine, but some will be backed up corrupt, and may have been for some time. With much manual effort I eventually did recover everything important, but my new backup regime involves checksumming everything on the SSD weekly. If something has changed a file's data without changing its timestamp, this time I'm going to get some red flags!
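(Editor's note: the weekly audit the commenter describes is straightforward to script. A minimal sketch, assuming a simple hash-plus-mtime manifest of our own design; the function names and manifest format are illustrative, not the commenter's actual tooling:)

```python
import hashlib
import os

def scan(root):
    """Walk a directory tree and record (sha256, mtime) per file."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[path] = (digest, os.path.getmtime(path))
    return manifest

def silent_changes(old, new):
    """Files present in both scans whose hash differs but whose
    mtime is identical: data changed without a legitimate write,
    the signature of silent corruption described above."""
    return [path for path in old
            if path in new
            and old[path][0] != new[path][0]
            and old[path][1] == new[path][1]]
```

Run `scan()` weekly, persist the manifest, and alert on anything `silent_changes()` returns against the previous week's scan; a file that was rewritten legitimately carries a new mtime and is ignored.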

I can't say for certain that this failure mode is SSD specific, but it happened on my first SSD, and never on any of my spinners. Not enough data to be statistically significant, but enough to make me cautious.

Can second the findings with regard to OCZ Vertex 2 drives. Mine just died without any warning; all data lost after a year of light use. OCZ are completely useless in helping to fix it. It's like they know that their SSDs fail a lot and aren't at all surprised. I've gone onto an Intel 320 SSD based on the hardware.fr findings.

Thanks Andrew, that's an interesting article even for a layman operating a single SSD. ^^

So far my OCZ Vertex 2 is doing fine, but then failure is always only a probability. System drives shouldn't be used to store important data in my eyes anyway. If not having mechanical parts doesn't really lower the percentage of dying drives, that only means that backup is just as important (and as often forgotten) as it always was.
