Posted by Zonk on Saturday February 17, 2007 @11:18PM from the fun-saturday-night-reading dept.

oski4410 writes "The Google engineers just published a paper on Failure Trends in a Large Disk Drive Population. Based on a study of 100,000 disk drives over 5 years they find some interesting stuff. To quote from the abstract: 'Our analysis identifies several parameters from the drive's self monitoring facility (SMART) that correlate highly with failures. Despite this high correlation, we conclude that models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures. Surprisingly, we found that temperature and activity levels were much less correlated with drive failures than previously reported.'"

The research results are VERY poorly communicated, as research results often are.

This seems to be the most relevant sentence: "What stands out are the 3 and 4-year old drives, where the trend for higher failures with higher temperature is much more constant and also more pronounced." (Page 5, Section 3.4, 4th paragraph)

Often poor communication in research papers is intended to hide the fact that the results are not very useful. The above sentence can be translated to: "If you run hard drives hot, after three or four years they will fail more often."

Here's a quote from the Google paper: "Power-on hours -- Although we do not dispute that power-on hours might have an effect on drive lifetime, it happens that in our deployment the age of the drive is an excellent approximation for that parameter, given that our drives remain powered on for most of their life time." (Page 10, 4th paragraph)

Translation: The number of hours the drives are powered is the same as the age of the drives, since the drives are always powered.

You really didn't read the article, did you? On page 3 (Section 2.2 Deployment Details), the authors state: "More than one hundred thousand disk drives were used for all the results presented here. The disks are a combination of serial and parallel ATA consumer-grade hard disk drives, ranging in speed from 5400 to 7200 rpm, and in size from 80 to 400 GB. All units were put into production in or after 2001. [...] The data used for this study were collected between December 2005 and August 2006."

What are you waiting for Google to tell you? Are you really accusing them of being evil because they did a study, described their methodology, detailed their results, presented their analyses, and published it all for anyone who is interested?

You describe their conclusions as:

Useless

But there is no contradiction at all if you are smart enough to understand. They are telling you that if SMART identifies a problem with a drive then it is very likely that drive will fail within 60 days. But in a sample of 100,000 drives, many drives will also fail that have not returned errors on SMART scans. Thus SMART is a reliable indicator of impending failure but is not a silver bullet that can recognize and predict all failures before they happen.

Next time you have access to 100,000 hard drives, can analyze patterns of failure among them, can use those failures as a benchmark against which to measure analysis tools, and can come up with better recommendations for predicting failure than this study, then by all means let us know. But if you're looking for Microsoft or Western Digital or Seagate or Yahoo to perform and publish this kind of study for free, I think you may be waiting a good long while.

To me it's useful - if I get a SMART warning, then I'm definitely backing up my drive and will replace it before it croaks.

Sensitivity and specificity always present a balancing act in testing, and they are usually in a push/pull relationship. If you make a test too sensitive, you get too many false positives and wind up over-treating (i.e. the test says the drive might fail, so you replace it even though it's not going to - a false alarm).

If you make the test too specific, then you usually wind up decreasing its sensitivity, or ability to detect something. Now you get false negatives, so when the test does fire you can be sure it's accurate, but it doesn't always detect the problem.

What you want to know is the positive predictive value (PPV), which is determined by the formula PPV = TP/(TP+FP), where TP = true positives and FP = false positives. Also useful is the negative predictive value (NPV): NPV = TN/(TN+FN), where TN = true negatives and FN = false negatives.

What information these give is as follows: if a test is positive (i.e. the drive temperature is >80 C), then it accurately predicts that the drive will fail; if the test is negative (drive temp 40 C), then it accurately predicts that the drive is OK.
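To make that concrete, here's a quick Python sketch. The counts below are made up purely for illustration (they are not from the Google paper); the point is just how PPV and NPV fall out of the four cells:

    # Hypothetical confusion-matrix counts for a SMART-style test on 10,000 drives.
    # These numbers are invented for illustration only.
    true_pos  = 450    # test flagged the drive and it really did fail
    false_pos = 50     # test flagged the drive but it kept working
    true_neg  = 9000   # test stayed quiet and the drive kept working
    false_neg = 500    # test stayed quiet but the drive failed anyway

    ppv = true_pos / (true_pos + false_pos)   # P(failure | positive test)
    npv = true_neg / (true_neg + false_neg)   # P(healthy | negative test)

    print(f"PPV = {ppv:.2f}")   # 0.90 -> a positive test is a strong warning
    print(f"NPV = {npv:.2f}")   # 0.95 -> a negative test is reassuring, not proof

With numbers like these, a warning is well worth acting on, but those 500 unflagged failures are exactly what the paper is complaining about.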

To me it's useful - if I get a SMART warning, then I'm definitely backing up my drive and will replace it before it croaks....
What information these give is as follows: if a test is positive (i.e. the drive temperature is >80 C), then it accurately predicts that the drive will fail; if the test is negative (drive temp 40 C), then it accurately predicts that the drive is OK.

But according to the paper none of the SMART parameters was very useful in this regard. Over 50% of drive failures were not predicted by SMART errors, so the "negative test" can't give much confidence that the drive is OK. Conversely, while some types of SMART error (e.g. scan errors) indicated a much higher probability of impending failure, they still weren't all that indicative: 70% of drives that reported a scan error were still functioning normally after 8 months. So the "positive test" isn't all that conclusive either.

... is not only a breakdown by age, but by other parameters, such as size, model, series, etc. I am sure that the IBM DeathStars would have greatly biased the statistics, for example, and it would be useful to have breakouts not only for such well known disasters, but also for the sample excluding the Deathstars, etc.

It is also interesting to note the magnificent jump in failure rates once the drives get outside the three-year warranty period. No coincidence there.

They are hardly trade secrets. Google isn't in the hardware business. There are only so many patterns of disk usage one can have, and knowing what pattern Google has would hardly be useful for figuring out how they did anything that they do. At least, not at any level of detail useful enough to copy.

The amount of positive press they get from these types of releases easily justifies the effort of polishing internal reports up to a publication standard. By releasing these types of papers, others may change their buying habits, which in turn will change the products sold. Google may believe that these types of papers would cause shame, not for individual manufacturers but for the industry as a whole, and thus cause better products to be produced.

Personally, I think this is more geared as a "shot" at drive makers and big enterprise users, not so much the general desktop user crowd. After all, how many companies even HAVE 100,000 hard drives to test? Google is unique in their use of LOTS of hardware... in generally better controlled environments than most have. Google has issued other papers and industry "suggestions" about the performance of motherboards, power supplies, and other OTS hardware... as big of a CUSTOMER as Google is, they can push the industry harder than the rest of us can.

There are several SMART signals which are highly correlated with drive errors, but the authors note that 56% of the failed drives had no occurrences of these highly correlated errors. Even considering all SMART signals, 36% of failed drives still had no SMART signals reported.

So, if you have errors in those highly correlated categories your drives are probably going to fail, but if you do not have errors in these categories your drives can still fail.

So, if you have errors in those highly correlated categories your drives are probably going to fail, but if you do not have errors in these categories your drives can still fail.

It isn't even that good. Many of the failure flags indicate between 70% and 90% survivability to 8 months. This is much worse than the ~2%/year baseline failure rate, but not as strong a predictor as you might like. It would be nice to see data on this out to 2 or 3 years, so you could calculate the integrated chance of failure.
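Here's a rough back-of-the-envelope using only numbers quoted in this thread (the ~70% eight-month survival after a flag and the ~2%/year baseline), and assuming the baseline accrues roughly linearly over eight months; crude, but enough for a comparison:

    # Back-of-the-envelope relative risk from the figures quoted in this thread.
    baseline_afr = 0.02          # ~2% of drives fail per year with no warning signs
    months = 8
    survival_after_flag = 0.70   # worst case quoted: 70% still alive at 8 months

    p_fail_flagged  = 1 - survival_after_flag       # ~30% fail within 8 months
    p_fail_baseline = baseline_afr * months / 12    # ~1.3% fail within 8 months

    print(f"flagged:  {p_fail_flagged:.1%} fail within {months} months")
    print(f"baseline: {p_fail_baseline:.1%} fail within {months} months")
    print(f"relative risk: ~{p_fail_flagged / p_fail_baseline:.0f}x")

So a flagged drive is on the order of 20x more likely to die over that window, which matters for fleet maintenance even if "only" 30% of the flagged drives actually fail.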

In summary: Your statistical analysis on a sample size of one showed a 100% failure rate, so Samsung are crap. You found some other people also had failed Samsung drives, so Samsung are crap.

Search the net and you will find people ranting about Seagate drive failures, Western Digital drive failures, IBM drive failures, Maxtor drive failures, and failures of drives made by companies neither of us has even heard of. You won't find many, if any, reports of recent failures with 8" floppy drives though, so I suggest you use one of those. They must be more reliable, right?

There are apparently several SMART parameters that are correlated to eventual disk failure. If a disk starts throwing SMART errors in these categories then your best bet is to replace the disk ASAP. While it may be true that most disks fail without warning that doesn't mean it isn't a good idea to look for early warning signs of failure.

So if the article summary is correct, does it even matter whether the consumer desktop PC has SMART enabled or not?

Well, I was a little disappointed by the article. They looked at a lot of different SMART categories and they looked at the different ages of the drives, but they didn't delve into the different types of failures. I get about 1 "I think my drive crashed and I was hoping you could recover it" call per month and I see a variety of failure types. Probably the most common ones I see now are ones where something has gone wrong with the control circuits/mechanism and not the media itself. For example, something can go wrong with the motor that spins the platters, or you can seize the bearings for the head traversal, etc. I've even seen some where a chip on the controller board literally popped when it got too hot. These aren't going to be detected by SMART... I don't know what would predict failures like that.

The article states that, in about half of the failures, there were no SMART warnings at all. Okay, but what was the breakdown in the kinds of failures of these unpredicted ones? If they were all spindle motor and head traversal failures, then you can't blame SMART for that. If it turns out that SMART gave warnings for 95% of all failures that were media-degradation related (like bad sectors, etc... where the drive still talks to your machine properly, and just can't get the data you want), then I'd say SMART is pretty darn useful.

I noticed this too. If a Google-sanctioned report had charts of which brands were more reliable, this would do serious damage to the brands that didn't perform so well. No wonder they sidestepped the whole issue!

It's no wonder that Google sidestepped the issue, but, if you assume they purchase primarily from the manufacturers that are more reliable, perhaps those manufacturers will begin to gloat and publish numbers about their Google contracts, if this study gains traction.

I'm confident that Google is fairly drive agnostic: you just can't run distributed networks that large and stay locked into a single vendor. And given that even reliable vendors have disasters like the IBM Deskstar drives some years ago, and given the remarkable growth of drive sizes over time, there's just not much point for them in buying the extremely stable but vastly more expensive hardware. They've doubtless learned that hardware flexibility provides valuable software flexibility.

One of the largest retailers in Russia (and maybe in Europe - more than 300 in-person order terminals at its ex-factory building, busy 24/7), "Pro Sunrise", released information on failure rates of major components (CPUs, video cards, motherboards, IDE/SATA drives, etc.) in the PCs they sold for Q1-Q2 of 2005.

When a friend's car broke down, she asked the breakdown man who came out which cars were the most reliable. He said he wasn't allowed to comment, but that "he carried no Honda parts". I guess the same thing applies here - Google won't say; they'd get sued.

On the other hand, hard drives change so much that this year's model will be a totally different design and mechanism from next year's, so blaming (say) IBM for its crappy Deskstar range is no reason to blame their (OK, Hitachi's) current line.

If you do want to know more about which drives are best, check out storagereview [storagereview.com] and enter the details of your drives into their reliability database.

It appears that sentence was right after the part I read about how some makers had better results than others. So of course I scanned the whole document looking for said data immediately after reading the first part, but did not return to that exact point, thinking I had already read it...

There are several good reasons to not release the brand names. First, while the sample size is huge, the sample size for a particular model of a particular brand might not be. If they only happened to have 10 of one particular model, and one failed within a month, then 10% fail within a month, but it could just be a fluke. Second, liability. This wasn't a controlled test, it was done live within the Google servers (presumably). Whoever is on the bottom of the list could very well sue Google for libel. Without merit? Probably, but they might eke a few million in a settlement out of them. Google can't appear to be doing evil after all.

They didn't include any data at all about brands. They should have done a brand analysis (without naming the brands) and also an RPM analysis.

From the article..

3.2 Manufacturers, Models, and Vintages

Failure rates are known to be highly correlated with drive models, manufacturers and vintages [18]. Our results do not contradict this fact. For example, Figure 2 changes significantly when we normalize failure rates per each drive model. Most age-related results are impacted by drive vintages. However, in this paper, we do not show a breakdown of drives per manufacturer, model, or vintage due to the proprietary nature of these data.

No. They explicitly said they would not disclose that... which is a shame, because that is probably the only interesting bit of information. The question that really needs to be studied is what distinguishes good drives from bad. This would probably involve disassembling drives of various 'vintages, models, manufacturers' and trying to pin down the relevant details. That way when new hard drives get released, reviewers can pull them apart and judge them on something other than read/write performance, heat, and acoustics.

That would have been the perfect way to divulge this data without causing direct harm to any maker - I would really have liked to see if there was a large variance between brands, which might even lead me to purchase brand Y more, even if it's not at the top of the reliability chart - just so long as it was cheaper.

That way when new hard-drives get released, reviewers can pull them apart and judge them on something other than read/write performance, heat, and acoustics...

You forgot one metric of comparison: the warranty. As far as I'm concerned, this number alone is the most important in determining the reliability of the hard drive. If the manufacturer is willing to say "This drive will last for X years or we replace it free," it speaks volumes about their confidence behind their product. When buying hard drives, I actively seek out drives with at least a 3 (preferably 5) year warranty and explicitly avoid those with only a 1 year warranty.

If the manufacturer is willing to say "This drive will last for X years or we replace it free," it speaks volumes about their confidence behind their product.

Or maybe the manufacturer just realized that 5 years down the road, a replacement for your then 5-year-old HD will cost them peanuts. According to the graph at http://en.wikipedia.org/wiki/Hard_drives#Capacity [wikipedia.org], HD capacity seems to be increasing by roughly ten times every five years.

You forgot one metric of comparison: the warranty. As far as I'm concerned, this number alone is the most important in determining the reliability of the hard drive. If the manufacturer is willing to say "This drive will last for X years or we replace it free," it speaks volumes about their confidence behind their product. When buying hard drives, I actively seek out drives with at least a 3 (preferably 5) year warranty (some Hitachis and Seagates IIRC) and explicitly avoid those with only a 1 year warranty.

More likely: "We buy millions of dollars worth of drives each year, and our buying decisions are driven in part by the reliability data that we collect. If we told everyone what kind of drives work best, more people would buy those drives, driving up the price that we pay."

We're not so bloody stupid as to believe that our competitors are standing in the aisle of Circuit City scratching their heads over whether to buy a Seagate or WD drive.

We know that our competitors all have their own metrics and their own relationships with manufacturers and frankly, we don't care. We know our competitors also measure these things, and we're not telling them anything they don't already know.

We aren't particularly worried about saying that some drives fail, because everyone who cares already knows that some drives fail. Everyone whose job it is to know which drives fail first already knows that as well.

But we're not going to tell you which brand fails at a higher rate than normal because we don't need a lawsuit that would cost us a lot of money but in the end would only confirm what the people who need to know these things already know.

We will, on the other hand, describe the tests we ran, our methodology, our results, and our analyses. We do this just for kicks and we hope you can learn something from the results.

RTFA. It says: "However, in this paper, we do not show a breakdown of drives per manufacturer, model, or vintage due to the proprietary nature of these data."

It is clearly not proprietary to the drive manufacturers, because it came from Google's study. This means they regard it as proprietary to themselves.

How do you know that their competitors have done equally good studies? Given the large population (100,000) and the fact that people are surprised even by some of the published conclusions, it is very likely that this study contains information their competitors don't already have.

It's not that surprising. The only mildly interesting thing I see is that high load seems to *not* increase failure rates much, other than in the first few months. They hypothesize that this may be because some drives can't handle high load -- and die early -- whereas those drives that survive the first ~6 months of high load are the more robust ones, and those hold up well.

So, you're saying there is no better choice, and can be no better choice, than simply selecting a disc randomly?
It's possible it is like you say -- that each year the stats change enough that no consistent trends are recognizable. It is, however, also possible (I'd say likely, even) that different manufacturers differ statistically over time.

You need backups anyway, that's not the point. But it makes a difference for your maintenance costs whether you experience 1% of your disc drives dying in an average year or several times that.

More likely: "We buy millions of dollars worth of drives each year, and our buying decisions are driven in part by the reliability data that we collect. If we told everyone what kind of drives work best, more people would buy those drives, driving up the price that we pay."

You tard. Demand and price in a free market are inversely proportional. Go back to high school economics! Not only that, but the good drive company mentioned would probably get more press and money, leading to more R&D and even better drives.

I wish Google released the data they found because it would force the crappy drive companies to improve their products.

One way to spot someone who doesn't really understand economics is how quickly they make statements like that. You would need to know a lot more about the thing in question before being able to make a generalization like that. Sometimes they're directly proportional, sometimes they're inversely proportional, and sometimes they're neither. It depends on a lot of other things which relationship holds true, if any.

This is awesome, but the conclusion of such an interesting study leaves a lot to be desired. FTA...

"In this study we report on the failure characteristics of consumer-grade disk drives. To our knowledge, the study is unprecedented in that it uses a much larger population size than has been previously reported and presents a comprehensive analysis of the correlation between failures and several parameters that are believed to affect disk lifetime. Such analysis is made possible by a new highly parallel health data collection and analysis infrastructure, and by the sheer size of our computing deployment.

One of our key findings has been the lack of a consistent pattern of higher failure rates for higher temperature drives or for those drives at higher utilization levels. Such correlations have been repeatedly highlighted by previous studies, but we are unable to confirm them by observing our population. Although our data do not allow us to conclude that there is no such correlation, it provides strong evidence to suggest that other effects may be more prominent in affecting disk drive reliability in the context of a professionally managed data center deployment.

Our results confirm the findings of previous smaller population studies that suggest that some of the SMART parameters are well-correlated with higher failure probabilities. We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. This result suggests that SMART models are more useful in predicting trends for large aggregate populations than for individual components. It also suggests that powerful predictive models need to make use of signals beyond those provided by SMART."

... at the same conference, Bianca Schroeder presented a paper [cmu.edu] on disk reliability that developed sophisticated statistical models for disk failures, building on earlier work by Qin Xin [ucsc.edu] and a dozen papers by John Elerath... [google.com]

C'mon, slashdot. There were about twenty other papers presented at FAST this year.
Let's not focus only on the one with Google authors...

While at a glance it may seem like this is simply "the latest thing Google did", and... let's be honest, given the editor in question... this was most likely the reason it made the front page. But while Bianca Schroeder's report, for instance, uses statistics from various unnamed sources and for various unnamed uses, the Google report is interesting because we know exactly where it's coming from and what it's being used for.

Of course, a truly insightful story would have taken this opportunity to compare Google's findings with the others and report on that.

Their statistics on temperature seem very unusual. I'm surprised they didn't explore this more. For example, is the high failure rate associated with low temperatures because the drives were more likely to be inactive due to failure?

My guess is this graph on temperature distribution is more or less a graph of temperature sensor accuracy. I can't imagine that drives at 50C had the lowest failure rate.

While this would require a more laboratory-like environment, a dozen drives of each type and manufacture could have been sampled at known temperatures, and a data curve could have been established to calibrate the temperature sensors.

There are lots of studies out there where drives were intentionally heated, and higher degrees of failure were indeed reported (this is mentioned in the google report too). So the correlation is probably still valid, just not well-proven.

If hard drives are anything like car engines [especially those made with iron and aluminum], the designers have taken the standard operating temperature into account in the design. The parts of varying composition fit together best at the right temperature, and temperatures higher or lower result in damage or accelerated wear.

This is why, if you want your engine to last, you should let your car warm up before driving it hard.

To my mind the most significant piece of info:
"The gure shows that fail-
ures do not increase when the average temperature in-
creases. In fact, there is a clear trend showing that lower
temperatures are associated with higher failure rates.
Only at very high temperatures is there a slight reversal
of this trend."

But did the lower temperature actually cause the failures? Such a counterintuitive conclusion seems like it'd be worth some further examination...I can turn off some fans in my cases and get the drives back up into the 40-45C range pretty quickly if need be!

Yes, the low temperature finding is most interesting. I have a hypothesis as to what might be going on. I suspect that absolute temperatures, within certain limits, are not important to drive reliability, but that temperature variation is. Drives that, because of their location and pattern of use, tend to fluctuate in temperature between, say, 20 and 35 degrees centigrade are being stressed more than those at a steady 40 degrees.

"...we do not show a breakdown of drives per manufacturer, model, or vintage due to the proprietary nature of these data."

Litigation avoidance may be a consideration here but why not take Google at their word? Google is a search company that buys lots of hard drives. Based on their own internal research, they have developed information about which hard disk models and/or manufacturers are shite.

Yahoo is also a search company that buys lots of hard drives. Why should Google give that hard drive reliability information to you, me and Yahoo for free? Let Yahoo/Excite/MSN and the competitors figure it out for themselves.

Yeah, sure I'd like to have access to Google's data the next time I'm in the market for a hard drive but I won't hold a grudge against them if they don't do my consumer research for me. On the other hand, whereinafuck is the data from Tom's Hardware Guide, Anandtech, Consumer Reports and all the other reviewer and consumer sites? If someone doesn't have a handy link to their results, I'll see if I can google something up:

This is completely anecdotal and unscientific... Since building out two servers a couple of years ago, each with approximately 800GB of drive space, I've had to replace drives at an average of one every 8 weeks. In my lab there are about twenty drives across 8 machines, so that number is not too bad. Or so I thought. After replacing all my power supplies, my drive failures have gone way down. The only drive I've lost recently is one in an older machine with an ancient 300W power supply.

About a year and a half ago, a presentation by Google concerning a massive online storage service called GDrive was leaked. It was pretty much confirmed that it is on some level operational. The study might have something to do with it, maybe even be some kind of clever PR.
Just my 2c.

Their conclusion (and a glance at their results) indicates that drives fail because of product defects. However, home-use parameters such as brown power (low voltage on the line) are probably not accounted for in their server environment. It's interesting, and I tend to trust their results, but these conclusions may not be relevant to single-drive situations. That is, if two customers purchase one drive each, and neither drive is defective, then this study doesn't explain why one drive would fail before the other.

The paper claims "more than 100 thousand drives". But the nice thing is that you can derive the actual number from the error bars, for example those in figure 4. The data should be governed by Poisson statistics, which means that the standard deviation in the counts is equal to the square root of the count. However, their error bars seem to be about a factor 2 larger than the standard deviation, because normally around 68% of the data points should lie within one standard deviation from the "smooth curve". Let's assume the error bars are 95% confidence intervals, i.e. 2 standard deviations.

Look at the data for 20 to 21 C. It tells you that it represents a fraction 0.0135 of their total drive population, with an average failure rate of 7 +- 0.5 %. Following the reasoning above, this 7% should represent 784+-28 drives. Since these represent 7% of 1.35% of the total number of drives, we can derive that the total number of drives is 784/0.07/0.0135 = 830,000 drives. Trying the same thing for 30 to 31 C gives 826,000 drives, which seems fairly consistent.
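For anyone who wants to check the arithmetic, here's the same estimate spelled out in Python. The inputs are the figures read off Figure 4 as described above, and the assumption that the error bars are 2-sigma (95%) intervals is mine, not the paper's:

    # Reconstructing the total drive count from one temperature bin of Figure 4.
    bin_fraction     = 0.0135   # fraction of all drives in the 20 to 21 C bin
    failure_rate     = 0.07     # average failure rate in that bin
    error_half_width = 0.005    # +/- 0.5% error bar, assumed to be 2 sigma

    # Poisson statistics: sigma(failed count) = sqrt(failed count), so
    # 2*sqrt(Nf)/N_bin = error_half_width, with Nf = failure_rate * N_bin.
    n_failed = (2 * failure_rate / error_half_width) ** 2   # ~784 failed drives
    n_bin    = n_failed / failure_rate                      # ~11,200 drives in the bin
    n_total  = n_bin / bin_fraction                         # ~830,000 drives overall

    print(f"failed drives in bin: {n_failed:.0f}")
    print(f"drives in bin:        {n_bin:.0f}")
    print(f"total drive estimate: {n_total:,.0f}")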

So can we assume that Google has deployed 830,000 hard disk drives since 2001? How many servers do they have now?

So can we assume that Google has deployed 830,000 hard disk drives since 2001? How many servers do they have now?

Do you really think that they don't store every cookie and search pattern from everyone who uses their search engine? That they don't cross-reference all this data, alter their rankings, follow your interests, and use those to make money and target you with ads?

There is a ton of money in this information, and they have enough stored data and the facility to mine it, filter it, and sort it down to the location level for various purposes.

An interesting document, and I found the data on temperatures particularly interesting.

I have been previously led to believe that it's not so much the average temperature of a hard drive that causes failure, but temperature fluctuations. This makes sense, since repeated expansion and contraction of the disk platters is likely to cause warpage before too long. This, I guess, is where glass platters like the ones IBM toyed with would come in useful. In the meantime I guess we still need our HVAC units to keep a constant temperature, just not as low anymore.

This also has implications for data centers that spend a considerable amount of energy pumping heat out of the server room. If we can raise the industry-accepted temperature ceiling from 22C to, say, 30C, then a lot of energy can be saved over time. Perhaps not quite enough to dip below 1% of US-wide power use, but every bit helps.

I think you are partly right in this assumption, but for the wrong reasons. Some failure modes are a function of temperature and other failure modes are a function of temperature variation. A long time ago, platter expansion and contraction was a major cause of problems, when drives used stepper motor positioning; since they switched to servo positioning, the drive automatically tracks the expansion and contraction of the platters, and that is pretty much a non-issue as long as the coating on the platters holds up.

You take the Google paper and the twenty others on disk failure, take the third page of each, sort them by their papers' Google rankings and take the middle letter of every 42nd word, whilst standing in the middle of a pentagram under the second full moon of the month.

From my experience, Western Digitals are (relatively) reliable. Unfortunately, they do not have the same power connector orientation as any other consumer drive on the planet, so if you want to use IDE RAID you have to get the type that either (1) fits any consumer IDE drive or (2) fits a Western Digital drive. (grr)

Had some good experiences with Maxtor. A couple of years ago (OK - maybe 6 or 8) we had batches of super reliable Maxtors - 10GB.

I had a bad run with Western Digital drives a while back and switched to Maxtors, which I found to be very reliable when they were first putting out 250GB drives. Had a bad experience with a Seagate dropping dead within the first week after purchase; fortunately I got most of my data off of it. Seagate also does NOT offer advance drive replacement in Canada, which means I'll never buy another of their products until this policy changes.

Had good luck with more recent Western Digital drives. Put 5 x 500GB in

I had a disk reporting a SMART failure once. The result was that the disk was red in the list in Disk Utility, but there were no other warnings. So you might want to check Disk Utility once in a while.
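If you'd rather not rely on remembering to open Disk Utility, here's a small sketch of automating the check with smartctl from smartmontools (assuming it's installed); the device path is only an example and will differ per system:

    # Poll a drive's overall SMART health via smartctl (from smartmontools).
    # The device path is an example; adjust for your machine, and note that
    # smartctl usually needs root privileges.
    import subprocess

    def smart_health(device="/dev/disk0"):
        result = subprocess.run(
            ["smartctl", "-H", device],
            capture_output=True, text=True
        )
        return result.stdout

    if __name__ == "__main__":
        print(smart_health())

Stick something like that in cron and you'll hear about a failing health check before the drive actually croaks.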