
jcatcw writes "A Carnegie Mellon University study indicates that customers are replacing disk drives more frequently than vendor estimates of mean time to failure (MTTF) would suggest. The study examined large production systems, including high-performance computing sites and Internet services sites running SCSI, FC and SATA drives. The data sheets for the drives indicated MTTF between 1 and 1.5 million hours. That should mean annual failure rates of at most 0.88%, yet annual replacement rates were between 2% and 4%. The study also shows no evidence that Fibre Channel drives are any more reliable than SATA drives."
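For readers wondering where the 0.88% figure comes from: a datasheet MTTF converts to an implied annual failure rate (AFR) by dividing the hours in a year by the MTTF. A minimal sketch of that arithmetic (the approximation assumes MTTF is much longer than a year):

```python
# Convert a datasheet MTTF into the implied annual failure rate (AFR).
# Approximation: AFR ~= hours_per_year / MTTF, valid when MTTF >> 1 year.

HOURS_PER_YEAR = 8760

def afr_from_mttf(mttf_hours):
    """Approximate annual failure rate implied by a datasheet MTTF."""
    return HOURS_PER_YEAR / mttf_hours

print(f"{afr_from_mttf(1_000_000):.2%}")  # 1.0M-hour MTTF -> 0.88%
print(f"{afr_from_mttf(1_500_000):.2%}")  # 1.5M-hour MTTF -> 0.58%
```

Compare those sub-1% figures with the 2-4% replacement rates the study observed.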

...Carnegie Mellon researchers can't tell a mean from a median. Disk lifetime is inherently a long-tailed distribution in which the mean will be much higher than the median. Imagine a simple situation in which the failure rate is 50%/yr, but drives that last beyond a year last a very long time. Mean time to failure might be 1000 years. You simply can't compare the statistics the way they have without knowing a lot more about the distribution than I saw in the article. Perhaps I missed it while skimming.
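The mean-versus-median point is easy to see with made-up numbers. In this toy sketch (the lifetimes are invented purely for illustration), roughly half the drives die within a year while the survivors last centuries; the mean then says almost nothing about what a typical drive does:

```python
# Toy model of the long-tail argument above: many early deaths, a few
# very long-lived survivors. Mean and median lifetimes diverge wildly.
import statistics

# Hypothetical lifetimes in years for 11 drives (illustrative values only).
lifetimes = [0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 800, 900, 1000, 1100, 1200]

print("mean:  ", statistics.mean(lifetimes))    # ~455 years ("MTTF")
print("median:", statistics.median(lifetimes))  # 1.0 year (typical drive)
```

With a distribution like this, a vendor could honestly quote a huge MTTF while most customers see drives die within a year or two.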

"If they told me it was 100,000 hours, I'd still protect it the same way. If they told me if was 5 million hours I'd still protect it the same way. I have to assume every drive could fail."

Just common sense.

It's "common sense," but not as useful as one might hope. What MTTF tells you is, within some expected margin of error, how much failure you should plan on in a statistically significant farm. So, for example, I know of an installation that has thousands of disks used for everything from root disks on relatively drop-in-replaceable compute servers to storage arrays. On the budgetary side, that installation wants to know how much replacement cost to expect per annum. On the admin side, that installation wants to be prepared with an appropriate number of redundant systems, and wants to be able to assert a failure probability for key systems. That is, if you have a raid array with 5 disks and one spare, then you want to know the probability that three disks will fail on it in the, let's say, 6 hour worst-case window before you can replace any of them. That probability is non-zero, and must be accounted for in your computation of anticipated downtime, along with every other unlikely, but possible event that you can account for.

When a vendor tells you to expect a 0.88% failure rate, but it's really 2-4%, that's a HUGE shift in the impact to your organization.

When you just have one or a handful of disks in your server at home, that's a very different situation from a datacenter full of systems with all kinds of disk needs.

the observed time to disk replacement is always much lower than the datasheet MTTF

SATA is not necessarily less reliable than FC and SCSI disks

contrary to popular belief, hard drive replacement rates do not enter steady state after the first year of operation, and in fact steadily increase over time.

early onset of wear-out has a stronger impact on replacement than infant mortality.

they show that the common assumptions that time between failures follows an exponential distribution, and that failures are independent, are not correct.

It was an interesting paper (won the best paper award) at this year's FAST (File and Storage Technologies) conference. Here is a link [cmu.edu] to the paper, and the summary [usenix.org] from the conference.

Slashdot has a high rate of RAID, which is a bad thing. It has been a whole 9 days. Slashdot needs a story moderation system so dupe articles can get modded out of existence. Ditto for slashdot editors who do the duping! :) (I have long since disabled tagging since 99% of the tags were completely worthless: "yes", "no", "maybe", "fud", etc. If tagging is actually useful now, please let me know!)

TFA seems surprised by SATA drives lasting as long as Fibre... why on earth would your data interface have any consequences for the drive internals?

Fibre Channel drives, like SCSI drives, are assumed to be "enterprise" drives and therefore better built than "consumer" SATA and PATA drives. It's nothing inherent to the interface, but a consequence of the environment in which that interface is expected to be used. At least, that's the idea.

In other news, Carnegie Mellon researchers know more about statistics than you give them credit for; blame ComputerWorld for crappy coverage of what the paper says. If you read the paper or the abstract, the researchers actually claim the opposite of what you are suggesting, namely, that the "infant mortality effect" (bathtub curve) often claimed for hard drives isn't actually the case. See Figure 4 in the paper and Section 5 ("Statistical properties of disk failures"). The paper is online here:

They certainly charge enough more. SATA drives run about $0.50 per gig. Comparable Fibre Channel drives run about $3 per gig. A sensible person would expect the Fibre Channel drive to be as much as 6 times as reliable, but per the article there is no difference.

A good rule of thumb is 3 years. Most hard drives fail in 3 years. I don't know why, but I'm currently seeing a lot of bad 2004-branded drives and consider that right on schedule. Last year the '02-'03 drives were the ones failing left and right. I just pulled one this morning that's stamped March '04. It just started acting up a few days ago. Like clockwork.

I've noticed this personally. Now, anecdotal evidence doesn't count for a lot, and it may be a case that we are pushing our drives more. But back in the day of 40MB hard drives that cost a fortune, they used to last forever. The only drives I ever had fail on me in the old days were the Syquest removable HD cartridges, for obvious reasons. But even they didn't fail that often, considering the extra wear-and-tear of having a removable platter with separate heads in the drive.

But these days, with our high-capacity ATA drives, I see hard drives failing every month. Sure, the drives are cheap and huge, but they don't seem to make them like they used to. I guess it's just a consequence of pushing the storage and speed to such high levels, and cheap mass-production. Although the drives are cheap, if somebody doesn't back up their data, the costs are incalculable if the data is valuable.

Before computers were used in real engineering, we could get away with "k" sometimes meaning 1024 (like in memory addresses) and sometimes meaning 1000 (like in network speeds). Those days are past. Now that computers are part of real engineering work, even the slightest amount of ambiguity is not acceptable.

Differentiating between "k" (=1000) and "ki" (=1024) is a sign that the computer industry is finally maturing. It's called progress.

I just can't believe that the same vendors that would misrepresent the capacity of their disk by redefining a Gigabyte as 1,000,000,000 bytes instead of 1,073,741,824 bytes would misrepresent their MTBF too!

Not that this is actually relevant or anything, but there's been a long-standing schism between the computing community and the scientific community concerning the meaning of the SI prefixes Kilo, Mega, and Giga. Until computers showed up, Kilo, Mega, and Giga referred exclusively to multipliers of exactly 1,000, 1,000,000, and 1,000,000,000, respectively. Then, when computers showed up and people had to start speaking of large storage sizes, the computing guys overloaded the prefixes to mean powers of two which were "close enough." Thus, when one speaks of computer storage, Kilo, Mega, and Giga refer to 2**10, 2**20, and 2**30 bytes, respectively. Kilo, Mega, and Giga, when used in this way, are properly slang, but they've gained traction in the mainstream, causing confusion among members of differing disciplines.

As such, there has been a decree [nist.gov] to give the powers of two their own SI prefix names. The following have been established:

2**10: Kibi (abbreviated Ki)

2**20: Mebi (Mi)

2**30: Gibi (Gi)

These new prefixes are gaining traction in some circles. If you have a recent release of Linux handy, type /sbin/ifconfig and look at the RX and TX byte counts. It uses the new prefixes.
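The practical upshot of the decimal/binary split is the familiar "missing capacity" complaint: a drive marketed in decimal gigabytes shows up smaller when the OS counts in binary gibibytes. A quick sketch of the arithmetic:

```python
# Why a "500 GB" drive shows up as ~465 "GB" in the OS: the vendor counts
# decimal gigabytes (10**9 bytes), the OS counts binary gibibytes (2**30).

KI, MI, GI = 2**10, 2**20, 2**30   # kibi, mebi, gibi (NIST binary prefixes)
K,  M,  G  = 10**3, 10**6, 10**9   # SI kilo, mega, giga

drive_bytes = 500 * G              # marketed capacity
print(drive_bytes / GI)            # ~465.66 GiB as reported by the OS
print(GI / G)                      # the gap per "giga" step: ~1.0737
```

The discrepancy compounds with each prefix step: about 2.4% at kilo, 4.9% at mega, and 7.4% at giga.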

Where I work we have some large compute clusters where the nodes report memory errors. It's actually very common for a memory module to start throwing errors that eventually exceed a threshold for replacement.

...to those of you who haven't managed 24x7x365 servers very much. And little news to those of you who have a computer at all.

I expect most desktop drives to last 5 years max. MAX. No manufacturer has an edge. It's just the way it is. MTBF is fiction.

For an always-on server, I expect failures about every 3-4 years. For my clients who cared enough to pay for the very best, I replaced the drives in the 3rd year without waiting. No failures; it costs a bit more.

My experience is that Seagate and Fujitsu are my best server drives. IBM was also on the list, but I'm watching Hitachi. No decision.

Seagate: Early IDE drives and the 'stiction' problem. Remember banging drives to get them started?

Quantum 'Bigfoot' drives: popular in Compaq machines, the 5.25", .7"-thin piece of junk died often. Even Compaq admitted these were bad.

Seagate SCSI drives: Many different types had a bad habit of going off-line for no apparent reason. Your Novell server would log the 'device deactivated due to a non-media defect' error. Just restarting the bus controller would sometimes wake them up; sometimes it took repowering the drives. It would happen every few months. Usually when I was elsewhere...

And then there was Miniscribe.

But MTBF numbers are universally fiction. Imagine trying to sell the idea of a wave bearing lasting 16 years to an engineer with real-world experience. I figure MTBF numbers come out of the marketing department.

Except MTBF is just pulled out of their asses. Look at the development cycle of a hard drive, then look at the MTBF. I used to work for an engineering company, and have worked on test suites to determine MTBF. Sure, there are numbers involved, but it's probably 60% wishful thinking and 40% science.

Slightly off-topic, but if you haven't checked the Google paper on Self-Monitoring, Analysis and Reporting Technology (SMART) info provided by your drive to see if it is having errors, you probably should. The paper is available here: http://hardware.slashdot.org/hardware/07/02/18/0420247.shtml [slashdot.org]

The conclusions are roughly the following: a) if there are SMART errors, the disk will fail soon, b) if there are no SMART errors, the disk is still likely to fail. They saw no SMART errors on 36% of their failed disks.
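To put that 36% figure in perspective: even a perfect SMART-based alerting policy misses over a third of failures. A rough sketch of the fleet-level implication, where the 3% annual failure rate is a made-up illustrative value (only the 36% comes from the comment above):

```python
# What the Google numbers imply at fleet scale. Only the 36% figure comes
# from the paper (per the comment above); the base rate is illustrative.

p_fail = 0.03                 # hypothetical annual failure rate
p_no_smart_given_fail = 0.36  # failed disks that showed no SMART errors

# Fraction of the whole fleet that fails each year with zero warning:
missed = p_fail * p_no_smart_given_fail
print(f"{missed:.2%} of the fleet fails each year with no SMART warning")
# -> 1.08% of the fleet fails each year with no SMART warning
```

So SMART monitoring is worth doing (conclusion a), but it is no substitute for redundancy and backups (conclusion b).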

When I was trying the Vista RC, it told me that my drive was close to failing.... About the only feature that impressed me in Vista, sadly.

Be sad no more. SmartMonTools [sourceforge.net] will run in UNIX or Windows and notify you if it detects SMART errors. For the Windows installer, look for the phrase "Install the Windows package" on the smartmontools home page.

I think one of the key problems here isn't necessarily the statistical methods used, it is that the CMU team was comparing real-life drive performance to the "ideal" performance levels predicted by the drive manufacturers. Allow me to provide two examples of this "apples to oranges" comparison problem.

I have had two computers with power supply units that were "acting up." They ended up killing my hard drives on multiple occasions - Seagates, WDs, Maxtors, etc. It didn't matter what type of drive you put in these systems; the drive would die after anywhere from a week to two years. I later discovered that the power supplies were the problem, replaced them with brand new ones, and replaced the drives one last time. That was quite some time ago (years), and those drives, although small, still work, and have been transferred into newer computer systems since that time. The PSU was killing the drives; they weren't inherently bad, nor did they have a manufacturing defect. A friend of mine who lives in an apartment building constructed circa 1930 experienced similar problems with his drives. After just a few months, it seemed like his drives would spontaneously fail. When I tested his grounding plug, I found that it was carrying about 30V (a hot ground - how wonderful). Since he moved out of that building and replaced his computer's PSU, no drive failures.

The same type of thing is true in automobile mileage testing. Car manufacturers must subject their cars to tests based on rules and procedures dictated by state and federal government agencies. These tests are almost never real world - driving on hilly terrain, through winds, with the headlights and window wipers on, plus the AC for defrost. They're based on a certain protocol developed in a laboratory to level the playing field and ensure that the ratings, for the most part, are similar. It simply means when you buy a new car, you can expect that under ideal conditions and at the beginning of the vehicle's life, it should BE ABLE to get the gas mileage listed on the window (based on an average sampling of the performance of many vehicles).

My point is that there really isn't a decent way to go about ensuring that an estimated statistic is valid for individual situations. By modifying the environmental conditions, the "rules of the game" change. A data-center with exceptional environmental control and voltage regulation systems, and top-quality server components (PSU's, voltage regulators, etc.) should expect to experience fewer drive failures per year than the drives found in an old chicken-shack data center set up in some hillbilly's back yard out in the middle of nowhere where quality is the last thing on the IT team's mind. It's impractical to expect that EVERY data center will be ideal - and since it's very very difficult to have better than the "ideal" testing conditions used in the MTTF tests - the real-life performance can only move towards more frequent and early failures. Using the car example above, since almost nobody is going to be using their vehicle in conditions BETTER than the ideal dictated by the protocols set forth by the government, and almost EVERYONE will be using their vehicles under worse conditions, the population average and median have nowhere to go but down. That doesn't mean the number is wrong, it just means that it's what the vehicle is capable of - but almost never demonstrates in terms of its performance - since ideal conditions in the real world are SO rare.