For me the MTBF doesn't mean a thing, whether platter or SSD. For SSD I do look around to see if there are widespread issues, and if they have been fixed with a newer firmware release. Otherwise I look at the performance/$ ratio and nothing more.

If you assume a constant failure rate and continuous usage, you can convert MTBF to average annual failure rate (AFR) with this formula:

AFR = 1 - e^( -8760 hours / MTBF in hours )

So, for example, MTBF of one million hours corresponds to an average annual failure rate of 0.87% , meaning that if you had one thousand devices running for a year, you would average 8.7 failures in that year (assuming you immediately repaired or replaced any failures to keep the total operational count at 1000).
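The conversion above is easy to check yourself. Here's a minimal sketch in Python (the function name and 8760 hours/year default are just illustrative):

```python
import math

def afr_from_mtbf(mtbf_hours, hours_per_year=8760):
    """Annual failure rate implied by an MTBF figure,
    assuming a constant failure rate and continuous (24/7) usage."""
    return 1 - math.exp(-hours_per_year / mtbf_hours)

afr = afr_from_mtbf(1_000_000)
print(f"AFR: {afr:.2%}")                               # ~0.87%
print(f"Failures per 1000 drives/year: {1000 * afr:.1f}")  # ~8.7
```

Note that for MTBF values much larger than 8760 hours, the exponential is nearly linear, so AFR ≈ 8760 / MTBF is a good quick approximation.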

However, this type of analysis is so simplistic as to be almost useless for typical consumers to compare SSDs (or HDDs). One reason is that the actual failure rate probably follows a bathtub curve, with higher failure rates at the beginning of life, and higher again at end of life. So this analysis is only valid during the middle of life. But many consumers are probably interested in failures during the beginning of life period, not just the flat part of the bathtub curve.

Worse is that MTBF numbers are determined using a highly controlled environment. For example, a number of SSDs may be exposed to a certain workload, connected to certain hardware, at a certain temperature, and then run until a specified number of hours (or failures) have occurred, and that data is then used to compute MTBF. But this obviously does not include failures that may be the result of different workloads or use with different hardware.

Even worse is that different manufacturers use different techniques to determine MTBF, and the details are rarely specified.

So comparing MTBF numbers given by SSD manufacturers is not likely to yield any useful information.

So if you have decided that MTBF is a useless piece of data, why did you create a thread to ask us what we think when directed to a definition of it?

In practice, as observed by many very large operators who source a variety of drives, the failure rate for any given HDD on the market with a claimed 1.4 million hour MTBF varies between 5 and 10% per year (yes, bathtub curve, with a minimum of about 5% in good years).
The exceptions are individual models with a design defect, for which the failure rate is much, much higher. (Such models have been released by every company at one point or another.)
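The gap between the marketing number and field experience is easy to quantify with the same formula from earlier in the thread. A quick sketch (the 5-10% observed range is the figure quoted above, not something I've measured):

```python
import math

def afr_from_mtbf(mtbf_hours, hours_per_year=8760):
    """AFR implied by a claimed MTBF, constant-failure-rate assumption."""
    return 1 - math.exp(-hours_per_year / mtbf_hours)

claimed_afr = afr_from_mtbf(1_400_000)   # the claimed 1.4M hour MTBF
observed_low, observed_high = 0.05, 0.10  # field rates quoted above

print(f"AFR implied by the claim: {claimed_afr:.2%}")  # ~0.62%
print(f"Observed rates are {observed_low / claimed_afr:.0f}x "
      f"to {observed_high / claimed_afr:.0f}x the claim")
```

So even in a good year, real-world failure rates run roughly an order of magnitude above what the claimed MTBF implies.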

Furthermore, for some reason such defective models come with the exact same claimed 1.4 million hour MTBF... [sarcasm]Why, it's almost as if marketers slap the figure on without actually running quality tests and calculating it![/sarcasm]