What lifetime can be expected of the typical hard disk? Or are there big differences between different types? And does it make a difference if it is used heavily instead of never being connected to system (for example serving as a backup medium)?


5 Answers

The correct answer to your question of "What lifetime can be expected of the typical hard disk?" is "Not long enough for you to not have a backup of your data from day 1."

Seriously, techies since time immemorial have felt the sudden urge to run out and buy a replacement hard disk within 3 years. There was a really good Google white paper on the lifetime of consumer-level SATA drives, and it was scary reading, to say the least.

Are there big differences between types?

We have had SCSI, SAS, IDE, SATA, etc. We also now have Enterprise models, 24/7-rated drives, and so on. Usually, enterprise drives (SCSI, SAS, Enterprise models) should have a longer lifespan; however, there are still bad eggs that slip through the gates and hurtle towards the abyss of failure.

Does it make a difference if it is used heavily?

A hard drive that is not often used should, in theory, last longer than a constantly used drive - however, don't take that as gospel truth.

So what are you trying to say here, you wishy-washy guy?

What I am trying to say is that when it comes to data and data storage, it is never too extravagant to assume your drive will fail tomorrow - and to plan accordingly.

+1 for paranoia. Always assume a drive could fail in the next few minutes, because it could. Have a good backup regime for data that you care about, and if high availability is a concern some sort of RAID arrangement with multiple drives to give your data resilience in the face of certain physical faults.
– David Spillett, Sep 7 '09 at 10:37

That google white paper was a great read, as I recall. Not scary, really. I expected worse :] Was unfortunate enough to experience that "infant mortality" once. Have a proper backup plan since that day :]
– Kirill Strizhak, Sep 7 '09 at 11:42

SpinRite always helps if run every few months to keep an eye on how badly your drive is doing. I don't do it personally, but then I started backing up my data only a couple of weeks ago ^-^
– RCIX, Sep 7 '09 at 13:25

To add something to this answer (even though the question is closed): you should watch how many times your hard drive powers on and off. Power cycling is one huge point of failure.
– Shiki, Jan 10 '12 at 12:21

What we have is only statistical evidence over a relatively short time period (3 to 5 years at most). We can't necessarily infer the life expectancy of current drives from old ones, or of one particular drive from another. Some anecdotes:

I have some 20-year-old hard drives (40 to 400 MB) that still work perfectly today.

one of my customers has a RAID array of four 320 MB drives that has been running 24/7 since 1993 without any failure so far.

on the other hand, 80% of 1996-vintage Micropolis 9 GB drives failed in the first year.

However:

drive technology has changed very significantly in the past 15 years. I wouldn't bet that current drives come near older (and simpler) drives from a durability standpoint, though they may fare better on average.

on a large sample, current drive failure rates are about 0.6 to 1% per year for the 5 years that drive makers are interested in. After those five years, there is very little actual data.
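To put those percentages in perspective, here is a minimal sketch of the arithmetic: assuming a constant annual failure rate in the quoted 0.6 to 1% range (an illustrative simplification, not manufacturer data), the odds of a single drive surviving the 5-year window work out as follows.

```python
# Cumulative survival odds implied by a constant annual failure rate (AFR).
# The 0.6% and 1% figures come from the large-sample range quoted above.

def survival_probability(annual_failure_rate, years):
    """Probability a single drive survives `years`, assuming a constant AFR."""
    return (1 - annual_failure_rate) ** years

for afr in (0.006, 0.01):
    p = survival_probability(afr, 5)
    print(f"AFR {afr:.1%}: {p:.1%} chance of surviving 5 years")
# AFR 0.6%: ~97.0% survival over 5 years; AFR 1.0%: ~95.1%
```

Even at the optimistic end, roughly 1 drive in 30 is expected to die within 5 years - which is why the "have backups from day 1" advice above is not hyperbole.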

About disk usage:

Most of our storage servers fit in the 0.6% range of annual drive failures (data collected across about 3,000 disks).

but one particular heavily used cluster (300 disks total) sees a 3 to 5% annual disk failure rate (5 to 10 times worse).
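The two fleets above can be compared directly in expected failures per year - a quick sketch using the answer's own figures (3,000 disks at ~0.6% versus 300 disks at ~4%, taking the middle of the quoted 3 to 5% range):

```python
# Expected annual failures for the two fleets described above.
# Rates are the answer's observed figures, not a general benchmark.

def expected_failures(n_disks, annual_failure_rate):
    """Expected number of drive failures per year across a fleet."""
    return n_disks * annual_failure_rate

print(expected_failures(3000, 0.006))  # ~18 drives/year in the main fleet
print(expected_failures(300, 0.04))    # ~12 drives/year in the small, hot cluster
```

Note that the 300-disk cluster, a tenth the size, burns through almost as many drives per year as the entire rest of the fleet - which is the practical meaning of "5 to 10 times worse".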

What to do?

Use RAID. Do backups. Keep some backups on another technology (tape, optical). Do more backups. Then some more. Only the paranoid will survive.

does it make a difference if it is used heavily instead of never being connected to system?

This point is the only one not covered by other answers so far.

A drive in use is going to see more wear and tear on the physical mechanisms (i.e. the head moving apparatus and the spindle motor) and is exposed to environmental conditions (changes in operating temperature inside a machine for instance, and increased chance of physical knocks if it is an external drive).

Inactive media may still degrade over time, though. Changes in environment (mainly temperature for hard drives; humidity too for tapes) can cause the magnetic storage to slowly degrade, as can exposure to other factors in storage (local magnetic fields, temporary or otherwise, contaminants in the air, ...). You may also find that a drive that has been powered off for a long period will fail to spin up once reconnected, due to mechanical parts having "seized up" - there are techniques that sometimes rescue a drive from this long enough to get the data off onto another drive, but they are not reliable. I've only ever had one drive fail in this way, and I managed to get it going with the risky "quick spin" technique, but it does happen. So if you are storing data on drives for a long time, store it on at least two drives and test them occasionally (that applies to any other medium too, not just drives - don't just store and forget if the data is important).

The biggest killer is temperature: keep your hard disks below 30°C. The next biggest killer is shock, either from physically dropping the disk or from what is known as a 'head crash', where the read/write head scrapes against the magnetic coating of the platter due to a power or mechanical failure.

The MTBF (mean time between failures) is a rough indication of how long a drive (on average) will last, irrespective of load, and is usually supplied by the manufacturer - although do take it with a pinch of salt.
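A short sketch of why MTBF figures deserve that pinch of salt: assuming a constant failure rate (the exponential model usually implied by an MTBF spec), a quoted figure can be converted into an annualized failure probability. The 600,000-hour value below is a typical consumer-drive spec chosen for illustration, not a specific product's rating.

```python
import math

# What a quoted MTBF implies under an exponential (constant-rate) failure
# model. 600,000 hours is an illustrative consumer-drive figure.

HOURS_PER_YEAR = 8760

def implied_afr(mtbf_hours):
    """Annualized failure probability implied by an MTBF, exponential model."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

mtbf = 600_000
print(f"Mean lifetime: {mtbf / HOURS_PER_YEAR:.0f} years")  # ~68 years
print(f"Implied AFR: {implied_afr(mtbf):.2%}")              # ~1.45% per year
```

The headline number ("a 68-year mean!") and the per-year reality (~1.5% of drives failing annually, before real-world conditions make it worse) are both consistent with the same spec - which is exactly the comment below about means versus individual drives.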

Do take manufacturers' MTBF values with all the salt in the Dead Sea. If they were at all accurate, we would have hard disks that last 30 years at least.
– caliban, Sep 7 '09 at 10:23


MTBF is just that: a "mean", an average. Some drives will survive 30+ years and some will last five minutes, without invalidating the average. Also, there will be caveats with any MTBF figure (usually in the small print somewhere), like how many power cycles and spin-up/spin-down cycles are assumed over a given period, and the fact that the MTBF assumes perfect conditions that no drive experiences for its whole life in the real world.
– David Spillett, Sep 7 '09 at 10:54

The advice on temperature is wrong according to a study by Google: "Contrary to previously reported results, we found very little correlation between failure rates and either elevated temperature or activity levels."
– Emily L., Apr 4 '16 at 9:56