
53 Comments

I think the Re at least is only relevant when you are space- or controller-constrained; otherwise, getting a second, cheaper disk is probably going to give better speed and reliability on average.

Generally, I'd have preferred a comparison with the cheaper drives, as I don't see the point of spending more on something that will probably have the same observed failure rates in real usage, and will saturate Gbit LAN when streaming.

Of course, if you commit to only a 2-bay NAS, then it might pay off to go with disks with slightly tighter tolerances and more thorough QA, but once you hit 4+ bays, there's rarely a reason not to just throw redundancy at the problem.

Running the hard drive(s) at temperatures beyond their stated maximum simply decreases their lifespan; it won't cause a dramatic failure or lead to an escape scenario for the magic smoke within the drive. At least, not for the duration that Ganesh T S devoted to this comparison.

Ignorant here, but I want to raise the issue. In casual research on a home NAS with RAID, I ran across a comment that regular drives are not suitable for that service because of their threshold for flagging errors. IIRC the point was that they wait longer to do so, and in a RAID situation that could make eventual error recovery very difficult. Drives designed for RAID use flag errors earlier. I came away mostly with the idea that you should only build a NAS/RAID setup with drives (e.g. the WD Red series) designed for that.

A VERY broad and simplistic explanation is that "RAID enabled" drives will limit the amount of time they spend attempting to correct an error. The RAID controller needs to stay in constant contact with the drives to make sure the array's integrity is intact.

A normal consumer drive will spend much more time trying to correct an internal error. During this time, the RAID controller cannot talk to the drive because it is otherwise occupied. Because the drive is no longer responding to requests from the RAID controller (as it's now doing its own thing), the controller drops the drive out of the array - which can be a very bad thing.

Different ERC (error recovery control) methods like TLER and CCTL limit the time a drive spends trying to correct the error so it will be able to respond to requests from the RAID controller and ensure the drive isn't dropped from the array.

Basically a RAID controller is like "yo dawg, you still there?" - With TLER/CCTL the drive's all like "yeah I'm here" so everything is cool. Without TLER the drive might just be busy fixing the toilet and takes too long to answer, so the RAID controller just assumes no one is home and ditches its friend.
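The timeout interaction described above can be sketched as a toy model. The numbers here are illustrative assumptions, not values from the review: controllers commonly give up on an unresponsive drive after several seconds, while a consumer drive in deep error recovery can grind away for minutes.

```python
# Toy model of why ERC/TLER matters: the controller drops any drive
# that stays unresponsive past its own timeout. All numbers are
# illustrative assumptions.

CONTROLLER_TIMEOUT_S = 8  # hypothetical RAID controller patience, in seconds

def drive_survives(error_recovery_limit_s):
    """Return True if the drive answers the controller before the timeout.

    error_recovery_limit_s: how long the drive will retry an internal
    error before giving up and reporting it (the ERC/TLER cap).
    None models a consumer drive with no cap, which can retry for minutes.
    """
    if error_recovery_limit_s is None:
        time_busy = 120  # consumer drive: deep recovery can take minutes
    else:
        time_busy = error_recovery_limit_s
    return time_busy < CONTROLLER_TIMEOUT_S

print(drive_survives(7))     # NAS drive with a 7 s ERC cap -> stays in the array
print(drive_survives(None))  # uncapped consumer drive -> dropped from the array
```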

brshoemak, that was the clearest and most concise (not to mention funniest) explanation of TLER/CCTL that I've come across. For some reason, most people tend to confuse things and make it more complicated than it is.

I can't really follow that reasoning; maybe I am missing something. First off, error checking should in general be done by the RAID system, not by the drive electronics. Second, you can always successfully recover the RAID after replacing one single drive. So the only way to run into a problem is not noticing damage to one drive before a second drive is also damaged. I've been using cheap drives in RAID-1 configurations for over a decade now, and while several drives have died in that period, I've never had a RAID complain about not being able to restore.

Maybe it is only relevant on very large RAIDs seeing very heavy use? I agree, I'd love to hear somebody from AT comment on this risk.

"you can always successfully recover the RAID after replacing one single drive."

This isn't true. If you get any errors during the rebuild and only had a single redundancy drive for the data being recovered, the RAID controller will mark the array as unrecoverable. Current drive capacities are high enough that RAID 5 has basically been dead in the enterprise for several years; the risk of losing it all after a single drive failure is too high.
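The scale of that risk is easy to put a number on. Assuming the 1-per-10^14-bits unrecoverable read error (URE) rate that typical consumer drive spec sheets quote (NAS/enterprise drives usually quote 10^15), rebuilding a degraded 4 x 4TB RAID 5 means reading the three surviving disks end to end:

```python
# Rough odds of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded 4 x 4TB RAID 5 array. Assumes the
# consumer-class spec-sheet rate of 1 URE per 1e14 bits read.
URE_RATE = 1e-14              # errors per bit read (spec-sheet assumption)
bits_read = 3 * 4e12 * 8      # three surviving 4 TB drives, in bits

p_clean = (1 - URE_RATE) ** bits_read  # chance the whole rebuild is error-free
p_failure = 1 - p_clean

# Better than even odds that the rebuild trips over a URE.
print(f"{p_failure:.0%}")
```

With 10^15-class drives the same arithmetic gives roughly a tenth of that risk, which is a big part of why "RAID-grade" drives are specced the way they are.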

My experience for home usage is that RAID 1, or no RAID at all plus regular backups, is best. RAID 5 is too complex for its own good and never seems to be as reliable or to repair the way it's meant to. Because data is spread over several disks, if it gets upset and goes wrong it's very hard to repair and you can lose everything. Also, because you think you are safe, you don't back up as often as you should, so you suffer the most.

RAID 1 or no RAID means a single disk has a full copy of the data, so it is most likely to work if you run a disk repair program over it. No RAID also focuses the mind on backups, so if it goes, chances are you'll have a very recent backup and lose hardly any data.

++ this too. If you *really* need volume sizes larger than 4TB (the size of a single drive or RAID-1), you should bite the bullet and get a pro-class RAID-6 or RAID-10 system, or use a software solution like ZFS or Windows Server 2012 Storage Spaces (don't know how reliable that is, though). Don't mess with consumer-level striped-parity RAID: it will fail when you most need it. Even pro-class hardware fails, but it does so more gracefully, so you can usually recover your data in the end.

Avoid Storage Spaces from Windows. It's an unproven and slow "re-imagination" of RAID, as Microsoft likes to call it. The main selling point is flexibility of adding more drives, but that feature doesn't work as advertised because it doesn't rebalance. If you avoid adding more drives over time it has no benefits over conventional RAID, is far slower, and has had far less real-world testing on it.

For home use I've gone from RAID 5 to pooling + snapshot parity (DriveBender and SnapRAID respectively). It's still one big ass pool so it's easy to manage, I can survive two disks failing simultaneously with no data loss, and even in the event of a disaster where 3+ fail simultaneously I'll only lose whatever data was on the individual disks that croaked. Storage Spaces was nice in theory, but the write speed for the parity spaces is _horrendous_, and it's still striped so I'd risk losing everything (not to mention expansion in multiples of your column size is a bitch for home use).

If you have a good hardware RAID card, with BBU and memory, and decent drives, then I think RAID 5 works just fine for home use.

I currently have a RAID 5 array using a 3Ware 9560SE RAID card, consisting of 4 x 1.5TB WD Black drives. This card has battery backup and onboard memory. My RAID 5 array works beautifully for my home use. I ran into an issue with a drive going bad; I was able to get a replacement, and the rebuild worked well. There's an automatic volume scan once a week, and I've seen it fix a few errors quite a while ago, but nothing very recent.

I get tremendous speed out of my RAID 5, and even boot my Windows 7 OS from a partition on it. I'll probably eventually move that to an SSD, but they're still expensive at the size I need for the C: drive.

My biggest problem with RAID 1 is that it's hugely wasteful in terms of disk space, and it can be slower than just a single drive. I can understand that for mission-critical stuff, RAID 5 might give issues. However, for home use, if you combine true hardware RAID 5 with backups of important files, I think it's a great solution in terms of reliability and performance.

"First off, error checking should in general be done by the RAID system, not by the drive electronic."

The "should in general" port is where the crux of the issue lies. A RAID controller SHOULD takeover the error-correcting functions if the drive itself is having a problem - but it doesn't do it exclusively, it lets the drives have a first go at it. A non-ERC/TLER/CCTL drive will keep working on the problem for too long and not pass the reigns to the RAID controller as it should.

Also, RAID1 is the most basic RAID level in terms of complexity, and I wouldn't have any qualms about running consumer drives in a consumer setting - as long as I had backups. But deal with any RAID level beyond RAID1 (RAID10/6), especially those that require parity data, and you could be in for a world of hurt if you use consumer drives.

RAID systems can't do error checking at that level because they don't have access to it: only the drive electronics do. The problems with recovering RAID arrays don't usually show up with RAID-1 arrays, but with RAID-5 arrays, because you have a LOT more drives to read. I swore off consumer-level RAID-5 when my personal RAID-5 (on an Intel Matrix RAID-5 :P) dropped two drives and refused to rebuild with them even though they were still perfectly functional.

Just fix it by hand - it's not that difficult. Of course, with pseudo hardware RAID, you're buggered, as getting the required access to the disk, and forcing partial rebuilds isn't easily possible.

I've had a second disk drop out on me once, and I don't recall how exactly I ended up fixing it, but it was definitely possible. I probably just let the drive "repair" the unreadable sectors by writing 512 rubbish bytes to the relevant locations, tanked the loss of those few bytes, and then rebuilt to the redundancy disk. So yeah, there probably was some data loss, but bad sectors aren't the end of the world.

And by using surface scans you can make the RAID drop drives with bad sectors at the first sign of an issue, then resync and be done with it. A 3-6 drive RAID 5 is perfectly okay if you only have moderate availability requirements. For high availability, RAID 6/RAID 10 arrays with 6-12 disks are a better choice.

Intel chipsets do not offer hardware RAID. The RAID you see is purely software. The Intel BIOS just formats your hard drive with Intel's IMSM (Intel Matrix Storage Manager) format. The operating system has to interpret the format and do all the RAID parity/stripe calculations. Consider it like a file system.

Calling Intel's RAID "hardware" or "pseudo-hardware" is a misconception I'd like to see die. :)

"First off, error checking should in general be done by the RAID system, not by the drive electronic. "

You need to keep in mind how drives work. They are split into 512B/4K sectors, and each sector has a significant chunk of ECC at the end, so all drives are continually doing both error checking and error recovery on every single read they do.

Plus, if it is possible to quickly recover an error, then obviously it is advantageous for the drive to do this, as there may not be a second copy of the data available (i.e. when rebuilding a RAID 1 or RAID 5 array).

With a difference of 1 to 2 watts for the Seagate, I fail to see how that would be too much of a cause for concern for cooling systems. Even with a 5-disk array it should still be under 10 watts difference in the most demanding circumstances and about 5 watts on average.

I was thinking about that. A 1W difference has got to be negligible for any desktop-based system. Even 3-4W differences, while large on the relative scale, are small in the absolute sense. I don't see how you could make the statement "if you want more performance, Seagate; if you need cool and quiet, WD." Is there no other reason to pick one drive over the other besides a 1W power consumption difference?
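Pricing out the worst case from the comments above makes the point concrete. The electricity rate here is a hypothetical assumption ($0.12/kWh); your local rate will differ:

```python
# Annual running cost of a small per-drive power difference, assuming
# 24/7 operation and a hypothetical electricity rate of $0.12/kWh.
drives = 5
watts_extra_per_drive = 2      # upper end of the difference discussed above
rate_usd_per_kwh = 0.12        # assumption; varies by region

kwh_per_year = drives * watts_extra_per_drive * 24 * 365 / 1000
cost = kwh_per_year * rate_usd_per_kwh
print(f"{kwh_per_year:.1f} kWh/yr -> ${cost:.2f}/yr")  # 87.6 kWh/yr -> $10.51/yr
```

So even the full-array worst case is on the order of ten dollars a year, which supports treating the power delta as a tiebreaker at most.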

It is hard to see the relative differences when quickly switching between the performance graphs for the different drives, because some of them are on different scales for each drive. Is there any way the graph scales can be made uniform?

I can see the scale on the side, but for example the random read graph has a max Y-axis value of 50ms for the WD SE drive, 100ms for the Red drive and WD RE, and 200ms for the Seagate. At first glance, it looks like the Seagate is owning because of the scale - it requires extra thought to figure out what the graph would look like on the same scale for comparison.

The Seagate NAS HDDs seem quite good in terms of reliability thus far. I have a 3 TB and 4 TB in my WHS (JBOD) and they've made it past the crucial 1 month mark without issues. But as mentioned in the review, these haven't been on the market very long.

These are the first Seagates I've purchased in years, due to the past issues you alluded to.

Does anyone know how read patrolling factors into usage numbers? There is no way I would come even close to 150 TB/yr in a home NAS with my own data, but with ZFS read patrolling going on in the background I don't exactly know what the true load is.
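Scrubbing can in fact dominate the rated workload. A rough upper-bound estimate (assuming each scrub reads the full drive capacity; a ZFS scrub only reads allocated data, so real numbers will be lower):

```python
# Upper-bound estimate of annual read volume from periodic scrubs alone,
# assuming each scrub reads the drive's full 4 TB capacity.
capacity_tb = 4

def scrub_tb_per_year(scrubs_per_year):
    """TB read per year if every scrub touches the whole drive."""
    return capacity_tb * scrubs_per_year

print(scrub_tb_per_year(52))  # weekly scrub: 208 TB/yr, over a 150 TB/yr rating
print(scrub_tb_per_year(12))  # monthly scrub: 48 TB/yr, comfortably under it
```

By this estimate, a weekly full-surface scrub alone could exceed a 150 TB/yr workload rating before any user I/O, while a monthly schedule stays well inside it.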

I don't really understand these read or read/write ratings... IIRC, Google's data said reads and writes do not affect failure rate on hard drives. (SSDs are obviously a different story, for writes.)

I have had good experience with Hitachi drives in NAS use. HGST has both consumer-class and enterprise-class 7200 rpm 4TB drives capable of NAS use. Any plans to include the HGST drives in the review evaluation of 4TB NAS-capable drives?

To me, speed doesn't matter any more, not for the NAS market. Even the slowest HDD will saturate 1 Gbit Ethernet in sequential read/write, and random read/write is slow regardless and mostly limited by the NAS CPU anyway. I want price and disk size. Reliability is also a concern, but since most HDDs will just fail in one way or another over time, it is best to have something like Synology where, across a number of disks, you can tolerate up to 2 HDD failures.
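The saturation claim is easy to sanity-check: Gigabit Ethernet tops out at 125 MB/s raw, and after protocol overhead real transfers usually land around 110-115 MB/s. The drive figures below are rough ballpark assumptions, not measurements from this review:

```python
# Why sequential speed rarely matters on a 1 GbE NAS: the link, not the
# disk, is the bottleneck. Drive figures are rough ballpark assumptions.
GBE_RAW_MBPS = 1000 / 8        # 125 MB/s theoretical line rate
GBE_EFFECTIVE_MBPS = 113       # typical after TCP/SMB overhead (assumption)

drives_seq_mbps = {
    "5400rpm-class 4TB": 140,  # assumed outer-track sequential read
    "7200rpm-class 4TB": 170,  # assumed outer-track sequential read
}

for name, mbps in drives_seq_mbps.items():
    bottleneck = "network" if mbps > GBE_EFFECTIVE_MBPS else "disk"
    print(f"{name}: {mbps} MB/s sequential -> bottleneck is the {bottleneck}")
```

Under these assumptions, both drive classes outrun the effective link rate, so for sequential streaming the network caps throughput either way.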

The power numbers are wall power, so they include power supply losses and the power consumed by the LenovoEMC PX2-300D, in addition to the power consumed by the hard drive. So the absolute values aren't useful (unless you own a PX2-300D), but the numbers do show which drives consume less power.

Doing a 'torture test' means you use them a lot constantly though, not that you put them on a burner to see what happens. And frankly, a drive should adhere to its stated lifetime/performance somewhat regardless of how heavily you use it. And don't forget that all drives, unless powered down, spin constantly anyway.

And quite a few NAS boxes for the home have so-so cooling, so it would be valid to test how hot HDDs get during intensive (but normal) use.

I am planning to buy a Drobo 5N as a Plex video server and also for Time Machine backup. That would seem to require limited data transfer. From the review it would seem that the Red is just as good as the RE, and at nearly half the price would be the better choice. Do you agree that the Red is a better choice than the RE for my needs?