I was already planning on getting 4x 4TB Red drives as I no longer trust Seagate's reliability (and their lame 2-year warranty), and the 4TB Hitachis are only available in 7,200rpm flavour, which I know not to be particularly quiet.

Now that I have seen the relative noise and vibration levels via SPCR, this cements my position. All I need to do now is find the money before the 5TB models start coming out.

That means it is pretty much unusable for NAS, because it will die in 2-4 years, like most of the Greens. Of the 6 WD20EARS I had over the years, only 1 remains - all the rest were RMA'd and replaced by another WD20EARS or WD20EARX. 2 of my 6 WD20EARX died over time too.

After seeing your comment, I contacted my rep at WD. She pointed out that head parking is in ALL the Red models, contrary to Larry's findings in the reviews. I checked the WD tech datasheet -- http://www.wdc.com/wdproducts/library/S ... 771442.pdf -- and sure enough, all the Reds are rated for 600,000 load/unload cycles under "Reliability/Data Integrity." The WD Green datasheet specs 300,000 load/unload cycles. So...

Your experience is contrary to ours, faugusztin. I've used Greens in many builds over the years and afaik they're all working A-OK. Ditto for Larry, who might have even more of them deployed. The HP Microserver we reviewed became SPCR's primary server, and it has several Greens in it, some of which came from the previous server, so longevity hasn't been an issue for us at all -- 0 failures from at least a dozen drives used over the years, and usually not with kid gloves, either, as some get handled quite a lot.

Anyway, even if you believe the head parking can cause premature failure, the Reds are rated for double the number compared to the Greens. I'm not convinced the load/unload cycle count is directly attributable to drive failure. There's been a lot of discussion of this question -- an endless thread somewhere in the storage forums -- and no one has ever shown a cause/effect relationship.
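For a sense of scale, here's a back-of-envelope sketch of how quickly an aggressive idle timer could eat through those ratings. The parks-per-hour figure is an assumption picked for illustration (light intermittent I/O re-waking the heads shortly after each 8-second park on a 24/7 NAS), not a measured value:

```python
# Back-of-envelope: how fast an aggressive idle timer can consume a drive's
# rated load/unload cycle count. The parks/hour figure is an assumption
# for illustration, not a measurement.

def years_to_rated_cycles(rated_cycles, parks_per_hour, active_hours_per_day):
    """Years until the rated load/unload count is reached."""
    cycles_per_year = parks_per_hour * active_hours_per_day * 365
    return rated_cycles / cycles_per_year

# Assumed worst case: ~60 parks/hour, drive powered 24/7.
green = years_to_rated_cycles(300_000, parks_per_hour=60, active_hours_per_day=24)
red   = years_to_rated_cycles(600_000, parks_per_hour=60, active_hours_per_day=24)
print(f"Green (300k rating): {green:.2f} years")
print(f"Red   (600k rating): {red:.2f} years")
```

Under those (pessimistic) assumptions the Green rating is consumed in well under a year and the Red rating in about double that, which is why doubling the rating matters even if the cause/effect link to failure is unproven.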

As to why Larry didn't notice the head parking in the 1 and 3 TB models, perhaps it's because those models are so damn quiet. Usually we listen for it rather than look for the power drop. We'll have to check those samples out again. Look for some errata postscripts in those reviews.

And now the single remaining WD20EARS (treated with WDIDLE3 too, after I found out about the problem): 2 years 10 months 17 days / 574 / 472 / 156826

The only other WD20EARS I have is technically dead - WD Data Lifeguard Diagnostics simply says "too many bad sectors" on the long test, and the drive is out of warranty. All the other failed drives except one were sent for RMA after their Current Pending Sector count started climbing - every time this happened I took out the drive and ran it through WD Data Lifeguard Diagnostics, which usually reported READ ELEMENT FAILURE, status code 7; I then packed the drive up and sent it for RMA, getting a WD20EARX in replacement (or money, in the few months after the Thai floods). The only exception reported a SMART error serious enough for the BIOS to halt the boot - I didn't even bother testing that one and sent it for RMA right away.

All drives were mounted via grommets or other soft-mounting systems in a Nanoxia Deep Silence 1 or Fractal Define R3, and all drives are/were in the 30-47C temperature range according to their own temperature sensors.

So unless the WD30EFRX reports its load/unload cycle count incorrectly, it doesn't do the 8s sleep.

Somehow I doubt that the 3TB Red has the 8s spindown, unless it's not reflected in the SMART data. My four WD30EFRX drives have the following SMART data (runtime / power cycle count / power-off retract count / load & unload cycle count):
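For anyone wanting to collect the same four figures from their own drives, the attributes in question are the standard SMART IDs 9 (Power_On_Hours), 12 (Power_Cycle_Count), 192 (Power-Off_Retract_Count) and 193 (Load_Cycle_Count), as reported by `smartctl -A`. A minimal parsing sketch (the sample text below is made up for illustration, reusing the figures quoted earlier in the thread):

```python
# Sketch: pull the four attributes discussed above out of `smartctl -A`
# output. IDs 9/12/192/193 are the standard SMART attribute numbers; the
# sample output below is fabricated for illustration.

def parse_smart_attrs(smartctl_text, wanted=(9, 12, 192, 193)):
    """Return {attribute_id: raw_value} for the attributes of interest."""
    values = {}
    for line in smartctl_text.splitlines():
        parts = line.split()
        if parts and parts[0].isdigit() and int(parts[0]) in wanted:
            values[int(parts[0])] = int(parts[-1])
    return values

sample = """\
  9 Power_On_Hours          0x0032   080   080   000    Old_age   Always       -       14987
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       574
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       472
193 Load_Cycle_Count        0x0032   148   148   000    Old_age   Always       -       156826
"""
print(parse_smart_attrs(sample))
```

Comparing attribute 193 against the rated 300,000/600,000 load/unload cycles is exactly the check being made in this thread.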

Are you using any of these drives in a RAID array, and if so, are you using a hardware RAID card?

I will be using a hardware RAID card - albeit a low-end PCI-E SATA 2 card that's a few years old (it still has a hardware XOR controller and causes no CPU usage). I would like to know whether the "Red" series of drives can somehow detect that they are in a RAID array rather than running as a single (or JBOD) drive, or whether there is some kind of two-way communication that disables the head parking feature, or at least gives it a much longer head-parking delay. Something similar exists for "staggered spin-up", which both my RAID card and the drives support; it is enabled here, and I have never found it on any desktop-grade motherboard. It simply spins up one drive at a time so there isn't a power surge when 8x HDDs all draw power simultaneously.
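The point of staggered spin-up is easy to see with some rough numbers. The per-drive current figures below are ballpark assumptions for a typical 3.5" drive, not specs for any particular model:

```python
# Why staggered spin-up matters: spin-up draws far more 12 V current than
# steady-state operation. Both figures below are ballpark assumptions for
# a generic 3.5" drive, not manufacturer specs.

SPINUP_AMPS_12V = 2.0   # assumed peak 12 V draw during spindle spin-up
IDLE_AMPS_12V   = 0.4   # assumed 12 V draw once already spinning

def peak_12v_amps(n_drives, staggered):
    """Peak simultaneous 12 V current when n drives power on."""
    if staggered:
        # One drive spins up at a time; drives already up are merely idling.
        return SPINUP_AMPS_12V + (n_drives - 1) * IDLE_AMPS_12V
    # All drives spin up at once.
    return n_drives * SPINUP_AMPS_12V

print(f"8 drives, no stagger: {peak_12v_amps(8, staggered=False):.1f} A")
print(f"8 drives, staggered:  {peak_12v_amps(8, staggered=True):.1f} A")
```

Under these assumptions an 8-drive box goes from a ~16 A surge on the 12 V rail to under 5 A, which is the whole reason the feature exists.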

I do know that HDDs such as the "Red" line leave a much longer period of time before trying to address any errors themselves, to allow time for the RAID card to handle them. This is one of the reasons I want to eventually get a set of HDDs that are actually "designed" to be in RAID arrays, and it's also why many drives over the years have not been very reliable in a RAID array, or have simply "broken the array".

Nonetheless, I have absolutely no trust at all in Seagate (ironically I used to have no trust at all in WD, but many years can make a huge difference), and those are the only 2 manufacturers of large-capacity drives left (I don't count Hitachi as they are owned by WD, I don't count Toshiba because they make shit, not hard drives, and I don't count Samsung as, apart from being owned by Seagate, they no longer seem to sell drives under their own name at all).

No RAID, just plain drives connected to the Intel onboard controller. And TLER is actually the other way around - a TLER-enabled drive (good for RAID) gives up the recovery process within 7 seconds maximum, after which the RAID controller takes over. A normal drive (Green, Blue, Black) with no TLER can keep trying recovery for as long as it needs - seconds, minutes, even hours if it wants.
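A toy model of that interaction makes the failure mode concrete. The controller timeout and recovery times below are illustrative assumptions, not real firmware values:

```python
# Toy model of the TLER interaction described above. The 8 s controller
# timeout and the recovery durations are illustrative assumptions only.

CONTROLLER_TIMEOUT_S = 8.0   # assumed RAID controller command timeout
TLER_LIMIT_S = 7.0           # TLER caps in-drive error recovery at ~7 s

def raid_outcome(recovery_needed_s, tler_enabled):
    """What happens when a drive in a RAID array hits a bad sector."""
    if tler_enabled:
        # Drive gives up within ~7 s and reports the error in time;
        # the controller repairs the sector from parity/mirror data.
        return "controller repairs from redundancy"
    if recovery_needed_s > CONTROLLER_TIMEOUT_S:
        # Desktop drive keeps retrying for minutes while staying silent;
        # the controller assumes it is dead and drops it from the array.
        return "drive dropped from array"
    return "drive recovered in time"

print(raid_outcome(120.0, tler_enabled=True))
print(raid_outcome(120.0, tler_enabled=False))
```

This is why a perfectly healthy desktop drive with one slow sector can "break the array": it never actually fails, it just answers too late.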

To be fair to the WD20EARX, one of them did die within a month of purchase, so that could be classified as sort of DOA. So in my case it was about the pathetic show the WD20EARS drives provided - I actually had a situation where I bought a drive, it died within a few months, then the replacement died within a month, then the replacement of the replacement died after a year, and then it was replaced by a WD20EARX which runs to this day. FYI: http://www.macobserver.com/tmo/article/ ... ure_rates/ - the WD20EARS had a 4.83% failure rate. Maybe I was unlucky, I don't know. Still, I have only one working and one failed WD20EARS remaining; all the other drives were RMA'd over time to WD20EARX drives.

Now the question is whether or not to wait until later this year to get the 5 TB WD Red as indicated by this roadmap.

I wonder if those 5 TB models will be 5 platter x 1TB/platter or 4 platter x 1.25TB/platter? If it's the former, I wonder how much noisier they will be compared to the 4 TB Red. Are there any existing 5 platter HDDs that we can use as approximation for how much the noise increases when going from 4->5 platters?

No RAID, just plain drives connected to the Intel onboard controller. And TLER is actually the other way around - a TLER-enabled drive (good for RAID) gives up the recovery process within 7 seconds maximum, after which the RAID controller takes over. A normal drive (Green, Blue, Black) with no TLER can keep trying recovery for as long as it needs - seconds, minutes, even hours if it wants.

I am glad you understood what I meant - we are talking about the same thing, but it has been years since I read about it. Until my next drive array purchase I will never have used it, because previously the drives that supported it cost double the price... I assume the "Red" drives support this technology; otherwise they are a long way short of being RAID-ready drives.

Quote:

To be fair to the WD20EARX, one of them did die within a month of purchase, so that could be classified as sort of DOA. So in my case it was about the pathetic show the WD20EARS drives provided - I actually had a situation where I bought a drive, it died within a few months, then the replacement died within a month, then the replacement of the replacement died after a year, and then it was replaced by a WD20EARX which runs to this day.

A work colleague of mine had a similar thing happen with a bunch of Seagate drives: the entire RAID 5 array destroyed itself twice within 2 weeks, with no recoverable data, yet each drive tested OK. Then the massive scandal broke. Seagate (my once beloved HDD manufacturer) had released an entire range of HDDs without even testing them in real-world situations - basically the firmware was totally fu*ked - and they denied there was a problem at all for 3 whole months. Either way, back to the story: my colleague had 4 drives replaced twice (8 drives replaced inside 1 month, having only bought 4), they were all useless, so he got a full refund and bought 4 drives from another manufacturer (either WD Greens or Samsungs, I can't remember which) and has never had a problem with them since.

Quote:

The WD20EARS had a 4.83% failure rate. Maybe I was unlucky, I don't know. Still, I have only one working and one failed WD20EARS remaining; all the other drives were RMA'd over time to WD20EARX drives.

In my personal experience (and from working as a computer engineer), we have seen a pretty good correlation (not scientific) between how HDDs are packaged by the supplier and their failure rate. We once bought 20 (twenty) identical HDDs in a single order and had a massive failure rate compared to a previous 20 (twenty) otherwise identical drives bought across several orders; we assumed those drives had been thrown around by the couriers or in shipping. Going back even further, over a decade ago I worked in the sales department of a very large UK PC builder/supplier, and in one month a silly number of PCs were returned with dead HDDs. It turned out that an entire pallet - many hundreds of HDDs - had been given a hefty drop onto solid concrete. The failure rate was above 50% within 1 month, and no doubt it spiralled higher as time went on and the drives saw more use.

That is one thing that is almost worth ignoring with SSDs: packaging is nearly irrelevant compared to HDDs. Personally I will spend £5 more per drive buying from a company who I know treats their HDDs with respect and packages them well. OCUK is pretty damned great on this point, Scan comes second IMO, and the rest vary quite a lot. I once saw a case of "return without testing" because you could see the HDD through the ripped cardboard box - no packaging at all; 4x drives (a good 2KG) will rip a cardboard box open when thrown around by the average courier. (That was an unknown distributor with 4x £400 server-grade SAS drives; they repeated this again because they are dicks and at last count had wiped out £2400 of HDDs - 6 out of 8, and that was in 2 days; chances are the other 2 were toast as well - due to a total disregard for packaging.)

But I am now more uncertain than ever whether to get the 3TB or 4TB Red. I'm going to populate a HP Microserver, so I'm looking for 5 drives total over the next couple of years, probably buying 3x3TB or 3x4TB drives initially. Aside from the extra space and cost/GB, do you guys reckon the slightly added noise/vibration of the 4TB will present an issue?

Oh and btw, I will run a fan mod + picoPSU mod + SSD boot drive on the Microserver, so the base hardware will be completely silent.

So does WDIDLE3 work on the 4 TB reds? If not, is the parking audible? For reference, my WD20EARX and WD20EARS are basically silent in the bottom chamber of my Acoustipacked Antec P182, which has two Nexus case fans (upper and lower chambers) at 500 RPM and a Slipstream PWM CPU fan at 200 RPM, plus a fanless GPU and Seasonic X650; I don't even hear the seeks. These are great drives. Also, can anyone compare these drives to the 4 TB Seagate NAS version? The 4 TB Seagate non-NAS head parking is definitely audible from 3-4' in the lower chamber of my non-Acoustipacked P180, and I wouldn't put one in my main system.

I was already planning on getting 4x 4TB Red drives as I no longer trust Seagate's reliability (and their lame 2-year warranty), and the 4TB Hitachis are only available in 7,200rpm flavour, which I know not to be particularly quiet.

Technically, Hitachi still sells a 4TB 5400RPM green drive, but it has 5 platters as opposed to the 4 platters in the 4TB "green" Seagate ST4000DM000 and the 4TB "green" WD WD40EZRX.

For what it's worth, I'm migrating my data to a new fileserver and I'm exclusively using the ST4000DM000. The only reason is that at the time I started the project - which is still ongoing, but in the final stages - it was the only 4-platter 4TB green drive available from any manufacturer. WD was way too slow in bringing green 4-platter 4TB drives to market, and they're still way more expensive than the Seagates. I got eight new ST4000DM000 drives from ebay for $115-130 a piece; the cheapest price on a WD40EZRX I've seen on newegg was $170. It may not make much difference if you're buying just one drive, but when you have to buy 7 to 10 drives the difference starts to add up.

We'll see how those Seagate drives fare, but hard drive failure is a fact of life. It is my opinion, however, that aside from shipping damage and isolated design flaws - such as the infamous Deathstar debacle, or the crappy Seagate firmware a couple of years back - most hard drives have very similar mechanical failure rates. You just can't make a judgement one way or the other. I've been very lucky: the last hard drive that failed on me was a 250GB IDE Hitachi (and the RMA'ed drive worked perfectly fine, by the way, until the day I retired it). However, I'm not assuming my luck will continue. You have to presume that your hard drives will fail, which is why you need some kind of backup or redundancy. Either buy two drives and back up regularly, or if you need more storage, do like I do and build some sort of RAID system. My new fileserver will be using 7+ drives with 2 drives dedicated to parity. I will be using SnapRAID to calculate parity nightly, and I will also install automated tools to monitor hard drive health continuously. That way I figure I've done as much as I can. Relying on a single hard drive manufacturer because of a perceived quality edge is betting against fate. Anyway, what I'm trying to say is that the debate over which manufacturer is more reliable should be secondary to whether every person has a backup or not.
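The core idea behind single-parity schemes of the kind SnapRAID and RAID 5 use can be sketched in a few lines: parity is the byte-wise XOR of the data drives, so any one failed drive can be rebuilt from the survivors. Real implementations work on large blocks, handle unequal file sizes, and (in SnapRAID's case) support multiple parity drives; this is just the core mechanism, with made-up toy data:

```python
# Simplified single-parity scheme: parity is the byte-wise XOR of the data
# drives, so the contents of any one failed drive can be reconstructed from
# the surviving drives plus the parity. Toy data, illustration only.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

drives = [b"AAAA", b"BBBB", b"CCCC"]   # one toy block per data drive
parity = xor_blocks(drives)

# Drive 1 "fails": rebuild its block from the remaining drives plus parity.
rebuilt = xor_blocks([drives[0], drives[2], parity])
assert rebuilt == drives[1]
print("rebuilt drive 1:", rebuilt)
```

Two parity drives (as in the 7+2 setup described above) use a second, independent checksum so that any two simultaneous failures are survivable; the XOR above only covers the single-failure case.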

Agree about the brand choice. The oldest hard drive in my storage machine is a 1.5TB 7200.11 that continues to run very well. My only slight complaint is that they're a bit loud, but all my 7200rpm drives happen to be Seagate. WD Caviar Greens have caused me the most trouble. I like the Reds a lot, although they're a bit expensive.

Wow, that's quite a find - so these drives really do have head parking enabled? I must say I doubt it.

I am considering buying the Reds for my ZFS RAID system, but I really need to know whether this head parking is audible or not. I don't need extra noise every 8 seconds.

It's not a matter of noise; ZFS is quite I/O intensive, and the cycle count will rise quite fast as shown in post #5, leading to premature failure. I was thinking about buying WD Greens for my ZFS install until I read that thread. Reds all the way!

I'm planning on building a new NAS and was wondering why so many people were complaining about loud seeks (only when using WD Reds). It seems there is even some mention of it here in one thread: viewtopic.php?f=7&t=65849

And based on those threads, it would seem that WD silently raised the seek noise figure in their own specs by 3 dB between 2013 and 2014, without changing the model name?
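For context on why a 3 dB change in a spec sheet is not a rounding error: decibels are logarithmic, and +3 dB corresponds to roughly double the sound power. A quick check of the arithmetic:

```python
# Decibels are logarithmic: a spec change of +3 dB is roughly a doubling
# of sound power, not a 3% tweak.

def power_ratio(delta_db):
    """Sound power ratio implied by a dB difference."""
    return 10 ** (delta_db / 10)

print(f"+3 dB  -> {power_ratio(3):.2f}x the sound power")
print(f"+10 dB -> {power_ratio(10):.0f}x the sound power")
```

Perceived loudness is a separate (and murkier) question, but on the power scale a 3 dB bump in the seek-noise spec is a genuinely significant change.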

Now, I understand the reviews were most probably done before this, but as usual when doing a new build, my primary source of info on component noise is your reviews, and I think it's quite possible I would have been quite disappointed if I had not found this info beforehand... If you can't do a new review, maybe the conclusion page should carry a big warning about this? I also wonder if they might send out drives with old firmware to sites they know value silence, and a newer, noisier firmware to sites that value speed...

I just installed a new 6TB Red, and in my opinion it is still a very quiet drive! It may not be as quiet as the early ones (although I have 2x 3TB Reds from 2 years ago, and after adding the new 6TB I can't tell the difference), but the new drives are plenty quiet enough in my opinion. Maybe not the absolute quietest at the moment, but everything is relative, and they have many advantages (low power consumption and low heat being my favourites, after the low noise) making them worthwhile.

I don't know if this affects the 6TB models, since they might have been released after the change?

The whole thing sounds weird, but at the same time WD did make quite a significant change to the seek-noise specs, and there seem to be a lot more people around the internet complaining about loud seeks since that happened.

It becomes complicated because almost all the reviews of the 3 and 4TB models are from before this. Almost all the random threads where people complain that the drives are noisy have other users saying they "have many drives in use that are very silent". I have only found two threads (linked above) where the change in specs is actually mentioned.
