Do you guys spin down disks in ZFS pools?

I have a setup with a few SAS drives for a home NAS running on an Intel server with OpenMediaVault (Debian 7-ish).
But I can't for the life of me get them to spin down. I would like it for power saving, at least half of the day.
I guess one of the issues is that hdparm doesn't want to talk to my disks at all. I found some other tools that could force them to spin down, but that led to IO errors on ZFS and the pool was lost until the next reboot. It was probably possible to get it back somehow.

So the question is, do you let your disks spindown? And if so, how? =)

You generally can't use hdparm with SAS disks (or in some cases even on SAS controllers with SATA drives - depends on the capabilities exposed by the driver). You need to use its SAS/SCSI brother, sdparm. You can set the spindown timers on the drives and they should spin back up on access. ZFS should be happy with it.

Pay special attention to the arguments for sdparm - they are radically different from the arguments used with hdparm.
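As a sketch of what setting those spindown timers might look like: on the SCSI Power Condition mode page, the standby condition timer is counted in 100 ms units. The field names used below (SCT, STANDBY) are an assumption and vary by sdparm version - run `sdparm --enumerate` on your system to check. This only builds the command strings; run them against a real device at your own risk.

```python
# Hedged sketch: build the sdparm invocations that would enable a standby
# timer on a SAS disk. Field names SCT/STANDBY on the Power Condition mode
# page are assumed here; verify them with `sdparm --enumerate` first.
def sdparm_standby_cmds(dev, minutes):
    ticks = minutes * 60 * 10  # the standby condition timer counts 100 ms units
    return [
        f"sdparm --set SCT={ticks} {dev}",  # set the standby condition timer
        f"sdparm --set STANDBY=1 {dev}",    # enable the standby condition
    ]

for cmd in sdparm_standby_cmds("/dev/sda", 20):
    print(cmd)
```

For a 20-minute timer this prints the two commands with SCT=12000 (20 min = 1200 s = 12000 ticks).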

Also - you'll likely hear the "spinning down drives is not a good idea" comments coming from others on this thread. It's a trade-off and I don't judge; just know that you are trading a bit of power savings for an approach that likely shortens the useful life of the drives. I do it on my "archive" server that collects weekly snapshots, but don't use it on things that get accessed more frequently. YMMV.

Perhaps not helpful, but I don't bother. Power use at idle isn't that much, and spin up/down seems like it would be the most stressful operation on the drive. There's even a spinup counter in the SMART data.

If that causes more wear, do I save money in the long run on power or on replacement drives? I honestly don't know. Power is pretty cheap here; areas with higher energy costs might see a different result.

With only a few drives and only spinning down half the day... I wouldn't, as it's not worth the additional likelihood of failure, faster wear, etc...

Say you have 5 drives spun down for 10 hrs a day... they use 5 * 4 W @ idle (estimate) = 20 W, * 10 hrs = 200 Wh saved per day, * 30 = 6,000 Wh or 6 kWh per month, then * your power cost per kWh (25 cents for me) = $1.50/mo saving, or $18/year. It would take 10+ years to 'pay' for a drive with the power savings, so YMMV on the value of that vs. causing a failure... def. not worth it in MY opinion.

For loads of folks at home, the power savings are simply too small to be worthwhile. Apart from any additional wear on the disks themselves, you also need to consider what it does to the environment they're in. If you are constantly or regularly temperature-cycling other components, you are likely to see a higher failure rate of those too. Things like power supplies and mainboards really don't like temperature cycling, for example.

I personally don't spin down my little array. The only home use case I would deem worth the trade-off is archive storage, where the disks are cycled along with the rest of the storage host on an extended timescale, monthly for example. Perhaps if you had racks and racks of disks in a datacenter somewhere, half of which were not needed 3/4 of the day, you might benefit financially from spinning down some disks.


Damn T-,

Breaking down the numbers and all!!

Even *IF* my 846 were fully populated, I would only save about ten bucks per month!

Not really worth it when looking at the cost of a new drive... but I'm more concerned with instant access and not needing to wait if I pull a file that's on spinners.

I remember when I got my first (and only) WD Green 1TB many years ago... it upset me to no end with the head parking, and having to spin back up, etc... those seconds! I actually still have those drives and the ReadyNAS, lol, way too slow for these days! Need to junk it.

@Madhelp - They're mechanical, and starting and stopping electric motors does put more stress on them. The heat cycles may be even bigger depending on airflow, the 'spun down' temp vs. 'in use' temp, how many times it cycles per day, etc...

It would be interesting to see it taken to the 'extreme', something like every other minute, until something dies.

There are likely other forces here too: starting/stopping an HDD may not be as bad for it in a quality chassis vs. DIY or home-tower setups. The starting vibrations may be worse? I know it's nitpicky, but that's kind of what we're doing here lol!! Just more thoughts.

@T_Minus Thanks for your insight. I hear everything you're saying, and it's very logical and hard to disagree with. On the other hand, I'm not sure I fully agree.

In regards to temperature cycling, I doubt for most end users we are really talking about huge temperature swings here. If you're spinning the drive up, pounding it for hours, and then spinning it down, all without adequate cooling, then maybe.

Vibration is a good point. In a crap setup with drives spinning up and down constantly, I could see issues as well.

Of the hundreds of drives that I've had the privilege of owning or working with, among the ones that died or were dying, I'd estimate 97% didn't have any issues spinning up. And if we're talking about wear, I assume we mean the motor?

Unless a drive is spinning down and then back up every other minute, I don't think you will see much difference in lifespan. (Although if it is doing that, you will actually use more power.)
If it's 2 or 3 times a day, I am sure you're fine in terms of life and well within the manufacturer's rating.

Don't know about you guys, but generally all my non-SSD drives have been replaced after a few years, given the huge increase in size and significant price reduction (whereas I still have almost all of my SSDs, even the Intel 320/330 series 120 GB ones).

My drives are rated for 600,000 load/unload cycles. That's a pretty long lifetime of spindowns, I think. But I think I overestimated my savings from spinning down - probably around $40 yearly. Maybe not even worth the hassle.
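A quick check of how long it would take various spindown schedules to exhaust a 600,000-cycle rating like the one above (the numbers in the comments are approximate):

```python
# Years until a cycle rating is exhausted at a given spindown frequency.
def years_to_exhaust(rated_cycles, cycles_per_day):
    return rated_cycles / (cycles_per_day * 365)

print(round(years_to_exhaust(600_000, 3)))    # 2-3/day: centuries of headroom
print(round(years_to_exhaust(600_000, 720)))  # every other minute: ~2 years
```

So the "2 or 3 times a day" schedule mentioned earlier in the thread never comes close to the rating, while the "every other minute" extreme would chew through it within a drive's normal service life.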

I have not enabled drive spindown on my arrays since the first 6 months I had my first NAS years ago. Even in extortion-rate areas of CA, where the top-tier electrical rate is about $0.36 per kWh, the savings come out to about $13 per drive per entire year spent spun down vs. spinning - not worth it (based on a 4 W difference per drive, spinning vs. not, for low-power drives).

Spinup/spindown puts a lot of wear on drives if you are doing it frequently or on demand. A good drive can last at least 50K hours spinning (6 years or more) without wearing out the bearings enough to cause excess vibration. My long-term plan is to replace older drives in my main pool one by one with new drives twice as large and reclaim the old drives for a backup pool, eventually ending up with twice as many half-size drives there. The backup pool will be spun up, synced, and scrubbed once every month or two and then spun down again. Even the old drives should last many extra years that way, adding only 6-12 spin cycles and 600-800 hours per year versus over 8K hours per year.

Not trying to convince anyone one way or the other. You pays your money and you takes your choice.
I've always spun down my HDDs. I probably have about 40 running at the moment, so that 4 W (or so) per disk does add up. I've had very few disk failures over the years; typically disks get aged out of usefulness due to their capacity way before they finally give up.
Most of the disks spend most of their time spun down - lots of rarely accessed media files. Everything frequently accessed lives on SSD pools to avoid spinning up the disks.

Remember that in FreeNAS 9.* your main pool with the jails dataset would never stay spun down; something similar is probably going on in 10 unless you have multiple pools. I seem to remember that most of the spindown techniques boiled down to sending the power-save command to the disks involved via smartctl, but I could be wrong.

I do spin down disks on my self-built NAS (FreeBSD 11 with nine 2 TB WD Black disks: 8 in a raidz1 zpool, and 1 for local backups). Power usage goes from nearly 80 W to about 35 W when I spin down those 9 disks. This NAS has been running for almost a year now. I keep stats on the state of the disks, and roughly 70-80% of the time the disks are spun down (the machine is on 24 h/day). I track IO activity with self-made scripts (zpool iostat 60), and when I see no IO for about 20 minutes, I spin down the disks using camcontrol (camcontrol standby /dev/$disk). Then when a request comes for disk access (either locally on the machine, or via Samba on my home network), the disks spin up and the share becomes available.
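A sketch of that idle-detection logic, assuming one `zpool iostat` sample per minute recorded as "was there any IO?". The real script would then shell out to `camcontrol standby /dev/$disk` (FreeBSD); here only the decision is modeled, since the exact sampling script isn't shown in the post.

```python
# Spin down only after a full window of quiet minutes.
IDLE_MINUTES = 20

def should_spin_down(io_seen):
    """io_seen: one bool per minute, newest last; True = IO observed."""
    if len(io_seen) < IDLE_MINUTES:
        return False  # not enough quiet history yet
    return not any(io_seen[-IDLE_MINUTES:])

print(should_spin_down([False] * 25))           # True: 20+ quiet minutes
print(should_spin_down([False] * 19 + [True]))  # False: IO one minute ago
```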

When using "camcontrol sleep /dev/$disk", ZFS would lose disks and degrade the zpool tank.

Polling whether a disk is awake or sleeping is done via "/usr/local/sbin/smartctl -n standby /dev/$disk". When the disk is awake, it exits with 0; if it's in STANDBY mode, it exits with 2.
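Interpreting those exit codes in a monitoring script could look like the sketch below (with `-n standby`, smartctl skips the check rather than waking a sleeping drive, which is why the exit code alone tells you the state; the "error" bucket for other codes is my addition):

```python
# Map `smartctl -n standby` exit codes to a drive state, per the post above:
# 0 = awake, 2 = in STANDBY (check skipped so the drive stays asleep).
def disk_state(exit_code):
    return {0: "awake", 2: "standby"}.get(exit_code, "error")

print(disk_state(0))  # awake
print(disk_state(2))  # standby
```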

To maximize disk STANDBY time, the zpool contains only data; the OS of the NAS itself runs from a separate SSD, and no further tuning/config on the machine is needed.

The NAS has 16GB of ECC RAM, so a lot is cached. I frequently see that via Samba I can browse quite far into the tree of dirs on this zpool, while the disks are sleeping.
