The ZFS NAS Box Thread

I figured that ECC support was the reason. The bit of research I've done on mini-ITX hadn't yielded any other options. That board is somewhat hard to find at a decent price, so I'm going to pick it up ASAP.

If I had to do it again I would probably go with a MicroATX chassis for a NAS build, but I purchased my current chassis a few years ago, before I planned to go ECC/ZFS on my NAS. It started out with an Atom board, and those, along with AMD options, are plentiful so long as ECC is not a requirement. I'd been holding off on the rebuild because 3TB WD Reds weren't available, but fortune smiled on me: I found an open-box deal on the board the same day WD Red 3TB drives went on sale on Amazon for $149. I ordered the whole kit that day.

Actually, when I rebooted last night, I checked it again in the BIOS. It was 56C. But then using coretemp in FreeBSD, it was around 48C. Perhaps the BIOS is inaccurate. But I will say that the BIOS is a pain. I don't know if it's a problem for anyone else, but the Intel splash screen won't go away (I've disabled it in the settings with no change), and while it lists the options (F2 for setup, F10 for boot menu) the entire time, there seems to be only a brief window in which the keys actually register, because I've hit F2 a string of times and had it go right to the OS. To have any shot at actually getting into the settings, I have to go nuts on the F2 key.
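
(On the FreeBSD reading: it comes from the coretemp kernel module, and with that loaded the per-core values show up in sysctl. Roughly, as a sketch:)

  kldload coretemp
  sysctl dev.cpu.0.temperature dev.cpu.1.temperature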

SkyMonkey wrote:

That board is somewhat hard to find at a decent price, so I'm going to pick it up ASAP.

I see that it's OOS at Newegg. I've seen it go out of stock a couple of times before. It should come back, usually at $150. And if you're patient (and get the Newegg newsletters), there are semi-regular sales on either server boards or Intel boards.

SkyMonkey wrote:

WD Red drives are on a nice sale right now on Amazon and the Egg, probably going to pick up 5x 2TB or 3TB here pretty soon.

Looks like normal price to me. Camelegg.com is your friend.

SkyMonkey wrote:

Currently I'm trying to determine what power supply I want to use. I really would like something modular, but that limits me to a Silverstone Strider 500W (ST50F-P) or a Silverstone 450W SFX (ST45SF-G), both of which are not all that well reviewed (noise, power quality). In reality, my requirements for a power supply are: power quality first, and modularity and silence second. I think I'll end up with the Seasonic G360 (SSR-360GP), or the OEM Seasonic 300W (SS-300ET) for reasons of quality. Seasonic is also apparently bringing a 450W G-series to market which is fully modular, but it seems to be unavailable in the US yet (or maybe not even released).

I have the SS-300ET and it's just fine. It fits well and is very efficient. I've been using the 350ET in the server this is replacing for years without any issues. The cables are a bit on the short side, but that's what you want if you can't have modular (and you can't at a low wattage like this).

2TB WD Reds are currently at the lowest price I've been able to find via Camelegg etc. (~$109). They are also available at Amazon and some other places for the same price; I'll likely be buying two drives from three different places to spread out my failure luck a bit. The 3TB drives have been priced better before (though not on the Egg, it seems). ~7.5TB should be more than enough space for a couple of years at least.

SS-300ET is what I'll be going with most likely. I can always modularize the PSU myself if it really bugs me.

$109 for a 2TB Red is pretty awesome.

How do the Red drives compare to the RE series (RE3, RE4)? I'm getting close to filling up 3 older 1TB drives. I've been pretty happy with the RE3 and RE4, but I'm not at all familiar with the Reds.

I'd also like to find a gigantic drive to slap in an eSATA enclosure for monthly backups. I figure I'd use ZFS on that as well and scrub it regularly so I at least know the data is good. It looks like 4TB drives may start falling below $300 soon. 3TB pricing varies wildly.
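
(By "scrub it regularly" I mean something along the lines of a monthly cron job; the pool name below is just a placeholder:)

  zpool scrub backup
  zpool status -v backup    # check for checksum errors once the scrub completes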

Is there any disadvantage to running a RAIDZ spread across the 4 motherboard ports + 2 ports on a cheap HBA? Or should I go for an M1015 and put all the drives on it?

Well, since only two of the onboard ports are SATA 6Gbps, one reason to get a dedicated HBA card would be to run all your drives on 6Gbps ports. I'm not sure if there would be a significant difference in performance, but it's worth considering.

Given that no mechanical drive even comes close to saturating SATA-II, that doesn't seem like a very good reason.

One drive is a drive built for 24/7 use in a server, the other is a rebadged Green with funny firmware marketed for 24/7 use.

I think there is a good bit of truth to this statement, but despite this I went with the Reds anyway.

I was planning on using Greens originally, but if the firmware has been changed to better accommodate systems like FreeNAS, Synology, et al., then I suppose it's worth paying a little extra for the assurance that when a drive drops out of the array (which it most likely will at some point, since they're likely just re-badged Greens), it behaves in a manner that won't take down the whole array (and all your data) with it.

Admittedly, the RE4 drives are assuredly a much better drive, but they're also much more expensive. You get what you pay for, I suppose.

While the RE4 drives may be better than the WD Red or Green drives, they're also the worst Enterprise drive in the industry for use with a hardware RAID controller, especially in large file servers.

HellDiver wrote:

I prefer the RE4 - it's more expensive, it's louder, it's hotter, but it's a drive that's got a good history of reliability.

Not from anyone I've ever worked with. I have four customers who tried WD drives (against my advice), and all four have sworn off of them due to poor reliability.

Your anecdotal evidence says that. Every Dell server in this place runs WD drives now, and exactly none have failed, and some of those are 5 years old.

Accs wrote:

HellDiver wrote:

I prefer the RE4 - it's more expensive, it's louder, it's hotter, but it's a drive that's got a good history of reliability.

Not from anyone I've ever worked with. I have four customers who tried WD drives (against my advice), and all four have sworn off of them due to poor reliability.

Again, I see your anecdote and raise you an opposite anecdote. The only drives I've had fail in RAID arrays are Seagate and Toshiba...

My setup is not as serious as most here are. Currently it just holds a couple of disks and does no RAID or mirroring at all, although that might change in the future. It also uses non-ECC memory.

I'd wanted a dedicated NAS for a long time, but, among other minor things, cost was always the deciding factor against it. The stuff from e.g. Synology isn't versatile enough, especially not for its price, and a dedicated computer meant significant hardware costs and would also guzzle quite a bit of power, which is terribly expensive here. An Atom-based build? I somehow dislike those, and they're barely usable as soon as anything demands the slightest bit of CPU power.

Enter the Biostar NM70I-847. It costs 65€, has a soldered-on Celeron 847 with a 17W TDP that runs rings around any Atom, 4 SATA ports (one of them 6Gbit), Gbit Ethernet, and an x16 PCIe slot with 8 lanes connected. Another 40€ added 8GB of RAM, 15€ a thumb drive to boot off, and the last 12€ went to a used Fujitsu micro tower. That's 120€ for hardware ready to swallow 4 disks, with the option to stick in something like a Dell PERC 6iR for 60€ to add another set of 8 (with a new case, though). The PSU will be changed soon, and there are some UPS shenanigans planned that will raise the cost (without an HBA) to about 230€ and should lower the power consumption dramatically.

I've decided to take the slightly harder route and set up FreeBSD 9.1. Samba with full domain integration is also set up, as are istgt and an Apache instance to host the iPXE boot scripts. Automatic snapshots are managed by zfSnap and exposed as shadow copies. The catch is that I can only expose snapshots with the same lifetime, as shadow_copy2 doesn't seem to accept any wildcards in the snapshot name format. Setting up power saving for the disks was a bit of a hassle but works fine, and SMART monitoring was a breeze. I'm currently trying to assess how well deduplication works for some of the backups that target the NAS, and it seems worth it. So worth it that I ordered a second 8GB stick to have enough RAM left over for caching data.
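
For anyone wanting to do the same, the relevant chunk of the share definition looks roughly like this (the share name, path and the --1m lifetime suffix are only examples; zfSnap appends the TTL to each snapshot name, which is exactly where the wildcard limitation bites):

  [tank]
      path = /tank/share
      vfs objects = shadow_copy2
      shadow: snapdir = .zfs/snapshot
      shadow: sort = desc
      shadow: localtime = yes
      shadow: format = %Y-%m-%d_%H.%M.%S--1m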

So yeah, I'm quite happy with my build. Sooner or later there will be an additional HBA and the assortment of disks currently in use will get replaced by a RAIDZ2 with a ZIL.

Quick question: what sort of CIFS read speeds are you guys getting to a Windows box over gigabit Ethernet? I can't seem to break 35MB/s from either a Solaris Express 11 box or an OpenIndiana 151a7 box. Write speeds max out at around 113MB/s. This is with sync disabled, atime off, no compression and no dedup.

I'm sure that back in the day I could max the link in both directions with OpenSolaris.

Is this with "use sendfile" disabled? If not, I'd give that a try; at least on FreeBSD, there are known (and rather severe) performance issues when using sendfile() on ZFS volumes.
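
For reference, it's a one-liner in smb.conf, either in [global] or in the share definition:

  use sendfile = no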

EDIT: Also, check what the raw disk throughput is during those transfers (as opposed to the throughput reported for the pool).

Quick question: what sort of CIFS read speeds are you guys getting to a Windows box over gigabit ethernet?

FreeBSD to Win7 Pro, 32-bit. Single file >1GB, sync standard, primarycache=all, Samba 3.6: maxes out the client disk, over 80MB/s. With primarycache=metadata and no L2ARC, though, it's about 15MB/s and gstat shows over 80% busy time for the host disk.

Just in case: for lots of small files, CIFS impersonates a snail. For something like the untarred sources of a Linux kernel, 35MB/s seems about right.

Write speeds are basically the same. For files in the couple-of-MB range I get ~40MB/s; for large files it's over 80MB/s. I didn't test those with anything other than primarycache=all, though.
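
(In case anyone wants to reproduce this, the property is set per dataset; the dataset name here is just an example:)

  zfs get primarycache tank/share
  zfs set primarycache=metadata tank/share    # or =all (the default) / =none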

Intel S1200KPR with a Celeron G555 and 16GB of Kingston ECC Valueram went into a PC-Q25B for initial testing using a SeaSonic SS-300ET power supply. Everything went together well, by FAR the hardest part was getting the I/O shield into the case. I've built in Lian-Li cases before, and am used to working with thicker side panels, but that was the tightest interference fit on an I/O shield I've ever experienced. I actually had to bend the tabs in quite a bit on the short sides, and then use a mallet and tons of force to get it in there. I then had to blow the case out and wipe it down to get the aluminum filings out :X.

Anyway, got Ubuntu running off a live USB and also ran 5+ runs of Memtest86+ to make sure the hardware was working. Looking good so far.

Next steps are to get a replacement cooler (the stock Intel has some hum/harmonic that is just too annoying) and find some thin, short, left angle SATA cables (these are nice, but much longer than I need, and pricey: http://www.moddiy.com/products/Orico-SA ... le%29.html).

Any recommendation on a cooler that will fit in there without running into the PCIe slot or the RAM? Considering a Scythe Kozuti or Shuriken if I can find one. I'm not sure if either of those will block the PCIe slot, I know the Big Shuriken will. Silence is the goal here.

My PC-Q25B was the same, but I was using an Asus board. I thought it was just the Asus shield. My PSU lost some paint going through the hole, too (OCZ CoreXtreme 500W jobbie).

Yeah, I had the same experience. Someone wasn't thinking with that one.

SkyMonkey wrote:

Any recommendation on a cooler that will fit in there without running into the PCIe slot or the RAM? Considering a Scythe Kozuti or Shuriken if I can find one. I'm not sure if either of those will block the PCIe slot, I know the Big Shuriken will. Silence is the goal here.

I've seen a couple units from Nexus out there, but they're quite expensive. Most anything quiet and reasonably priced is too large in one or more dimensions. Someone elsewhere mentioned Arctic Cooling HSFs, but looking around Newegg at the compatible models, it seems that the reviews are mixed. I used the stock fan from an 1156 Core i7 on my Celeron and I'm pretty happy with the noise profile. Have you changed the temperature settings for the CPU fan in the BIOS at all?

I ran the Intel retail cooler, and the Asus board's Q-Fan (or whatever it's called) basically only spun the fan up during heavy loads (unRARing and the like). I also had some Akasa heatsink on there, minus its fan, which worked perfectly as it was close to the 120mm fan of the PSU. That's running a Celeron G530.

Finally got my FreeNAS box set up and running for the most part. Works surprisingly well. Power draw at idle is 45W, and that's without spinning down all 4 hard drives.

Unfortunately, I quickly discovered that the GigE port on my home PC sucks (Marvell), because it couldn't reliably transfer to the NAS at faster than 150Mbps. Meanwhile my crappy ThinkPad Edge (over the exact same network drop) could consistently transfer 900+Mbps to/from the NAS.

Got myself a PCI-E Intel Pro/1000 GT card, slapped it in the PC, and everything is much better now. File transfer rates exceed 900Mbps consistently. I did notice that if I was streaming an HD video from the NAS while copying files to the NAS, my file transfer rates fell considerably. Though I suppose this is to be expected since I'm reading and writing to the NAS simultaneously. Yes?

Also: my CPU core temps seem to hover between 51 and 58C when receiving files. (Celeron G540 dual-core processor) Is this a reasonable temp for them to be running at?

If you are writing to the same disk you're reading from, then yes it makes sense that you get limited at the disk.

Deffexor wrote:

Also: my CPU core temps seem to hover between 51 and 58C when receiving files. Is this a reasonable temp for them to be running at?

Depends on the ambient temp and how good the cooling is. However, any temperature under 70C under load is not worth worrying about. If you idle above 40C in a low ambient temp, you might want to go over your cooling. For instance, run it with the case open to see if the inside of your case gets warmer than it should. Or if the chip gets pretty warm but the heatsink stays cooler than you think it should, make sure that it's properly seated and bolted down.

Black Jacque: So it turns out the issue had to do with a bug in Windows Vista where whenever any A/V playback was taking place, it would throttle my network connection so as to assure no hiccups in my A/V playback.

I disabled this by setting the NetworkThrottlingIndex entry in the registry to 0xFFFFFFFF. Now my network runs at full-speed when A/V is playing back. No A/V hiccups either.
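
For anyone else hitting this, the value lives here (back up the key before fiddling with it, and expect to need a reboot for the change to take effect):

  HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile
      NetworkThrottlingIndex (REG_DWORD) = 0xFFFFFFFF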

Well, it's about time I consolidated all the storage at home into one big box, and with hard disk reliability being what it is nowadays (you can be pretty certain you'll be swapping a WD before the warranty runs out), is RAIDZ2 the way to go?

The idea is to make a 6-disk pool, 4 usable and 2 parity, with a boot SSD doubling as L2ARC/ZIL, but I'm kinda out of date on what software to use. The last time I checked, OpenSolaris had kinda died, Nexenta still existed, and FreeNAS seemed to have grown up quite a bit; any recommendations? SMB 2 is a must, as almost everything is running Windows except some VMs and a MacBook.
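
Roughly what I have in mind, as a sketch with made-up FreeBSD-style device names (the SSD would be partitioned so the OS, the SLOG and the L2ARC each get their own slice):

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5
  zpool add tank log gpt/slog      # ZIL/SLOG on one SSD partition
  zpool add tank cache gpt/l2arc   # L2ARC on another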

RAIDZ2 should survive all but the worst hardware failures, but I'm curious: since using a NAS is basically putting all your eggs in one basket, do you guys still back up data regularly to an external disk just to be sure? I don't think any online vendor would accept an 8 or 12TB dataset without charging for storage; besides, uploading/downloading that much is going to take, uh, a while.

Also: hardware recommendations welcome. I think I'll go for WD Reds or similar disks; RE Blacks get kinda hot without forced ventilation, and the RE4-GP (greens) are gone from the market? I haven't decided on a case, CPU or motherboard yet. A small one would be welcome, but if there are too many drawbacks I could just grab a tower and a single-socket workstation/server board.

RAIDZ2 should survive all but the worst hardware failures, but I'm curious: since using a NAS is basically putting all your eggs in one basket, do you guys still back up data regularly to an external disk just to be sure?

Yes. In part, the NAS box is a backup device. My server's storage pool gets snapshotted and sent there. What's only on the NAS gets shoved elsewhere for backup.

There are some potential scenarios that even RAIDZ2 does not protect against, and an accidental "zfs destroy -R poolroot" is just one of them. RAID and RAIDZ are no substitutes for backups.
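
A minimal sketch of that snapshot-and-send dance, with placeholder names ("backup" being a pool living on the external disk):

  zfs snapshot -r tank@2013-04-12
  zfs send -R tank@2013-04-12 | zfs receive -F backup/tank
  # and later, incrementally:
  zfs send -R -i tank@2013-04-12 tank@2013-05-12 | zfs receive -F backup/tank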

But the big question remains ... which OS? I have serious Linux experience and would like to misuse the NAS as a lightweight test platform for self-written web pages based on content management systems. The options are:

- Ubuntu (personal fav, but is it difficult to set up?)
- FreeNAS
- FreeBSD (ugh)
- OpenIndiana