Currently, I'm tasked with progressively building out a ZFS store. IOPS are not the priority; data security and capacity are.

In essence, what we have is one brute of a machine that contains the OS (FreeBSD most likely, but OmniOS is a possibility), 2 RAID-1 OS disks, and 4 SSDs for caching and ZIL duty. RAM will start off at 16GB, but I can go all the way up to 128GB if need be (and if accounting doesn't suffer a stroke). That in turn is connected to a JBOD chassis that can handle up to 45 drives. Each drive is 4TB.
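For reference, attaching those 4 SSDs would look something like the sketch below. The pool name `tank` and the `daN` device names are placeholders, not anything from this setup:

```shell
# Assumes a pool named "tank" and SSDs at da0-da3 (hypothetical names).
# Mirror the SLOG (ZIL) devices: losing an unmirrored SLOG can cost
# in-flight synchronous writes.
zpool add tank log mirror da0 da1

# L2ARC (cache) devices need no redundancy; a failure only costs cache hits.
zpool add tank cache da2 da3
```

Note that L2ARC headers consume ARC (RAM), which is one more argument for going well past the 16GB starting point.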

I was considering incrementing by 9 drives, using a RAID-Z2 layout, and then, as needed, creating a new vdev of another 9 drives until the 45-drive capacity is reached, after which we would add another chassis that also holds 45 drives. There is a limit of 3 or 4 connected chassis before a new, similar configuration would need to be created.
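To put numbers on the two layouts being weighed here, a quick back-of-the-envelope calculation (raw capacity only, ignoring ZFS metadata overhead and the usual advice to keep pools below ~80% full):

```python
DRIVE_TB = 4  # each drive is 4 TB

def raidz2_usable(width: int, vdevs: int, drive_tb: int = DRIVE_TB) -> int:
    """Raw usable TB for `vdevs` RAID-Z2 vdevs of `width` disks each.
    Two disks per vdev go to parity; metadata/slop overhead is ignored."""
    return (width - 2) * vdevs * drive_tb

# Option 1: five 9-disk vdevs fill the 45 slots exactly, no spares.
nine_wide = raidz2_usable(width=9, vdevs=45 // 9)

# Option 2: six 7-disk vdevs (42 disks) plus 3 hot spares.
seven_wide = raidz2_usable(width=7, vdevs=42 // 7)

print(nine_wide)   # 140 TB raw usable
print(seven_wide)  # 120 TB raw usable
```

So the 7-wide plan gives up roughly 20TB of raw capacity per chassis in exchange for three shared hot spares and one extra vdev's worth of IOPS.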

Another possible option is to go up in 7s (still RAID-Z2) and, once we hit the 42-drive mark, slot in 3 hot spares that can be used at any moment by any of the vdevs if a drive pushes up the daisies.
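A sketch of that 7-wide plan in `zpool` terms (again, the pool name and `daN` device names are placeholders; with SAS drives you would more likely address them by multipath or GPT labels):

```shell
# Initial pool: one 7-disk RAID-Z2 vdev (device names are hypothetical).
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6

# Growth step: add another 7-disk RAID-Z2 vdev to the same pool.
zpool add tank raidz2 da7 da8 da9 da10 da11 da12 da13

# At the 42-drive mark, dedicate the last 3 slots to hot spares,
# shared by every vdev in the pool.
zpool add tank spare da42 da43 da44
```

With `zpool set autoreplace=on tank`, a spare is pulled in automatically when a drive faults, rather than waiting for a manual `zpool replace`.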

On a side note, I'd also highly appreciate any input on FreeBSD vs. Illumian vs. OmniOS with respect to ZFS. I'm fairly fluent in FreeBSD and I've played a bit with OmniOS, but I have no working knowledge of Illumian.

TIA

[edit]

Referencing Nex7's blog post, point 9: we're building our vdevs from 9 disks, 2 of which are parity, so we're still in the clear, but I believe we could exchange that for a 15-disk vdev (still with 2 parity disks) since we're using RAID-Z2.

Point taken on the SATA drives; switching to SAS will not be a problem. The SSDs are Intel server drives, but I'll be checking. All in all, because we have a few dozen boxes running on 100% identical hardware, I'm not worried about compatibility being an issue.

The reason we did not go with Linux is that we're not 100% certain how decent and stable the ZFS implementation there is. If it were stable...

What advantage does OpenIndiana have over FreeBSD for ZFS? What is the advantage of Napp-it?



Please don't ask for opinions on Server Fault. We only want questions that are answerable by facts. For more info, see the FAQ. A valid question would be of the form "I have requirement A and plan configuration B. Does this meet A?".
– Sven♦ Apr 26 '13 at 21:09

Just a word of warning: what you plan looks fine in theory, but realize that ZFS on FreeBSD is not well tested on systems with so many disks. After research and discussions with FreeBSD devs, we realized that there are very few FreeBSD contributors who actually have a ZFS system with more than 30-40 disks. The developers, many of whom work for free, simply don't have systems on which to test large configurations like that. You will be in uncharted territory and will mostly be on your own. If you do choose the FreeBSD/ZFS route, get a support contract, such as from iXsystems (maintainers of FreeNAS).
– Stefan Lasiewski Apr 27 '13 at 20:16

Interesting. So which OS actually does have ZFS systems with more than 40 disks in the field? Illumian? OpenIndiana? OmniOS?
– SteveMustafa Apr 28 '13 at 21:05

That, my friend, is the million-dollar question. The answer may be "Solaris", or to use traditional hardware-based storage appliances. Neither Illumian nor OpenIndiana seemed mature enough when I last looked 6 months ago. zfsonlinux looks really cool, and the latest stable release sounds good for scientific-computing workloads, but I'm waiting for some time to test it.
– Stefan Lasiewski Apr 28 '13 at 22:12

Wow, we're running some really large loads on OpenIndiana for our Node.js installations (a.k.a. the farm), and it is ROCK solid. This started before I showed up, because the previous SA was a Solaris buff. I'm more of a FreeBSD guy, but even on the ZFS channel no one seemed surprised at the number of disks, although many have validly commented on the difficulty of backing that up.
– SteveMustafa Apr 30 '13 at 0:00

If you're asking whether this is the best way to achieve scale-out storage... probably not. Your design has no storage-head high-availability. What happens if your "brute" of a server has a motherboard failure? You'd lose access to EVERYTHING until it's repaired.

If you're asking whether the hardware is supported... it depends. You didn't provide any hardware or component specifications. Really: what server, which controllers, which disks, which SSD manufacturer?

If you're asking whether there are best practices for what you're trying to do: yes, there are. (I'd be rethinking the number of disks in each vdev if I were you.)