I have one FreeNAS box that is dedicated to backing up data from my workstation. I went with mirrors: started with 2x 6TB and then added another mirror of 2x 6TB. Mirrors are the cheapest to expand, since each new vdev only needs two drives.

Hit send by mistake before completing. If you can get another drive, then go with a 6-drive RAIDZ2. More peace of mind.

A second disk failing in the same vdev while resilvering is probably a valid concern, but I've never encountered it across multiple ZFS resilvers and RAID rebuilds on my home systems.

My big NAS that backs up everything else is currently 4x 7x8TB RAIDZ3 (four 7-drive RAIDZ3 vdevs of 8TB disks).

@K D Z3 on an array that size is too much! Maybe if you had all 28 drives in the same array; otherwise you're just wasting disks. 24 data drives + 3 parity + 1 hot spare and you're set. That would let you rebuild immediately with the hot spare, and you'd spend 4 drives on redundancy instead of 12.
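To put numbers on the overhead difference, here is a back-of-the-envelope sketch in Python; it assumes 8TB drives and ignores ZFS metadata, padding and TB/TiB conversion:

```python
# Rough capacity/overhead comparison of the two 28-drive layouts discussed above.
# Assumption: 8TB drives, raw capacity only (no ZFS metadata, padding or TiB conversion).
DRIVE_TB = 8

def raidz_layout(vdevs, drives_per_vdev, parity, spares=0):
    """Return (usable_TB, drives_spent_on_redundancy) for a simple RAID-Z layout."""
    data_drives = vdevs * (drives_per_vdev - parity)
    redundancy = vdevs * parity + spares
    return data_drives * DRIVE_TB, redundancy

# Current pool: 4 vdevs of 7-drive RAID-Z3
print(raidz_layout(vdevs=4, drives_per_vdev=7, parity=3))              # (128, 12)

# Suggested: one 27-drive RAID-Z3 (24 data + 3 parity) plus 1 hot spare
print(raidz_layout(vdevs=1, drives_per_vdev=27, parity=3, spares=1))   # (192, 4)
```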

@K D @TangoWhiskey9 @darkconz any thoughts on whether I need the LSI controller? I initially got it to virtualize, and while I successfully got the drives served to the VM, I opted to build this separate machine because I'm not only new to FreeNAS, I'm also new to Hyper-V. Didn't want to battle two fronts at the same time in a worst-case scenario.

For backup I'd agree that a 7-drive Z3 is probably overkill (or maybe he just really values his data!), but I'd say the optimum would be 2x 13-disk Z3 vdevs. That would allow two hot spares and result in 160TB of storage with twice the write speed of a monolithic array. Of course that depends on his performance needs...
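For reference, the 160TB figure works out like this (a quick sketch, again assuming 8TB drives and ignoring ZFS overhead and TB/TiB differences):

```python
# 28 drives total: 2 x 13-disk RAID-Z3 vdevs + 2 hot spares (8TB drives assumed).
DRIVE_TB = 8
VDEVS = 2
DRIVES_PER_VDEV = 13
PARITY = 3

usable_tb = VDEVS * (DRIVES_PER_VDEV - PARITY) * DRIVE_TB
spares = 28 - VDEVS * DRIVES_PER_VDEV
print(usable_tb, spares)   # 160 TB usable, 2 hot spares; 2 vdevs ~ 2x the write streams
```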

The bigger issue, though, would be how to migrate the data from the current config.

You are right, it's probably overkill. It was just a matter of having started off with a 7-drive Z3 vdev and, when I ran out of space, being too lazy to reconfigure, so I kept adding the same set again until I ended up with what I have.

It's just easier to maintain this rather than go through a whole migration to a new config. I could probably reclaim a drive's worth of space by purging some old data and snapshots and cleaning up duplicates, but I just don't have the time for it.

Larger disks have a higher areal density, so sequential performance of 5x8TB vs 8x5TB in a RAID-Z can be quite similar. IOPS scale with the number of vdevs (since every head in a vdev must be positioned for each I/O), so with a single RAID-Z vdev the two layouts are again quite similar, roughly equal to one disk in both cases.

Mostly I would prefer fewer disks, due to lower power draw and a lower chance of disk failures. If you use multiple mirrors (RAID-10 style) it is different, as in that case more disks (more vdevs) mean more IOPS.
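A tiny model of that rule of thumb (my own illustration, with assumed per-disk numbers, not measurements):

```python
# Illustrative-only model: pool random IOPS scale with the number of top-level vdevs,
# roughly one disk's worth per vdev. The per-disk figure below is an assumption.
DISK_IOPS = 100   # rough random IOPS for a 7200rpm HDD (assumption)

def pool_iops(num_vdevs):
    return num_vdevs * DISK_IOPS

print(pool_iops(1))   # single RAID-Z vdev (5x8TB or 8x5TB alike): ~100 IOPS
print(pool_iops(4))   # four mirror vdevs ("RAID-10" style):       ~400 IOPS
```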

Without advanced features like Solaris sequential resilvering, resilvering is mainly limited by pool IOPS, so pool performance = resilvering performance. That is the same (roughly 1x a single disk) for 8x5TB as for 5x8TB, and data security with a RAID-Z2 is also similar in both cases, with a slight advantage for fewer disks.

I thought that if you lose one disk you could end up resilvering less data (a portion of 5TB instead of 8TB), isn't that right?

Data loss would only be the case with a classic JBOD or pooling without redundancy.

A real-time RAID like RAID-Z2 allows any two disks to fail without data loss, since two disks are there purely for redundancy to protect your data. A resilvering process is needed when you replace or repair a disk, to regain full RAID-Z2 data security after a failure.

An Open-ZFS resilver must read all the metadata of the whole pool to decide whether data must be repaired, and then read the affected data from redundancy to repair it. This is why it is extremely IOPS-sensitive (aside from the Oracle Solaris way of resilvering in genuine ZFS; see "Sequential Resilvering" for how that works).
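On the 5TB-vs-8TB question above: the resilver rate is roughly the same either way (IOPS-limited, about one disk's worth per vdev), so the duration mainly scales with how much data the resilver has to walk rather than with the raw capacity of the disk. A toy sketch of that proportionality, with all figures as illustrative assumptions:

```python
# Toy comparison, not a prediction: if the resilver is IOPS-bound, its duration scales
# with the amount of data it has to walk, not with the disk's raw capacity.
# Both the IOPS figure and the average block size are assumptions for illustration.
DISK_IOPS = 100
AVG_BLOCK_KB = 128   # ZFS default recordsize, assumed as the average block size

def relative_resilver_time(walked_tb):
    blocks = walked_tb * 1e9 / AVG_BLOCK_KB
    return blocks / DISK_IOPS        # seconds, used only as a relative measure

t5 = relative_resilver_time(5)
t8 = relative_resilver_time(8)
print(t8 / t5)    # ~1.6x longer when 8TB has to be walked instead of 5TB
```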
