Hard Disk IDs in Linux

On 03/12/2013 12:28 AM, Dan Egli wrote:
> The thing I was trying to accomplish with the larger array was the large
> amount of disk space available. I.e. if I fill the enclosure with 32 4TB
> drives and make a single raid 6 I wind up with 120TB of available storage.
> On the other hand, if I was to make four equal arrays of 8 drives each in
> raid 6 then I lose not just 8TB total but 8TB per array, or 32TB total,
> leaving me with 96TB, which is still a good amount of storage, but also
> quite literally only 75% of the disk capacity. If anyone knows a good way
> to increase the disk capacity availability to raise it above 75%
> (preferably back up to the 90% range) I'm all ears.
The point of RAID is not, first and foremost, to give you a large amount
of storage; it's redundancy. I use it knowing I'm trading raw capacity
for that safety. So forget about 90%. You simply can't get 90% usable
space while keeping any realistic amount of the redundancy RAID is
supposed to provide.
Personally, if I had that many disks, I would do RAID10. Basically
that's striping across RAID1 mirrored pairs. Or build 4-disk RAID-6
sets and stripe across them: same 50% efficiency as RAID10, but each
set can survive losing any 2 of its 4 disks. Either way you keep half
the raw capacity, but I'd sleep way better at night.
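For comparison, here's a quick sketch of the usable-capacity arithmetic for the layouts discussed, using the 32 x 4TB example from above (the layout names and groupings are just illustrations, not a recommendation for any particular mdadm configuration):

```python
# Usable capacity for the layouts discussed, assuming 32 x 4TB drives.

DRIVE_TB = 4

def raid6_usable(disks, drive_tb=DRIVE_TB):
    """RAID6 keeps (n - 2) disks' worth of data per array."""
    return (disks - 2) * drive_tb

def raid10_usable(disks, drive_tb=DRIVE_TB):
    """RAID10 mirrors pairs, so half the raw capacity is usable."""
    return disks // 2 * drive_tb

raw = 32 * DRIVE_TB                  # 128 TB raw

single    = raid6_usable(32)         # one 32-disk RAID6
four_sets = 4 * raid6_usable(8)      # four 8-disk RAID6 arrays
raid10    = raid10_usable(32)        # 16 mirrored pairs, striped
raid60    = 8 * raid6_usable(4)      # eight 4-disk RAID6 sets, striped

for name, tb in [("single RAID6", single), ("4x 8-disk RAID6", four_sets),
                 ("RAID10", raid10), ("striped 4-disk RAID6", raid60)]:
    print(f"{name:>22}: {tb:3d} TB usable ({tb / raw:.0%} of raw)")
```

This reproduces the numbers in the quoted message: 120 TB for the single array, 96 TB (75%) for the four 8-drive arrays, and 50% for either of the mirror-based layouts.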
With 32 disks the failure statistics mean you will be replacing disks on
a fairly regular basis.
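As a back-of-the-envelope check on that claim (the ~3% annualized failure rate here is an assumed figure for illustration only; substitute the published AFR for your actual drive model):

```python
# Rough expected-failure arithmetic for a 32-drive enclosure.
# AFR = 0.03 is an ASSUMED per-drive annualized failure rate,
# used only to illustrate the scaling with drive count.

DRIVES = 32
AFR = 0.03

expected_failures_per_year = DRIVES * AFR
# Probability that at least one drive fails in a given year,
# treating drive failures as independent:
p_at_least_one = 1 - (1 - AFR) ** DRIVES

print(f"Expected failures/year:   {expected_failures_per_year:.2f}")
print(f"P(>=1 failure in a year): {p_at_least_one:.0%}")
```

Even with an optimistic AFR, 32 drives work out to roughly one replacement a year, and the odds of going a whole year without any failure are worse than a coin flip.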
Now I'm not sure why you need 120 TB in your home. Or is this disk
array of yours going to be for work? Are you going to put the disks in
a fibre channel array? iSCSI? Are you going to use some sort of fabric
to connect the drives to your computers? What file system do you plan
to use?