> Why do you automatically distrust hardware raid?
it's that I trust software raid more. upgradability, ability to
audit source code, portability to other machines, etc. there are
certainly still some cases where the host cpu is scarce enough
to want to avoid the processing overhead. and in principle, I believe
HW raid _could_ be faster if the PCI* bus (or memory) is the bottleneck.
let me put it this way: do you know what goes on in your HW raid's
firmware? personally, I like sausage, but often avoid eating it because
I hate to notice the bits of unidentified clearly-not-meat foo in it.
imagine trying to verify that a HW raid controller actually does
perform online parity verification. it would involve moving disks
out of the array to be carefully corrupted on another machine before
being returned to test. with SW raid, I can audit the code, even add
counters and diagnostics, stop the raid, corrupt a parity block, and
start it again to verify it all works...
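to make that concrete, the scrub logic boils down to something like this toy sketch (python, everything here invented for illustration -- real md works on sectors, not 16-byte blocks, but the parity math is the same XOR):

```python
# toy model of an md-style "check" pass: with raid5, parity is the
# XOR of the data blocks in a stripe, and a scrub just recomputes it
# and counts mismatches (md exposes that count via sysfs as
# mismatch_cnt; block sizes and names here are made up).
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-sized byte blocks together -- raid5's parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def scrub(stripes):
    """recompute parity for each (data, parity) stripe, count mismatches."""
    return sum(1 for data, parity in stripes if xor_blocks(data) != parity)

data = [bytes([1] * 16), bytes([2] * 16), bytes([4] * 16)]
good = (data, xor_blocks(data))        # healthy stripe: parity agrees
bad = (data, bytes(16))                # "corrupted" parity block

assert scrub([good]) == 0              # clean array: nothing to report
assert scrub([good, bad]) == 1         # scrub notices the bad stripe
```

with real md the equivalent is `echo check > /sys/block/md0/md/sync_action` and then reading `mismatch_cnt` -- the point being that every step is visible and testable from the host.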
mostly though, it seems clear that a lot of the hardening aspects of
ZFS are increasingly important. it's also a "doh!" kind of realization
that raid should be done on a per-file basis. (consider the benefits
of writing as raid1, which later transparently transforms to raid5
or some form of FEC. also consider that it's rare for user-level codes
to choose the right blocksize and alignment to permit full-stripe
raid5 writes, but easy to arrange if you're doing it in the filesystem.)
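a rough cost model makes the full-stripe point obvious (hypothetical function, just illustrating the read-modify-write penalty for raid5 -- new_parity = old_parity ^ old_data ^ new_data when you can't write the whole stripe):

```python
# rough I/O cost of a raid5 write across n_data data disks.
# a full-stripe write computes parity from the new data alone;
# a sub-stripe write must first read the old data blocks and the
# old parity to do the read-modify-write. (counts in blocks;
# this ignores the alternative reconstruct-write strategy.)
def raid5_io_ops(write_blocks, n_data):
    if write_blocks == n_data:
        # full stripe: write n_data data blocks plus 1 parity, no reads
        return {"reads": 0, "writes": n_data + 1}
    # partial stripe: read old data + old parity, write new data + parity
    return {"reads": write_blocks + 1, "writes": write_blocks + 1}

print(raid5_io_ops(4, 4))   # full-stripe write: 0 reads, 5 writes
print(raid5_io_ops(1, 4))   # one-block update: 2 reads, 2 writes
```

user-level code almost never lines its writes up to hit the first case, but a filesystem that places blocks itself can arrange it routinely.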
regards, mark hahn.
> -----Original Message-----
> not me. raid is too important to be trusted to hardware - have you
> tried MD's check/scrub features? though afaik it doesn't have a way
> to switch on verification during normal reads (or, for that matter,
> verify-after-write.)
> in any case, I suspect that the trend is away from block-level raid...