a) Instead of RAID 1 plus RAID 0 being called RAID 10, I recommend that RAID 5 times RAID 2 be used. And just for clarity, I am not talking about www.imdb.com/title/tt2265171/

b) RAID 5 writes will almost certainly be significantly slower for software RAID; however there are a number of high end controllers where the difference is completely negligible due to these calculations (on the order of 0.03% to 0.1%).

Especially since introducing the new position almost seems like a test, as if someone above had had enough of the warnings and decided to hire someone experienced from outside to see what that person would say after a few days. Nice solution: if everything had been fine, the manager wouldn't even have noticed the distrust, and his team would actually have been strengthened!

I call BS on this. There's no way a PHB like this would let his underling (the sysadmin) contradict him and make him loose face in front of a new recruit. Also, where's the HTML meta commentary that gives more back story?

"however there are a number of high end controllers where the difference is completely negligible due to these calculations" It's not the parity calculation (a 10 year old Intel CPU can XOR gigabytes per second), it's the read-modify-write-cycle. You have to read all the blocks in a stripe, modify the one bit that changed in the database, calculate a bit of parity and write two blocks to disk (the one with the parity and the one where the changed bit resides).

There are ways to partially mitigate that (copy-on-write, for example, or buffering at least the unsynchronized writes), but hardware RAID usually doesn't do that.
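To make the read-modify-write cycle concrete, here's a minimal Python sketch of the RAID5 small-write parity update. The helper names (`xor_blocks`, `small_write`) are mine, and bytes objects stand in for disk blocks. The XOR really is trivial; the expensive part is the two reads the update forces before the two writes can go out.

```python
# Minimal sketch of the RAID5 "small write" read-modify-write cycle.
# Hypothetical helper names; bytes objects stand in for disk blocks.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-sized blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    """Update one data block: 2 reads (old data, old parity) feed the
    parity update, then 2 writes (new data, new parity): 4 IOs total."""
    # P_new = P_old XOR D_old XOR D_new
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    return new_data, new_parity

# Demonstration on a tiny 3-disk stripe: parity = d0 XOR d1
d0, d1 = b"\x0f\x0f", b"\xf0\x01"
parity = xor_blocks(d0, d1)
new_d0 = b"\xaa\xbb"
written_d0, new_parity = small_write(d0, parity, new_d0)
# The updated parity still reconstructs the untouched block d1
assert xor_blocks(written_d0, new_parity) == d1
```

The identity `P_new = P_old XOR D_old XOR D_new` is why only two blocks need to be read, not the whole stripe, but two reads before every write is still the penalty being discussed here.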

Depending on the memory available to the database server (and the limits set for the database software), the performance of the APPS share might be acceptable, provided there's enough RAM to cache most of it, but I still wouldn't do that for a company that can afford a SAN...

Makes me wonder what kind of "brief investigation" happened. Outside group of tech people brought in to look at the existing policies? Someone outside trying to FTP into the "firewalled" server? Or maybe they just talked to all of the developers, most of whom probably knew that things were bad and hadn't said anything...

What isn't clear to me is why RAID 10 would be faster for I/O. If you have 6 disks in RAID10, you are striping across 3 disks. In RAID5, you are striping across 5. Wouldn't reading/writing 60% more disks in parallel make the XOR calculation a drop in the bucket?

If you're doing sequential IO that is true, and RAID5 will give you more throughput for that workload, since you are writing entire stripes at a time and the only overhead is the XOR, which is trivial.
With random IO, you are generally only modifying one unit in the stripe for each request, so you have to read either (a) all the other data disks in the stripe or (b) the old value of this data strip and the current parity in order to determine what the new parity value should be.
So your overhead in service time is that you need to perform at least two reads before your modify-write, versus just writing to two drives, so in absolute terms it takes more than twice as long. Your overhead in aggregate is that you are performing at least double the number of operations for each requested IOP, so throughput is guaranteed to be lower than RAID10's, and probably much lower if you are using the simpler read-whole-stripe technique, which I believe is more common.
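The operation counts above can be tallied as simple arithmetic. This is a hypothetical back-of-the-envelope sketch (the function names are mine, the 6-disk array is just an example, and controller caching is ignored):

```python
# Physical IOs needed for a single random block write, per scheme.
# Hypothetical tally; assumes a 6-disk array and no controller caching.

def raid10_write_ios() -> int:
    # Write the block to both mirrors: 2 writes, 0 reads.
    return 2

def raid5_small_write_ios() -> int:
    # Read old data and old parity, then write new data and new parity.
    # The two writes cannot start until the two reads complete.
    return 2 + 2

def raid5_read_whole_stripe_ios(data_disks: int) -> int:
    # Alternative: re-read the *other* data blocks in the stripe, then
    # write new data and new parity. A 6-disk RAID5 has 5 data disks.
    return (data_disks - 1) + 2

assert raid10_write_ios() == 2               # RAID10: 2 IOs
assert raid5_small_write_ios() == 4          # RAID5: double the IOs...
assert raid5_read_whole_stripe_ios(5) == 6   # ...or triple, this way
```

Four (or six) IOs versus two is where the "at least double the number of operations" claim comes from.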

True, but it entirely depends on how much data you want to write. If you had a stripe size of 128KB and a RAID 5 with 5+1 disks, every time you write at least 640KB consecutively, you don't need to read the parity stripe, because it will be overwritten anyway. RAID5 will be faster because you can write to 5 disks at the same time.
Now, if you had a large amount of data to write, a dedicated parity disk would become the bottleneck (because for each 640KB block you have to write one 128KB stripe to this particular disk). However, the cache of the RAID controller will probably handle this.
Conversely, if you only have 5KB to write, you'd read a stripe from each of the six disks in order to calculate parity and then write back to six disks.
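A sketch of that full-stripe case (hypothetical helper names, 5+1 layout as in the example above): when a complete stripe of new data is in hand, parity is computed purely from that data, so zero reads are required.

```python
# Full-stripe write sketch for a 5+1 RAID5 array: parity is derived
# entirely from the five new data blocks, so no reads are needed.
# Hypothetical helper names; bytes objects stand in for disk blocks.
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def full_stripe_write(data_blocks: list) -> list:
    """Return the six blocks to write: five data plus one parity."""
    parity = reduce(xor_blocks, data_blocks)
    return list(data_blocks) + [parity]

stripe = [bytes([i]) * 4 for i in range(5)]  # five 4-byte data blocks
out = full_stripe_write(stripe)
assert len(out) == 6
# XOR of any five of the six blocks reconstructs the sixth, e.g. block 2:
others = out[:2] + out[3:5] + [out[5]]
assert reduce(xor_blocks, others) == stripe[2]
```

This is why large sequential writes sidestep the read-modify-write penalty entirely, while small scattered writes pay it on every request.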

Which makes RAID5 good for file servers or other applications handling large amounts of data, but not so good for database servers, where only bits change all over the storage area.

Personal story: We had one server go down once because of a defective RAID(5). I discovered that the controller was configured incorrectly and thus did not warn of the drive failure. While repairing it, I found that nearly all disks had a huge number of bad sectors.
There was the exact same server, same age, same disks, same configuration - and when I checked it, all disks had ZERO sector problems. One was hosting a SQL database and the other was a file server.

Addendum: Not to be misunderstood: a dedicated parity drive is obsolete in any modern controller; that layout was called RAID4 back then. I just mentioned it as an example of where large writes could become a bottleneck.
Also, controllers nowadays don't read all disks to calculate the parity information, so the performance impact isn't as huge as it once was.

@Fact Checker why on earth would the PHB loosen his face? I mean, I get that older PHBs can have wrinkles, but surely loosening a whole face would be just too garish for the typical working environment?

Fired for running a very quick disk benchmark in that situation? William, is that you again?

You do realize the disks themselves probably hit max load often anyway, and that a very quick benchmark to document the problem is nothing compared to the long-term performance impact, right? William, you were rightfully fired in this situation, even though the guy you put through hell had to run a benchmark to get the point across.

You had me until you mentioned the SAN. That they didn't immediately move to AWS RDS or some other managed DB, and instead had to deal with LUNs and figuring out which DB files need to be on which spindles, etc., means they were wasting a lot of time on administration they didn't have to do. And God forbid that Hitachi SAN has a hiccup and your Oracle DB goes in the shitter.