I wanted to set up a raid01 configuration (a raid1 composed of two raid0s) with one of the raid0 volumes having the write-mostly flag set so all reads would go to the other (i.e. one leg is disk, the other flash). However, the whole plan has run into a problem: reads directly from the raid0 come out at the expected 64k per disk (the chunk size), but when I add a raid1 on top of the raid0, all the reads drop down to only 4k and performance is terrible. My guess is that md (or something else in the stack) has decided that 4k is the granularity for error handling and is issuing reads at that size, but that is just a guess. Either way, I need to find a way to fix it.
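For concreteness, the layout I'm after is roughly the following (the device names and the 64k chunk size are just stand-ins for my actual hardware):

mdadm --create /dev/md0 -l 0 -n 2 -c 64 /dev/sda /dev/sdb            # raid0 over disks
mdadm --create /dev/md1 -l 0 -n 2 -c 64 /dev/sdc /dev/sdd            # raid0 over flash
mdadm --create /dev/md2 -l 1 -n 2 /dev/md1 --write-mostly /dev/md0   # raid1 on top, disk leg write-mostly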

To test this I'm using a raid1 with only one side, for simplicity, i.e. it was created via

mdadm --create /dev/md2 -l 1 -n 2 /dev/md1 "missing"
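and I'm confirming it looks the way I intend (one active member plus a missing slot) with:

cat /proc/mdstat
mdadm --detail /dev/md2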

Another item of interest: running dd with bs=512K against the raid0 md1 array shows 64K reads on md1 and on all of its component disks, where I would have expected iostat to show 512K reads on md1 itself and 64K reads on its components. The same dd bs=512K from md2 shows 4K reads to everything. I'm computing the request size by simply taking MB/s divided by tps, which gives MB per transaction.
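For reference, the numbers above come from roughly this (the count is arbitrary):

dd if=/dev/md2 of=/dev/null bs=512K count=8192
iostat -m -d 5

Depending on the sysstat version, iostat -x also reports the average request size directly (avgrq-sz in 512-byte sectors, or areq-sz in KB), which should line up with the MB/s-divided-by-tps figure.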

update:
This looks to be an issue only for md on md. If I make a raid1 directly on a disk, its read rate is the same as the disk's. So I think I can reconfigure to raid10 (a set of raid1s combined into a raid0) instead of raid01 (two raid0s combined into a raid1).
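Concretely, that would look something like this, with each raid1 pairing one disk with one flash device (write-mostly on the disk so reads come from flash) and a raid0 striped across the pairs (device names are placeholders again):

mdadm --create /dev/md10 -l 1 -n 2 /dev/sdc --write-mostly /dev/sda
mdadm --create /dev/md11 -l 1 -n 2 /dev/sdd --write-mostly /dev/sdb
mdadm --create /dev/md12 -l 0 -n 2 -c 64 /dev/md10 /dev/md11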