I have a 6-drive RAID 5 array; I added 3 drives to it a few days ago, and I already have a failure! I don't actually think the drive has really failed; I just want it to try to rebuild the array onto that drive. Here is my /proc/mdstat:

OK, so I shut the box down, went to the machine, opened it up, and checked that all the cables were connected, then powered it on and went into the BIOS, which saw all 6 drives. I let it boot, and it showed that /dev/sdd3 had failed, just like the first time. Since it had already been removed from the array, I just added it back (/sbin/mdadm /dev/md2 -a /dev/sdd3), and it worked! It is now rebuilding the array. I have no idea why it behaved this way; it had been running for about a year with just the original 3 drives without any issues.
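For the record, the rough sequence I'd use for this (device names as on my box; the smartctl check assumes smartmontools is installed) is:

    # sanity-check the drive itself before trusting it again
    smartctl -a /dev/sdd

    # see which member md thinks has failed/been removed
    /sbin/mdadm --detail /dev/md2

    # re-add the partition; md kicks off a resync on its own
    /sbin/mdadm /dev/md2 -a /dev/sdd3

    # watch the rebuild progress
    watch cat /proc/mdstat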

It appears the culprit was the cheap SATA cables that came with the new drives. I rebooted the server again and one of the drives wasn't detected at all, so I powered it off and opened it up again. The new cables fit a bit looser than the old ones; after reseating them and powering up again, it detects all the drives.
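If it happens again, something like this should show whether a cable is flaky (the exact messages vary by controller, so this is just a rough filter):

    # look for ATA/link errors that usually accompany a loose or bad SATA cable
    dmesg | grep -i -E 'ata[0-9]|link|reset'

    # or watch the log live while reseating a cable
    tail -f /var/log/messages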

Now the system won't boot: it says 2 of the 6 drives have failed and it cannot find the superblock. I also noticed that it reported no superblock on sdd, sde, and sdf. It showed the raid456 error before as well; I believe that to be spurious.
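To see which members still have a readable superblock, something like this should work (I'm assuming all six members use partition 3, the same as sdd3):

    # print the md superblock (if any) on each supposed member
    for p in /dev/sd[abcdef]3; do
        echo "== $p =="
        mdadm --examine "$p"
    done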

I was able to repair the array! I first tried an Ubuntu live CD, but 'mdadm --assemble --scan -fv' wouldn't repair it from there, as it didn't see enough drives. I then plugged in an IDE/PATA drive, installed the same version of CentOS (5.5) onto it, and ran mdadm --assemble --scan -fv again; this time it said it found 5/6 and was able to do something, and then the machine crashed/locked up. I rebooted, and it wouldn't boot to the console, but since it had said it fixed something on the array, I removed the IDE drive and turned the box back on.
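For reference, the force assemble plus the follow-up re-add went roughly like this (device names as on my box; the last step re-adds whichever member was left out of the 5/6, so the partition name is a placeholder):

    # force-assemble from whatever members still look sane
    mdadm --assemble --scan -fv

    # once the array is running degraded, re-add the member that was left out
    mdadm /dev/md2 -a /dev/sdX3

    # keep an eye on the resync
    cat /proc/mdstat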