I have a CentOS 6 server with onboard Intel RAID 5 using 5 Seagate server-grade 500GB hard drives. The first drive (drive 0 of the array) has failed (SMART message on power-up), and as-is the server will no longer boot. I am assuming that the boot sector used in this configuration is located on the first drive of the array, which would explain this behavior. The other 4 drives all report that they are in good condition.

I received an exact replacement model this week, and I was wondering whether swapping out the drives and booting to the CentOS Live disk would work, or would be the best option, to rebuild the array with the new disk. I have found a bit of info using Google, but none that seems to apply to my specific situation, and I really don't want to lose the current array by making a bad decision. Has anyone out there had this experience? Any advice is appreciated.

The motherboard is a SuperMicro MBD-X9SCL-F (LGA 1155, Xeon E3) with an Intel C202 / Nuvoton NCT6776F chipset, if that helps anyone. Not sure if it really matters in this instance, but it is running 64-bit CentOS 6, installed over 4 years ago, just before the release of CentOS 7. I do know that there is a correct driver on the Live disk, as the RAID was properly recognized when the OS was installed.

After research and experience: the following would have worked had the drives still been listed as 'online member' disks. The Intel RAID driver modules are included on the Live DVD, so after replacing the failed drive and booting the system with the Live disk, the array would have been rebuilt automatically; the disk-monitor component of the driver module detects the new drive and starts repairing the array on its own.
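If you want to confirm that the automatic rebuild has actually started, Intel firmware RAID normally shows up under Linux through mdadm's IMSM support, and /proc/mdstat carries a "recovery = N%" line while the repair runs. A minimal sketch, assuming that layout (the helper name and the sample mdstat text below are mine, for illustration):

```shell
# Hypothetical helper: pull the rebuild-progress figure out of mdstat-style
# text. On a live system you would feed it the real file:
#   rebuild_progress < /proc/mdstat
rebuild_progress() { grep -o 'recovery = *[0-9.]*%' | head -n 1 ; }

# Illustrative sample of what /proc/mdstat looks like mid-rebuild
# (md126 is a typical, but not guaranteed, name for the IMSM volume):
sample='md126 : active raid5 sde[4] sdd[3] sdc[2] sdb[1] sda[0]
      [==>..................]  recovery = 12.3% (61440000/488386584) finish=120.5min'
printf '%s\n' "$sample" | rebuild_progress
# → recovery = 12.3%
```

If that line is ticking upward, the driver is doing its job and you should leave the machine alone.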

In my case, I had a mishap: I got no response from anyone here on the forum and had no luck finding information that applied to my situation. The drives ended up in an 'offline member' condition after I booted the Live DVD and poked around looking for a manual option to trigger the rebuild. The rebuild had apparently started automatically without me realizing it, and I shut the system down before it had time to complete. That appears to be what caused the 'offline member' condition on the RAID configuration boot screen.

So the procedure would be:

1. Remove the failed drive.
2. Replace it with the new drive (do nothing at the RAID boot config screen, just hit whatever key is prompted to continue).
3. Boot the machine (to the installed OS if it is bootable, or use a Live DVD if it isn't).
4. Log in and walk away from the damn thing (make sure the power settings don't allow it to go to sleep).
5. Go home, or watch TV if you are already home, drink a beer, do whatever, but leave the machine alone. Then go to bed, and check the next morning that the operation was successful.
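For the next-morning check in the last step, a rebuild still in flight shows a "recovery" (or "resync") line in /proc/mdstat; once that line is gone, the array is clean. A small sketch, again assuming the IMSM/mdadm layout (the helper name is mine):

```shell
# Hypothetical check: succeed only when mdstat-style text no longer
# reports a rebuild/resync in progress. Real usage would be:
#   rebuild_done < /proc/mdstat && echo "array is clean"
rebuild_done() { ! grep -q 'recovery\|resync' ; }

# A finished array has no recovery line, so this prints the message:
printf '%s\n' 'md126 : active raid5 sde[4] sdd[3] sdc[2] sdb[1] sda[0]' \
  | rebuild_done && echo "array is clean"
# → array is clean
```

For a fuller picture, `mdadm --detail` on the array device will also report its state and any rebuild status.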

I hope this saves someone a headache down the road. BTW, this seems to apply to both CentOS 6 and 7. Thanks for all the help, everyone.