So I've had this system set up and running for a couple of years now, and everything has been happy, with the setup based on http://en.gentoo-wiki.com/wiki/RAID/NVRAID_with_dmraid#Building_the_Kernel. It's using the nvidia raid built into the motherboard, running a 4-disk raid0. Then I foolishly upgraded to grub2 without knowing much about it, and now everything has gone... well, south. I tried uninstalling grub2 and reverting to plain grub, but no matter how many files I wiped and how many times I reinstalled, I end up with grub2. While both are currently installed, it would seem that grub2 is being used and grub is pseudo- (mostly not) functional.
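For anyone curious, the revert I was attempting went roughly like this (Gentoo slots the two grub versions separately; the slot names and the grub shell steps are my assumptions from the wiki, not a verified recipe):

emerge --unmerge sys-boot/grub:2
emerge sys-boot/grub:0
grub
grub> root (hd0,0)
grub> setup (hd0)
grub> quit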

The problem started when I was upgrading my kernel from 3.5.7; every release after that has been unsuccessful, the latest being 3.7.10-r1. I've tried using the config from the previous working kernel with no success. I ran make clean and make mrproper and then let the kernel figure things out with a default config, which also failed. I then modified that config, adding some raid and SATA modules, but every attempt ends with an error I'm just not understanding. When I boot, I get the message: /dev/mapper/nvidia_jeaacgah3 is not a valid root device. Could not find the root block device in .
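For reference, these are the kernel options I understand need to be built in (=y, not as modules) for the dmraid root to show up at boot. This is my checklist from the wiki setup, so treat it as a starting point rather than gospel:

CONFIG_BLK_DEV_DM=y  (device mapper core, needed for /dev/mapper)
CONFIG_BLK_DEV_SD=y  (SCSI disk support)
CONFIG_SATA_NV=y     (the nForce SATA controller driver)
CONFIG_EXT2_FS=y     (for /boot)
CONFIG_EXT4_FS=y     (for /)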

To me this says the kernel loaded - that's where the error message came from - but it's just not seeing the raid device for some reason. When I drop into the shell and list /dev, sure enough there is no /dev/mapper directory; I also noticed there's no /dev/ram0 either. I have to be missing something obvious that I'm just not seeing, but after days of trying everything under the sun, I'm still coming up empty. Luckily my old 3.5.7 kernel will boot, but now with other package upgrades I've lost my display for X. Anyone willing to chuck a dog a bone with some ideas?
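For what it's worth, this is what I poke at from the rescue shell (dmraid will only be present if the initramfs was actually built with --dmraid):

dmraid -ay           (activate all detected raid sets)
ls /dev/mapper       (the array and its partitions should appear here)
cat /proc/partitions (confirms the kernel at least sees sda-sdd)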

I know it's odd that there are 2 raid controllers, but this motherboard (ASUS) has two built in. The Nvidia raid can do just about any raid type across the 4 SATA ports and across the 4 IDE ports, and can also stack raid configs, although I'm just running a raid0 on the 4 SATA ports. The Silicon Image raid can be used to mirror the array to an external SATA port, which lets you hotplug a drive for backup, but I'm not using that one.

So sda-sdd are the drives making up the raid0, which is then seen as /dev/mapper/nvidia_jeaacgah. sde is just a small drive I try to use to run Windows XP, since it's the only thing that will run Real Flight, only I haven't had much success with that (key expired and I won't pay for Windows). sdf and sdg are both external USB storage drives. The raid0 is partitioned with /boot, swap, and /, with root using ext4 and /boot using ext2.
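For completeness, my fstab is along these lines (the 1/2/3 partition numbers reflect my /boot, swap, / layout; yours may differ):

/dev/mapper/nvidia_jeaacgah1  /boot  ext2  noauto,noatime  1 2
/dev/mapper/nvidia_jeaacgah2  none   swap  sw              0 0
/dev/mapper/nvidia_jeaacgah3  /      ext4  noatime         0 1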

I believe that's all the relevant settings, though I'd be happy to provide more. I used the following to compile the initramfs using genkernel:
genkernel --dmraid --install --kernel-config=/usr/src/linux/.config initramfs
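One sanity check worth doing is confirming the dmraid binary actually landed inside the image (adjust the filename to whatever genkernel wrote to /boot; I'm assuming the default gzip-compressed cpio format here):

zcat /boot/initramfs-genkernel-* | cpio -t | grep dmraid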

This is different from how I had it before, although it doesn't seem to make a difference how it's set. When it was working, I didn't have an md0 entry. Can anyone point me to where all my devices went on boot? Device heaven maybe? I can't seem to get them to show up in /dev. Any help at all would be greatly appreciated. Thanks.

Yeah, that was my thought as well, and my Hail Mary last-charge attempt. The results were... unexpected. After having genkernel just handle the compiling of everything, it still failed the same way, but at least before it failed it loaded modules. Apparently among those modules was the raid support: the raid device was there when I checked from the shell, and I could manually specify it as the device to boot from (/dev/mapper/nvidia_jeaacgah3). Funny that it would boot when entered manually, yet it's the exact same thing configured in grub.
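For comparison, the grub2 entry boils down to something like this (the kernel/initramfs file names and the (hd0,1) root are placeholders from my layout; dodmraid is the genkernel flag that tells the initramfs to run dmraid, and real_root is the same device I typed by hand):

menuentry 'Gentoo 3.7.10-r1' {
    set root=(hd0,1)
    linux /kernel-genkernel-3.7.10-gentoo-r1 dodmraid real_root=/dev/mapper/nvidia_jeaacgah3
    initrd /initramfs-genkernel-3.7.10-gentoo-r1
}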

I'm thinking that maybe grub doesn't have the drives and partitions set up correctly. Grub2 does things a bit differently and doesn't seem to play very well with raid arrays. The boot looked okay though, other than that I have zero modules loaded; X still won't start, and the SSH service fails regardless of what I do, saying "no such device". During this last batch of updates I did upgrade to the new udev, so it's likely related to that, although my NIC interfaces are both up and running.
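Next on my list is rebuilding everything that links against the kernel and X, since stale modules would explain both symptoms (these portage sets should exist on a current portage; this is a guess on my part, not a confirmed fix):

emerge -1 @module-rebuild
emerge -1 @x11-module-rebuild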

So I'm sort of fixed. Hacked to mostly operational anyway. Thanks for the advice and help.