Any chance of a tool to create an initrd image for booting an LVM root
filesystem contained within a raid (md) device?
LVM (as of 0.9.1beta5) has lvmcreate_initrd, which creates a ramdisk image
with all the necessary LVM drivers (and also some init stuff to activate
the VG - essential if you're going to mount it!). However, it doesn't
include support for any other storage driver modules.
mkinitrd will create a ramdisk with the necessary RAID module support, but
(obviously) doesn't do anything in init to activate VGs.
I've worked around this by building a kernel with raid1 support built in
rather than loaded as a module, and using lvmcreate_initrd to generate the
image, but this isn't ideal given that I'd need to build a new kernel every
time the stock one was updated (stock Red Hat ships with MD support as modules).
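For reference, the workaround amounts to flipping the relevant MD options
from =m to =y when configuring the kernel - a sketch using stock 2.4 option
names, not Red Hat's actual shipped config:

    CONFIG_MD=y
    CONFIG_BLK_DEV_MD=y
    CONFIG_MD_RAID1=y
    # likewise CONFIG_MD_RAID0/CONFIG_MD_RAID5 for any other levels in use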
I guess there are many potential solutions to this; maybe Red Hat could
consider choosing one of the following:
a) Change stock kernel configuration to include MD support built in
b) Modify mkinitrd to make it "LVM aware", integrating the current
functionality provided by lvmcreate_initrd (see the sketch after this list)
c) Modify lvmcreate_initrd to include optional additional module support
(probably not viable, since maintenance of the LVM distribution is not
Red Hat's responsibility)
d) Create new tool, combining appropriate features of mkinitrd and
lvmcreate_initrd
e) Provide an "LVM aware" LILO! (certainly, the caveat in c) above applies
here too)
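For what it's worth, b) or d) would largely come down to having the generated
linuxrc bring the arrays up before activating the VG. A rough sketch of such
a linuxrc, assuming a shell-based image where the module path, md device, and
VG layout all vary per system (raidstart comes from raidtools and needs a
copy of /etc/raidtab inside the initrd; Red Hat's nash-based images have a
raidautorun built-in serving the same purpose):

    #!/bin/sh
    # load the RAID personality before anything touches /dev/md*
    insmod /lib/raid1.o
    # assemble the array holding the root VG's physical volume
    raidstart /dev/md0
    # find and activate all volume groups so the root LV appears
    vgscan
    vgchange -ay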
Reproducible: Always
Steps to Reproduce:
1. See description

I suspect that I am having a similar problem. During the installation process I
create RAID 1 arrays for /, /boot, /var, and /home. The installation proceeds
normally, but when I try to boot off of the hard drives I get:
autodetecting RAID arrays
autorun ...
... autorun DONE.
EXT2-fs: unable to read superblock
isofs_read_super: bread failed, dev=09:01, iso_blknum=16, block=32
Kernel panic: VFS: Unable to mount root fs on 09:01
Notice that no arrays were detected (dev 09:01 is major 9, minor 1, i.e.
/dev/md1). If I boot off of the CD in rescue mode, it recognizes and mounts
the arrays correctly. The same installation worked correctly under Fisher.

The "Disabling (U)DMA" issue turned out to be a failing hard drive, as hdb
failed shortly thereafter. I replaced it with an ST31220A 1GB drive and moved
on. I also reconfigured hda to be the Seagate drive and hdc to be the
remaining WD drive, with the CD-ROM at hdd.
As often happens, I was wrong. After many, many installations, this is what I
know now:
The error is the same error you get if the partitions/md devices do not exist.
The first autorun sequence failing out on all the md devices is apparently
"normal", since it happens during boot of a system with / on a non-RAID
partition. The bind<> and unbind<> statements occur in the same way under that
condition as well, so that is probably right too.
I hacked the 0.1.14 kernel into the installation; no change in symptoms.
The major difference between booting to a RAID / and a non-RAID / is the VFS
statement. With the RAID /, it just says:
VFS mounted root (ext2 filesystem).
With the non-RAID /, it says:
VFS mounted root (ext2 filesystem) (read-only)
and boots normally, with the other RAID-1 devices being set up in the second
autorun sequence. RAID 0 also does not work as a / partition. I know the
raid1.o and raid0.o modules are loaded into the initrd image. To me, the major
problem seems to be that the raid1 module is loaded after the attempt to mount
the root filesystem, which means that if that filesystem is an md device, it is
unbootable for all intents and purposes. Also, once the raid1 module is loaded,
no md devices are detected (even other non-/ partitions) by the autorun there,
which is not the case in a non-RAID / boot. The inability to boot a RAID device
as the root partition would seem to qualify as a Bad Thing(tm), since this
functionality has been present for a while now and people have come to rely on it.
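For anyone wanting to check their own image, the initrd can be unpacked and
its linuxrc read directly - a generic recipe, with image filenames varying by
kernel version (initrds of this vintage are gzipped ext2 filesystem images):

    gunzip -c /boot/initrd-2.4.x.img > /tmp/initrd.ext2
    mkdir -p /mnt/initrd
    mount -o loop /tmp/initrd.ext2 /mnt/initrd
    cat /mnt/initrd/linuxrc    # what runs, and in what order
    ls /mnt/initrd/lib         # which modules are actually present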
This situation prompts a few questions from me:
1) LVM is able to start from the initrd -- how much of the LVM functionality
is there already?
2) Why does VFS say that it mounted the / filesystem when it obviously did
not do so in a useful fashion? Why didn't it mount it read-only as is
specified in lilo.conf? Did VFS even mount / at all?
3) Why is the raid1 module loaded after LVM initialization instead of before
the first autorun sequence?
4) During the second autorun sequence, why are no md devices initialized, as
is normal with a non-RAID / boot? Checking /etc/fstab maybe?
5) Would a kernel with the RAID functionality compiled in work without using
the lvmcreate_initrd functionality, or is that required? (See the note after
these questions.)
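On question 5, for context: with the MD personalities compiled in and the
RAID partitions set to type 0xfd (Linux raid autodetect), the kernel
assembles the arrays itself before mounting root, so a plain RAID-1 / with no
LVM should in principle boot with no initrd at all - e.g. a lilo.conf stanza
along these lines (device names illustrative):

    boot=/dev/hda
    image=/boot/vmlinuz
        label=linux
        root=/dev/md0
        read-only

Root on LVM is different: activating a VG takes the userland vgscan/vgchange,
so an initrd is still needed in that case.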