Description of problem:
I've been testing the new "install to dmraid" feature. Very cool BTW.
My last round of testing was back in January; this week I installed FC5 test 3.
The install finally went OK after several attempts in which it would just hang
during package installation -- maybe another bug needs to be filed?
After the install, during boot, the "dm" commands in the initramfs's init
script don't appear to be doing anything.
The init has these commands:
mkdmnod
mkblkdevs
rmparts sda
rmparts sdb
dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
dm partadd nvidia_hcddcidd
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure VolGroup00
resume /dev/VolGroup00/LogVol01
I added echo statements such as "about to dm create", plus a "sleep 5" after
each of those commands.
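For reference, the instrumented init looked roughly like this (a sketch; the initrd init is a nash script, so "dm" and friends are nash built-ins, not shell commands):

echo about to mkdmnod
mkdmnod
sleep 5
echo about to dm create
dm create nvidia_hcddcidd 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
sleep 5
echo about to dm partadd
dm partadd nvidia_hcddcidd
sleep 5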
There is zero output from mkdmnod on down until "lvm vgscan" runs, which
produces this output:
device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel@redhat.com
Reading all physical volumes. This may take a while...
No volume groups found
Unable to find volume group "VolGroup00"
...
HOWEVER, when booting into the rescue environment, the dmraid set is brought up
and LVM is activated automatically and correctly.
In the rescue environment the output of "dmsetup table" is:
nvidia_hcddciddp1: 0 409368267 linear 253:0 241038
nvidia_hcddcidd: 0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0
VolGroup00-LogVol01: 0 4063232 linear 253:3 83952000
VolGroup00-LogVol00: 0 83951616 linear 253:3 384
nvidia_hcddciddp3: 0 176490090 linear 253:0 409609305
nvidia_hcddciddp2: 0 208782 linear 253:0 63
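For what it's worth, given that table it should also be possible to assemble the set by hand from a rescue shell with the stock dmsetup/kpartx tools, roughly like this (hypothetical command lines built from the table above; I have not tried exactly this sequence):

echo "0 586114702 mirror core 2 64 nosync 2 8:16 0 8:0 0" | dmsetup create nvidia_hcddcidd
kpartx -a /dev/mapper/nvidia_hcddcidd
lvm vgscan
lvm vgchange -ay VolGroup00

That would at least show whether the table line itself is good outside of nash.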
In another attempt to get more info, I commented out the "rmparts" lines in the
init script and tried a boot.
When I booted that, I did get the expected "duplicate PV found, selecting foo"
messages. I rebooted before any writes could happen (I think).
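As an aside, the duplicate-PV noise in that experiment could presumably be suppressed by telling LVM to ignore the raw member disks and only scan the mapped device, e.g. in /etc/lvm/lvm.conf (a sketch only; the device names are assumptions from this particular system):

# devices section of /etc/lvm/lvm.conf:
# accept the dmraid mapping (and its partitions), reject the raw sda/sdb members
filter = [ "a|^/dev/mapper/nvidia_hcddcidd.*|", "r|^/dev/sd.*|" ]

That obviously doesn't fix the real bug (the nash "dm" commands producing nothing), but it would make the rmparts-disabled boot safer to poke at.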