With the stabilization of baselayout 2, things did not go well with the ramdisk. It turned out to be a simple fix, but it looked pretty bad. I have posted an edited script that should fix the problem, but if you have already done the upgrade and everything fell apart, you can bring the system up with the following steps:

Boot the system into lvm2rescue mode (add lvm2rescue to the kernel line in grub). From the lvm2rescue command prompt, type:

Code:

umount /sys
exit

Baselayout-2 did seem to do me in. Up until then I had been using my own fixes to the LVM2-included script, but it sounds like I'm running into exactly what was mentioned: I can build an initrd under BL1 and boot it under BL2, but anything I build under BL2 won't boot. (I have working kernels/initrds, so I can try the new ones and still fall back and prove everything is, in some sense, sane.)

I use both md and lvm2. The md stuff seems fine, but then LVM2 says "no volume groups found" and falls over to the fallback shell.

Looking at the rc.log on success and what you've mentioned, I think I've got something wrong related to udev/device mapper.

Two things I noticed but don't entirely understand:

- With my old setup, I see errors about /dev/vg/* not being created by udev, but everything works okay. I'm afraid I don't quite understand the relationship between /dev/vg/name and /dev/mapper/vg-name.
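For what it's worth, my understanding (an assumption on my part, not something from the script's docs) is that /dev/mapper/vg-name is the node device-mapper creates directly, while /dev/vg/name is just a convenience symlink that udev (or vgscan --mknodes) adds afterwards, pointing back at the mapper node. A tiny sketch, with a hypothetical resolve_lv helper:

```shell
# resolve_lv: print the canonical block device behind an LV path.
# /dev/vg/name is normally just a symlink to /dev/mapper/vg-name
# (or /dev/dm-N), so both spellings resolve to the same device.
resolve_lv() {
    readlink -f "$1"
}
```

On a healthy system, resolve_lv /dev/vg/root and resolve_lv /dev/mapper/vg-root should print the same node.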

- In the past I had lvm2root set to the /dev/mapper path, but it sounds like I should be using a /dev/vg path (if that matters)?

Not sure what you are actually running into, but I solved the /dev/vg/... vs. /dev/mapper/... conflict by only using the /dev/mapper/... ones. They always seem to be there, whereas the /dev/vg/... ones get added at some later time. The /dev/vg/... names are easier to type and I use them from the command line, but scripts and fstab entries get the mapper version.
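For example, with a hypothetical volume group vg holding root and home volumes, my fstab entries look like this (device names, filesystems, and options are illustrative):

```
# /etc/fstab -- hypothetical vg-root / vg-home logical volumes
/dev/mapper/vg-root   /       ext3   noatime   0 1
/dev/mapper/vg-home   /home   ext3   noatime   0 2
```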

As to "no volume groups found": you might need to add an lvm.conf to your initrd and/or add the device files for the disks that contain your PVs.
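Something like the following sketch is what I have in mind, assuming the initrd is staged in a build tree before being packed. The stage_initrd_extras helper, the paths, and the md major/minor numbers (major 9 is md) are all illustrative, not what the script actually does:

```shell
# Copy lvm.conf and create PV device nodes in an initrd staging tree.
# mknod needs root, so node creation is skipped when run unprivileged.
stage_initrd_extras() {
    stage=$1   # initrd build tree, e.g. /tmp/initrd-stage
    conf=$2    # lvm config to ship, e.g. /etc/lvm/lvm.conf

    mkdir -p "$stage/etc/lvm" "$stage/dev" || return 1
    cp "$conf" "$stage/etc/lvm/lvm.conf" || return 1

    if [ "$(id -u)" -eq 0 ]; then
        [ -e "$stage/dev/md0" ] || mknod "$stage/dev/md0" b 9 0
        [ -e "$stage/dev/md1" ] || mknod "$stage/dev/md1" b 9 1
    fi
}
```

After staging, repack the tree with cpio/gzip as usual.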

I noticed that with this initrd, /proc/mdstat doesn't show the md devices, though they do seem to be detected via mdadm -Es.

I'm not sure whether that's related, though, since with both this and the 'classic' initrd create, pvscan/pvs returns nothing whether anything is in mdstat or not. That is proving hard to track backwards; I can't tell exactly what it's doing behind the scenes.
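When I hit that, the way I narrow it down from the fallback shell is to run each step by hand and watch where the devices disappear. This is my guess at the sequence, not something the script guarantees:

```shell
# Manual bring-up from the initrd fallback shell. Each step's output
# tells you where things break: arrays first, then PVs, then the VGs.
manual_bringup() {
    command -v mdadm >/dev/null 2>&1 || { echo "no mdadm here"; return 0; }
    mdadm --assemble --scan    # arrays should now appear in /proc/mdstat
    cat /proc/mdstat
    pvscan                     # PVs on the arrays should be listed
    vgscan --mknodes           # recreate the /dev nodes if udev did not
    vgchange -ay               # activate the volume groups
}
```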

It seems like the device major/minor numbers are different, but I'm not sure what to make of that ...

Something to try from your initrd environment is fdisk -l. If that shows your devices, then things should work. I'm guessing it won't find your disks correctly, which means you will need to add some device files, either for the md devices or for something else.
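A couple of other read-only checks that pair well with fdisk -l in that environment (nothing here modifies anything):

```shell
# Read-only sanity checks from the initrd shell.
fdisk -l 2>/dev/null             # partition tables on the raw disks
cat /proc/partitions             # the kernel's own view of block devices
ls -l /dev/md* /dev/sd* 2>/dev/null || echo "some device nodes are missing"
```

If /proc/partitions lists the disks but /dev has no matching nodes, you know it's the device files rather than the kernel drivers.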

Meh. Pretty much user error: I wasn't asking for the necessary md devices to get things started. Not sure why it worked in the past, since the old script had pretty much the same format. (Then, on top of that, I messed up the md conf while trying to debug.)

I do get

Code:

The link /dev/vg/root should had been created by udev but it was not found. Falling back to direct link creation.

on boot for all the logical volumes. It appears to be benign, and all the search results seem inconclusive ...

Yeah, sorry: aside from the weird warnings, it's working fine. I just migrated a more complex server and that went smoothly too.

Thanks for maintaining the script. It takes a load off my mind; I'd been worried about my own hacks.

One comment for posterity: cutting and pasting the script caused a minor problem for me. For some reason I was getting a space at the end of the lines that terminate the here documents, which kept bash from recognizing them. Not a big deal, but it would be a little easier if you could put the script in a pastie or a GitHub gist.
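To illustrate the failure mode: a here-document only ends on a line that matches its delimiter exactly, so "EOF " with a pasted trailing space never terminates it and bash keeps reading. A hypothetical clean_paste helper that strips trailing whitespace before running the script works around it:

```shell
# Strip trailing whitespace from every line of a pasted script; a
# here-doc terminator like "EOF " (note the space) never matches "EOF".
clean_paste() {
    sed 's/[[:space:]]*$//' "$1" > "$1.clean"
}
```

For example: clean_paste pasted.sh && sh pasted.sh.clean.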

I've switched to using the kernel's swap-suspend support, and I currently have an encrypted disk on my laptop. Using grub2 and this initrd script, I'm able to have a fully encrypted system (minus the MBR and grub's 8 MB module partition).