Bug Description

In VMware I tried a clean install of Dapper Flight 3, selected "erase all my drives and use lvm", and the resulting system fails to boot.

Error:
ALERT! /dev/mapper/Ubuntu-root does not exist. Dropping to a shell!

Two things here: first of all, it doesn't actually drop me to a shell. Second, I've noticed that some lvm scripts exist in /usr/share/initramfs-tools/scripts/*, but there is no hook for them, and unpacking the initrd I find no vgscan/vgchange or anything like that.

I tried dist-upgrading from Breezy with an LVM root and received similar errors, leaving an unbootable system. After stumbling around for a bit I decided to just do a clean install from Dapper media, which worked fine for me (again with an LVM root).

I had the same problem with a system dist-upgraded from Breezy. There is
no /dev/mapper directory, and vgchange -a y says "No volume groups found".
Fortunately, I could still boot with an older kernel, and
dpkg-reconfigure linux-image-2.6.15-14-686 fixed it.

I had this problem upgrading from Breezy to Dapper: at boot I get the message "/dev/hda5 does not exist". I think it is caused by installing a new kernel image with the new Dapper udev; the generated initramfs lacks the right /dev/*** links. What can I do now? My PC is unbootable...

Adam, do you believe this bug to be part of the class which should be fixed by this change?

initramfs-tools (0.40ubuntu29) dapper; urgency=low

* Make "update-initramfs -u" try to find the running kernel *after* it
attempts to search the symbolic link list and its own sha1 list.
Using this as a fallback, rather than the default, should solve most
upgrade issues, where people found their initramfs was half-baked.

-- Adam Conrad <email address hidden> Wed, 19 Apr 2006 13:51:35 +1000

If not, please gather details from the bug submitters and find out what happened.

Well, this class (half-baked initrd) of bug manifests in two ways. The first way is that when doing an upgrade of a system to a new kernel *and* new udev, depending on the order the packages were unpacked, you may end up with a new kernel but an old udev in the initramfs. The above change fixed that, so dapper kernels should always work now after a breezy->dapper upgrade.

The second way this fails, though, is that all the packages calling "update-initramfs -u" during upgrade will upgrade the OLD kernel's initrd if the new kernel hasn't been installed yet, so the old kernel can become unbootable (thanks to udev not being backward-compatible). Other than backing out all the update-initramfs magic, or getting dpkg hooks, I'm not sure how best to solve this case.

We can probably get the upgrade tool to try to intelligently order upgrades to work around this, but that won't solve the classic "apt-get dist-upgrade" case.

On Thu, May 04, 2006 at 01:07:46AM -0000, Adam Conrad wrote:
> Well, this class (half-baked initrd) of bug manifests in two ways. The
> first way is that when doing an upgrade of a system to a new kernel *and*
> new udev, depending on the order the packages were unpacked, you may
> end up with a new kernel but an old udev in the initramfs. The above
> change fixed that, so dapper kernels should always work now after a
> breezy->dapper upgrade.
>
> The second way this fails, though, is that all the packages calling
> "update-initramfs -u" during upgrade will upgrade the OLD kernel's initrd
> if the new kernel hasn't been installed yet, so the old kernel can become
> unbootable (thanks to udev not being backward-compatible). Other than
> backing out all the update-initramfs magic, or getting dpkg hooks, I'm not
> sure how best to solve this case.

Could we have udev's initramfs hook bail out somehow, leaving the old initrd
in place, if the kernel version isn't compatible? It's only udev which
should break it, right?

update-initramfs seems like the simplest place to fix this, though. We
should be able to detect the situation where the running kernel (its
upstream version number, anyway) doesn't match the "current" one (where the
symlink points) and continue successfully with a warning. How about that?
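A sketch of the kind of check proposed here, assuming the usual /initrd.img symlink convention (the helper names are hypothetical, not actual update-initramfs code):

```shell
#!/bin/sh
# Hypothetical sketch: compare the running kernel against the version the
# "current" initrd symlink points at, and warn (rather than fail) on mismatch.

# Extract a kernel version from a symlink target like
# "boot/initrd.img-2.6.15-14-686".
initrd_version() {
    printf '%s\n' "${1##*initrd.img-}"
}

check_current() {
    running="$(uname -r)"
    target="$(readlink /initrd.img 2>/dev/null || true)"
    [ -n "$target" ] || return 0    # no symlink: nothing to compare against
    current="$(initrd_version "$target")"
    if [ "$current" != "$running" ]; then
        echo "W: running kernel ($running) does not match current" \
             "initramfs ($current); continuing anyway" >&2
    fi
}
```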

I'm trying to boot Ubuntu from the boot partition set up by my primary Fedora Core 5 installation, where Ubuntu is installed (via the alternate install iso) on an LVM partition as a secondary Linux distro.

I had tried multiple things to get Ubuntu to boot on my system and finally arrived at copying the Ubuntu kernel and initrd to the same /boot set up by FC5. Ubuntu is installed on VolGroup00/LogVol03. My grub kernel root statement is root=/dev/mapper/VolGroup00-LogVol03/. The boot kept hanging at "Waiting for root file system". I followed the advice of threads in the Ubuntu forums and waited. I then got a message that /dev/mapper/VolGroup00-LogVol03/ does not exist, and the initrd brought up a shell with BusyBox.

Just before hanging, there was a message that four partitions were now active in VolGroup00.

I looked at the /dev of the initrd, and both /dev/mapper/VolGroup00-LogVol03 and /dev/VolGroup00/LogVol03 were there. So the root filesystem is really there! I then created a mount point /mnt/ubuntu and tried to mount the filesystem. It kept telling me the filesystem didn't exist until I tried "mount -t ext3 /dev/mapper/VolGroup00-LogVol03 /mnt/ubuntu", and it mounted. So it isn't detecting the filesystem type: it needs to be told what it is, and it gives the misleading message that the root filesystem simply doesn't exist.

Now the issue is how to tell the kernel that the filesystem is ext3, just like I did with the mount from the initrd. I have tried to add various statements such as rootfstype=ext3 to the kernel statement in grub, without success. I tried it in various places on the line (e.g., before and after the root= statement) with no success. I tried a statement rootflags="-t ext3" but that didn't work either. I also tried rootfstype statements with ext3fs and ext2 without success.
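For reference, a GRUB legacy stanza with the filesystem type forced would look something like this (the kernel version shown is hypothetical; the device name is from the report above — whether the Dapper initramfs actually honours rootfstype= is exactly what's in question here). One thing worth double-checking is the trailing slash in the reported root=/dev/mapper/VolGroup00-LogVol03/ line, since a device path normally has no trailing slash:

```
title  Ubuntu (LVM root)
root   (hd0,0)
kernel /vmlinuz-2.6.15-27-686 root=/dev/mapper/VolGroup00-LogVol03 rootfstype=ext3 ro
initrd /initrd.img-2.6.15-27-686
```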

Apparently the boot process is correctly recognizing and setting up the lvm partition, but is somehow failing to recognize and mount the file system there.

I have also experienced this problem (as Stan has described) but with a fresh install of Dapper alt install. Everything works according to plan (and according to the wiki entry) but then I fall into the same pit: "It kept telling me the filesystem didn't exist until I tried "mount -t ext3 /dev/mapper/VolGroup00-LogVol03 /mnt/ubuntu" and it mounted. So it isn't detecting the filesystem type, needs to be told what it is, and is giving a misleading message that the root filesystem simply doesn't exist."

Still there on Feisty.
I just had this problem (Stan's, except I only have a boot partition and an Ubuntu partition, no Fedora) when upgrading to Feisty using apt-get dist-upgrade. After rebooting, Feisty doesn't see my volume group /dev/mapper/vg1, stalling around the line "device-mapper: 4?-ioctl initialized".
I have finally managed to boot into an old kernel, 2.6.15-26. This was actually a surprise: when booting into the old kernel, the boot procedure stalled for about five minutes (something about waiting for raid), but now I finally have a root terminal open. Sorry I can't be more specific now, but I am afraid to reboot again until I have solved the problem. At least now I have some clues from the answers above, thanks.

If apt-get dist-upgrade is an obsolete and dangerous method of upgrading, maybe one could add a clear warning message when trying to do it? I only read afterwards on a forum that there is a newer, safer method of upgrading; I should probably have tried that instead. It would be nice if apt-get had told me too. Maybe a script that filters out dist-upgrade and shows a warning before calling the real apt-get.

On lucid server x86-64 after "aptitude upgrade" system was left unbootable. mount in initramfs said "device not found" for /dev/mapper/ubuntu-root. I fixed it after adding "rootfstype=ext4" in /boot/grub/grub.cfg.
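One caveat for anyone repeating this fix on GRUB 2 releases like Lucid: edits to /boot/grub/grub.cfg are overwritten the next time update-grub runs. The persistent place for the same option is /etc/default/grub (sketch, assuming no other kernel options are already set there):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="rootfstype=ext4"
```

Then run update-grub to regenerate grub.cfg with the option included.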

I marked bug 255477 as a duplicate of this. If anyone disagrees, please change it back.

This bug has been hanging around for a long time, and for those who get bitten by it, it means server downtime at the console level (which is always a pain) and the need to repair the system with alternate boot media. Here is what happened to me and how I fixed it:

1. I install a fresh Ubuntu 10.04.1 (i386) over the network, because the box has no optical drive and the BIOS is too old for USB booting. I install it with an LVM root.
2. A few weeks later I install updates (apt-get update; apt-get upgrade) which pull in a new kernel. I admit, I did not watch the update closely.
3. As requested by the update, I eventually reboot.
4. The box remains as silent as a brick on the network.
5. I pull it out, attach a console, boot it and see the "ALERT! /dev/mapper/...-root does not exist." and drop to a shell in the initrd.
6. In the initrd I see no sign of the lvm tools. I also don't see the /boot partition, but that's beside the point, I think.
7. I reboot and go to the grub menu - I don't have an old kernel to choose from!!!
8. I start the installation (through the network) again and enter rescue mode.
9. Eventually I end up on the rescue shell with my root partition mounted.
10. Dpkg-query tells me that lvm2 is not installed. What?
11. I apt-get install lvm2 which automatically runs update-initramfs ... That looks promising.
12. I reboot and all is well! Woohoo!
13. The box goes back into its corner, headless.

I described this in so much detail to make it clear that this little bug means a lot of work for somebody running an Ubuntu server and is a big annoyance. I don't think something like this should happen and something needs to be figured out. It is not a bug in initramfs-tools AFAICT but more in the dependencies as somebody on bug 255477 already mentioned. Can somebody please add the right project or package that this needs to be fixed in, then mark the bug invalid for initramfs-tools?

I will set this bug to critical in Ubuntu because I think it really is, and also to maybe draw some attention to it. Whoever downgrades it should please be so kind as to explain why it is not. Thank you. ;-)

I was affected by this bug on a Lucid (10.04) two-step upgrade from 9.04 (Jaunty to Karmic to Lucid) with an active root snapshot. I found that the 2.6.31-22 kernel worked consistently, and all later kernels attempted (2.6.32-22 and 2.6.32-25) failed in one of two ways:

The most common error was a drop to the shell after reporting a bad block on the root filesystem mount. I was never able to recover from this, even with manual mounts (always successful), mount moves and chroot.

The less common error was the "cannot find /dev/mapper/rootvg-rootlv" message from the wait-for-root procedure. In these cases, I was always able to just type exit and it would retry (successfully) the failed mount.

In both cases, the wait-for-root call would take a full 30 seconds (or longer with rootdelay) - it was NOT detecting the volume before the timeout in any case.

After much experimentation with rootfstype, rootdelay, etc. I finally decided to remove the snapshot of the root volume that I had allocated prior to the upgrade and have booted successfully since then.

Please note - there was a snapshot of the root LV that I had created prior to the upgrade (for safety), but I was mounting the base LV, not the snapshot.

I have since recreated a snapshot of the root volume again with no problems booting.
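For anyone wanting to try the same workaround, listing and removing a root-LV snapshot looks roughly like this (volume group and snapshot names here are hypothetical; check lvs output for your own names first):

```shell
lvs                          # snapshots show their origin in the Origin column
lvremove /dev/rootvg/root_snap   # hypothetical snapshot name
```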

I would conclude from this that there is a timing problem between the registration of the volumes with DM and when the volumes are actually usable. In some cases the /dev/mapper links would not be created within the timeout, and in others the links would be created but the read of the superblock would return garbage. I assume the differences between the kernel versions are related to this timing, or perhaps to the addition of threading that created a race condition. Further, the fact that the 25% full snapshot (the original) failed while the mostly empty snapshot succeeded would indicate that the timing problem is related to the number of changed pages in the snapshot. The fact that manipulating rootdelay never affected the problem (except to increase the time it took to appear) indicates that the race condition is somehow related to a lock held by wait-for-root.

I did not encounter any problems with the upgrade of the LVM configurations or packages between Jaunty and Lucid as described by others; the initramfs configurations were all correct (with the possible exception of wait-for-root never completing before the timeout).