What kind of initrd are you using? Generated via genkernel or dracut, or custom built?

If you auto-create it, don't use genkernel; switch to dracut with the "systemd" USE flag, as this will give you an initrd with systemd that can automatically hand over some state and udev stuff from the initrd to your main systemd process. I had a lot of trouble with genkernel+systemd.

If you custom-built your initrd, you may have to mount devtmpfs before creating the LVM nodes in /dev/mapper, and you probably need to start udevd. I'm not sure if that is enough. If not, you may try to add another unit that calls vgmknodes --refresh when systemd starts up.
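A minimal sketch of such a unit (the unit name, binary path and ordering here are my assumptions, adjust as needed):

Code:

```ini
# /etc/systemd/system/lvm-refresh.service (hypothetical name)
[Unit]
Description=Refresh LVM device nodes in /dev/mapper
DefaultDependencies=no
After=systemd-udevd.service
Before=local-fs-pre.target

[Service]
Type=oneshot
ExecStart=/sbin/vgmknodes --refresh

[Install]
WantedBy=sysinit.target
```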

To answer your question first: yes, custom built. Even if something were generated automatically, I would try to understand what came out and why, and then tailor it to my specific needs anyway...
So now I'm going to follow your hints... and will report back afterwards.

Could you give me a clue what an initramfs init script fragment would look like that mounts devtmpfs before creating the LVM nodes in /dev/mapper and then starts udevd? Probably busybox first needs some special compile options...

As for udev, I would first try to start it the same way systemd starts it. The /usr/lib/systemd/system/systemd-udevd.service file just executes the binary. So if you're lucky it will work with just:

Code:

/usr/lib/systemd/systemd-udevd &

(or the path you store that binary in)

I'm not sure, though, whether it will start without systemd running, or whether there are other dependencies. Also there might be a problem with starting it in the initramfs and then systemd trying to start it again... If it doesn't work, I would first test whether it also works without udev in the initramfs, as vgmknodes --refresh may trigger all the udev events anyway. (If you try that, have a look at the --noudevsync or --sysinit options of vgchange.)
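For reference, the relevant early part of a custom initramfs init script might then look roughly like this (a sketch only; the paths and options are assumptions and untested):

Code:

```shell
#!/bin/busybox sh
# Sketch of the early part of a custom initramfs /init (untested)

mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev       # kernel-provided device nodes

# Start udevd the same way its systemd unit does
/usr/lib/systemd/systemd-udevd --daemon --resolve-names=never

# Activate the volume groups; nodes should appear in /dev/mapper
vgchange -a y --sysinit
```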

If systemd complains at boot with something like "dev-mapper-xy.device timed out", then it most likely is a udev related problem, as udev needs to apply some tags to make the device available for systemd.

The unit I posted is a .service file. Something like "lvm-refresh.service" or similar should do.

At this point I can confirm that vgmknodes --refresh works, in the sense that it refreshes the nodes, but during boot the system complains that udev cannot be started.
So do you mean I should start udev at such an early stage, from the initramfs (having copied /usr/lib/systemd/systemd-udevd there)?

I thought it might have been necessary to have udev running when calling lvm the first time (in the initramfs) in order to have the /dev/mapper/* nodes managed by udev correctly, so that udev can handle events generated by lvm. Because systemd uses the udev database to query devices and might not be able to use the /dev/mapper devices properly, even if they are there as device nodes. This would include copying the udev daemon to the initramfs, yes.

I thought so because the dracut initramfs does that, but when I tried using a genkernel image that does not, I had issues with systemd (systemd would wait for the .device units until they timed out, even though the nodes were there).

But I'm not sure if that is really needed. Maybe it is enough to create them in the initramfs (using vgchange with --noudevsync) and then call vgmknodes --refresh later, when udev is actually running. That would at least be much easier to do; I would try that first.
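The udev-less variant sketched out (this is my reading of the man pages, untested):

Code:

```shell
# In the initramfs, with no udevd running:
vgchange -a y --noudevsync    # activate VGs without waiting for udev cookies

# Later, on the real root once systemd-udevd is up (e.g. from a oneshot unit):
vgmknodes --refresh           # recreate the /dev/mapper nodes and sync with udev
```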

If you have to include the udevd binary, make sure it has all linked libraries in the initramfs as well. Get a list with:

Code:

ldd /usr/lib/systemd/systemd-udevd

If that is not enough it's probably a good idea to create a dracut initramfs and have a look at how it is done there.

There you can see in the script which files are copied in order to use udev. For example, they copy the udev config files from /etc/udev/rules.d and /etc/udev/udev.conf to the initramfs. Most certainly a good idea. They also call udevadm trigger and settle after udevd has started up. This may be necessary to get all block devices detected etc.
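The udev startup sequence there looks roughly like this (a condensed sketch, not the literal dracut code):

Code:

```shell
# Inside the initramfs, after /dev, /proc and /sys are mounted
# (assumes /etc/udev/udev.conf and /etc/udev/rules.d were copied in at build time)
/usr/lib/systemd/systemd-udevd --daemon --resolve-names=never
udevadm trigger --action=add    # replay uevents for devices that already exist
udevadm settle                  # block until the udev event queue is empty
```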

But most interestingly, they kill the initramfs udevd at the end before switch_root:

Code:

killall -w systemd-udevd

should do it. That should allow a clean startup of the real root's udevd via systemd later.

The mounting of sysfs and procfs is correct and should be included. Udevd will most certainly need them, maybe lvm as well.

My custom initramfs does what it should do, but systemd can't find the decrypted devices afterwards.
The workaround with a custom service brings no improvement. The device nodes are there, but
they are missing from /dev/disk/by-uuid.

I managed to narrow the problem down to systemd-udevd not recognizing the decrypted devices.
So I tried to include systemd-udevd in the initramfs, which works up until the point where I try to
decrypt my devices. Then I get the (in)famous "udev cookie waiting for zero" error.
So I included the lvm binaries and links to vgchange etc., and everything from /lib/udev/, /etc/udev and /etc/lvm.
Still the same problem.

I don't want to use dracut or genkernel, since I need a script to decrypt my gpg-encrypted keys and pipe them to cryptsetup.
I have many encrypted volumes with different keys and don't want to type my passphrase more than once, so I cannot use dracut.

If the device nodes are present in initramfs, but get lost once switching to the real init system - there is a bug in that system somewhere and it's doubtful whether you can fix it from the initramfs side of things. Of course if it's just LVM you can set the lvm.conf on the real system to handle device nodes itself instead of relying on udev (which is one of the workarounds listed in the link above).

Unfortunately I do not have any experience with systemd.

If even the UUIDs are missing, then something is really weird, because usually even when there are no /dev/mapper/... nodes, you can still fall back to /dev/dm-1, /dev/dm-2, etc.

The dm-0 to dm-x nodes are present. I can also do a udevadm trigger and the mapper nodes are there. A mount -a is possible and I can mount the filesystems.
So the system seems fine, since I can use it normally. Everything is working, but the automatic boot.
And always having to enter the systemd rescue shell doesn't strike me as an optimal Gentoo boot.

The encrypted devices are in /dev/disk/by-uuid. But not the decrypted ones. So fsck from systemd tries to check /dev/mapper/home for example.
But it can't find the device in /dev/disk/by-uuid and times out. That is the actual problem.
So the culprit is LVM.

I have successfully changed my custom initramfs so that it includes systemd-udevd.
The boot now works, but systemd still complains about dev-mapper-home.device timing out.
Afterwards it mounts /dev/mapper/home to /home without complaints and the system boots up normally.

So this is not yet solved but if anyone faces a similar problem, feel free to contact me.

So, what did I do:
I copied systemd-udevd with all its dependencies to the initramfs and launched it with "systemd-udevd --daemon --resolve-names=never" instead of mdev,
after mounting devtmpfs to /dev.
After that I decrypted my gpg-encrypted keys and decrypted the devices. All show up under /dev/mapper/, but not as links to /dev/dm-x.
Before switching to the new root I killed systemd-udevd and mounted /dev to /newroot/dev via the move option.
In systemd I disabled lvmetad (the lvm2-lvmetad service) and set use_lvmetad = 0 in /etc/lvm/lvm.conf.
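Put together as a script fragment, the flow described above would look something like this (reconstructed from the steps; the device names and key paths are placeholders, not my actual setup):

Code:

```shell
#!/bin/busybox sh
# Reconstruction of the described initramfs flow (sketch, untested)

mount -t devtmpfs none /dev
/usr/lib/systemd/systemd-udevd --daemon --resolve-names=never

# Decrypt the gpg-encrypted key and pipe it to cryptsetup
# (/keys/home.gpg and /dev/sda2 are placeholder names)
gpg --decrypt /keys/home.gpg | cryptsetup --key-file=- luksOpen /dev/sda2 home

# Clean handover to the real root: stop the initramfs udevd, move /dev
killall -w systemd-udevd
mount --move /dev /newroot/dev
exec switch_root /newroot /usr/lib/systemd/systemd
```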

Hope this helps anyone with a similar problem. I will file a bug about the obviously faulty behaviour of lvm2 in ~amd64.

An Admin or the original creator of the topic can now set the title to [SOLVED] please.