I only noticed this after a couple of days on the new openrc, but /etc/init.d/localmount now unconditionally skips unmounting /usr in openrc-0.11.5 and later. The line:

Code:

no_umounts_r="$no_umounts_r|/proc|/proc/.*|/run|/sys|/sys/.*"

was changed to:

Code:

no_umounts_r="$no_umounts_r|/proc|/proc/.*|/run|/sys|/sys/.*|/usr"

This showed up as an LVM error here, since LVM refused to deactivate a volume that was still in use.

I've modified the patches, so initramfs is now a variable in /etc/rc.conf, which controls what localmount does when it stops. As usual, if the variable is unset or at its default, none of these patches do anything.

However, I would like to draw your attention to a couple of things I found while I was digging around to see what had files open in /usr. It simply never occurred to me that localmount was no longer unmounting /usr, so I didn't even look at that part of rc.log until the following didn't fix it. Nevertheless, I think what I changed here was useful, and it might be useful to you, especially if you find processes that still have files open in /usr causing issues when shutting down or restarting your machine.
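If you want to do the same digging, lsof +D /usr or fuser -vm /usr (as root) is the usual way. As an illustration of what they do under the hood, here's a rough procfs-only stand-in, demonstrated on the current shell's own descriptors with a temp file (the helper name and paths are mine, not from the original post):

```shell
# List this shell's open files under a given directory, straight from procfs.
# (lsof/fuser do the same across all of /proc/*/fd when run as root.)
open_under() {
    for fd in /proc/$$/fd/*; do
        tgt=$(readlink "$fd" 2>/dev/null) || continue
        case $tgt in "$1"/*) printf '%s\n' "$tgt" ;; esac
    done
}

f=$(mktemp /tmp/demo.XXXXXX)
exec 3< "$f"        # hold the file open on fd 3
open_under /tmp      # our demo file shows up in the listing
exec 3<&-
rm -f "$f"
```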

The first thing I found which concerned me is that agetty has /usr/lib64/locale/locale-archive open. I do remember Frysinger mentioning on the dev ML that a /lib/locale directory might be needed (I think as an example of how /usr is getting polluted, and how it's going to be more and more difficult to keep things on rootfs), but it doesn't really bother me if I need to tweak a few things. As there's only that one file in there, I had no compunction about doing the following.
NB I did this in a console login, after I had run /etc/init.d/xdm stop.
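The command block itself is missing from this copy of the post; from the description it was essentially a mkdir / cp -p / cmp sequence against /lib64 and /usr/lib64. Here is that sequence rehearsed on a scratch prefix so it can be tried safely ($root stands in for the real filesystem, and the archive contents are faked; on a live system you'd drop the $root prefix):

```shell
# Rehearsal of the locale-archive move on a scratch prefix.
root=$(mktemp -d)
mkdir -p "$root/usr/lib64/locale" "$root/lib64"
echo fake-archive > "$root/usr/lib64/locale/locale-archive"

mkdir "$root/lib64/locale"
cp -p "$root/usr/lib64/locale/locale-archive" "$root/lib64/locale/"
cmp "$root/lib64/locale/locale-archive" \
    "$root/usr/lib64/locale/locale-archive" && echo "copy verified"
```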

(The cmp line just checks that the copy is identical to the original.)
However, rmdir /usr/lib64/locale fails, since there's a hidden .keep_sys-libs_glibc-2.2 file (that's the version here) in the directory. The following got that taken care of, and then I could symlink the /usr directory to the rootfs one:
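Again, the commands didn't survive in this copy; here's the described sequence rehearsed on a scratch prefix (the .keep filename follows the post, everything else is a sketch, and the relative symlink target is my choice):

```shell
root=$(mktemp -d)
mkdir -p "$root/usr/lib64/locale" "$root/lib64/locale"
touch "$root/usr/lib64/locale/.keep_sys-libs_glibc-2.2"

# rmdir refuses while the hidden .keep file is still present:
rmdir "$root/usr/lib64/locale" 2>/dev/null || echo "rmdir refused (dir not empty)"

rm "$root/usr/lib64/locale/.keep_sys-libs_glibc-2.2"
rmdir "$root/usr/lib64/locale"
# Now the /usr directory can become a symlink to the rootfs copy:
ln -s ../../lib64/locale "$root/usr/lib64/locale"
readlink "$root/usr/lib64/locale"
```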

Note that agetty is started by PID 1, and its linkage is very tight (see ldd /sbin/agetty): it links only against libc.so.6, so it needs only that, the dynamic loader, and the linux-vdso gate. Obviously, with a libc data directory on /usr, that goes out of the window. I'm not interested in patching ebuilds unless I really have to, especially if a simple symlink will suffice.

This works thanks to the decades-old Unix tradition of only truly deleting a file once the last descriptor is closed, and allowing unlink to alter the directory hierarchy in the meantime. So processes with the existing file open continue to have its data available. (This is the same reason upgrades of our desktops don't bring down the running machine.)
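A minimal demonstration of that semantic, using nothing but a temp file:

```shell
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"      # hold an open descriptor on the file
rm "$tmp"           # unlink: the name is gone from the directory...
cat <&3             # ...but the data is still readable via the open fd
exec 3<&-
```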

However, there is another glibc directory, /usr/lib64/gconv, which holds shared libraries for character-set conversion. This causes bash to keep /usr/lib64/gconv/gconv-modules.cache open while it's running, which is bad if /bin/sh is a symlink to /bin/bash (the default on Linux): you end up with runscript having that file open at shutdown time. Again, ldd /bin/bash shows that there's no linkage going on outside /lib64; clearly the conversion .so files must be dlopen'ed by libc from that path. So again, a simple symlink on the directory, at the cost of 6.5 MB of rootfs space, gets our rootfs dependency back.

Code:

mkdir /lib64/gconv
cp -p /usr/lib64/gconv/* /lib64/gconv
sync; sync

Now a check to make sure files have been copied correctly:

Code:

for f in /lib64/gconv/*; do cmp "$f" "/usr$f" || echo "$f"; done

If that produces any output, there's an issue with the filename(s) mentioned.

The rmdir had no issue here, since there were no hidden dotfiles. Obviously, if it doesn't work, check what files are in the directory with ls -A. Once the directory is removed, symlink it to /lib64/gconv, as before.

With the above, I no longer have any files in /usr open at shutdown time.

Note that there are lib32 variants of the above directories, but I don't consider those an issue since they're not used by system processes as part of boot or shutdown. In fact there isn't even a .cache file in /usr/lib32/gconv here, so it's not been used yet. On a 64-bit machine, any such apps are very unlikely to run as part of system init, but you should be aware of the possibility. (And if you're on a 32-bit install, you'll only have lib to worry about.)

If you think the above is painful, I'd love to be told what I'm missing, or to see a simple patch for glibc that could be put into /etc/portage. I just wanted to get my machine working right: I actually thought the problem was caused by my switch to a *-kit-free desktop, so I spent a lot longer checking exactly which files were open, when in fact it was the change in localmount's shutdown behaviour.

However, by the time I got round to sorting out the localmount initscript, I had already done the above, so I'm writing it up here in case someone using these patches finds that they have processes mysteriously open in /usr at shutdown. As ever, comments, feedback, your experiences, and especially improvements most welcome.

Note also that the second part should not be an issue if you change /bin/sh to point to /bin/bb (which should also give you performance improvements across the board, including in system startup times). I couldn't yet see how to do that for runscript without changing the symlink, and I didn't get too far with reading the code. Perhaps setting SHELL=/bin/bb in /etc/rc.conf will work; I'm not sure as yet. I wanted to have the problem solved for people using bash.

I copied lsof to rootfs as well, once I realised that it was on /usr and ldd showed me it didn't have to be. No doubt it'll get overwritten on the next upgrade, but I only needed it while I ensured that there was no issue with the file being open.

Note that I'm using /etc/log so that I can monitor early boot and late shutdown, when /var might be unmounted. I use this mainly for:

Code:

rc_log_path="/etc/log/rc.log"

in /etc/rc.conf for when I turn logging on with rc_logger="YES". To set this up I added the following to /etc/logrotate.conf:
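The logrotate snippet itself didn't survive in this copy of the post; a plausible stanza for a log under /etc/log would look something like the following (an assumption on my part, not the original):

```
/etc/log/rc.log {
    weekly
    rotate 4
    missingok
    notifempty
}
```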

Ah, nice one ryao: I was wondering about picking up that /usr was mounted at startup, since it's an obvious fix. Good to know that I can shove crap in "$rc_svcdir" for the future ;) You really should quote that expansion, btw: touch "$rc_svcdir"/usr_premounted at minimum (though I usually quote the whole word, so: touch "$rc_svcdir/usr_premounted"). I'm not interested in hearing how it won't ever have spaces in it, yadda yadda.
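A two-line illustration of why that quoting matters (plain shell, nothing openrc-specific; the directory value is hypothetical):

```shell
d="/run/open rc"     # hypothetical directory name containing a space
set -- $d            # unquoted expansion: word-split into two arguments
echo "unquoted: $# args"
set -- "$d"          # quoted expansion: stays one argument
echo "quoted: $# args"
```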

As for the bug report: sorry, but I wanted my machine booting cleanly again, and like I said, I was actually switching to a *-kit-free KDE, so I tracked the problem down to the root cause as I wanted to know what was up. And I did bring it immediately to your attention on IRC, since it will affect forked udevs too ;)

As it is, I'm happy that I've switched the locale and gconv directories to rootfs: as you can see, both are needed in normal operation (for agetty and bash, for a start), so personally I do not want them in /usr at all, come what may. The next step is to move the pci and hw databases to rootfs; I'd actually prefer it if Gentoo kept up their old ebuild rather than use the udev parts, unless somehow systemd is taking over maintenance of both databases and their web interfaces etc.?

Still, thanks for the fix to localmount: I'll add it to the front post when I'm more awake, so we can go back to keeping initramfs in udev.conf alone.

steveL, WilliamH wrote the fix. I just posted it for convenience. Anyway, it looks to me like we could merge your modifications into OpenRC if we moved this check into its own script in the sysinit runlevel and made them depend on it.

By the way, with regard to your comment about udev upstream requiring an initramfs to mount /usr, we have a udev fork in development that has the goal of restoring support for this, among other things:

Support for a separate /usr is currently broken (because we forked off systemd 195), but we plan to restore it before our first release. I would encourage you to try it once we have it in the main tree.

Quote:

Anyway, it looks to me like we could merge your modifications into OpenRC if we moved this check into its own script in the sysinit runlevel and made them depend on it.

Thanks, that sounds encouraging.

It took me a while to see what you were getting at, but you're right: if /usr is pre-mounted at startup, we don't need to start udev after localmount; if it isn't, then we do, and additionally localmount should umount it at shutdown. The thing is, we still need to configure udev via a variable to change its need and depend settings, and it has to be in boot, not sysinit. While we can check at runtime that the setup is consistent, we still need the user to change the variable so that openrc can do the dependency calculation correctly, afaict.

If we had dynamic dependencies (and I don't know that we don't, or that they're not coming; I've just never heard of them) then udev would check $RC_RUNLEVEL, and if usr_premounted is unset and it is starting in sysinit, tell openrc to start it in boot instead. In either case, the runlevel would determine its need and provide settings. So if we could delay its start from sysinit to boot when /usr is not premounted, the whole thing would be automatic.

But udev-mount has to provide dev when udev is starting late, and I don't think that decision should be based on whether /usr is pre-mounted or not, but on whether the admin has configured it to start late, since they know that the kernel and builtin module device nodes are sufficient. Better to complain noisily if things are inconsistent, which is only an issue for people not using an initramfs wrt /usr.

So as it is, we still need to move udev's runlevel, and we still need the variable so that udev-mount and udev can tell openrc the correct dependencies. And even if we didn't I don't think anyone would be happy with patches that made udev-mount provide dev without some sort of configuration opt-in.

Given that, it made sense to use the same variable for the localmount decision, since it's much lighter-weight (no filesystem access; I did consider checking whether /usr was mounted when localmount started, but I already had the variable). However, that's not robust for localmount in all configurations: it should umount what it mounted, and not assume that it never mounted /usr.
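To make the idea concrete, here is a sketch of the kind of conditional depend block being discussed, stubbed so it runs outside openrc (the usr_premounted flag file and the sysfs/localmount dependencies follow the thread; the stub functions and exact logic are my assumptions, not the actual patches):

```shell
# Stubs standing in for openrc's dependency declarations:
need() { echo "need $*"; }
RC_SVCDIR=$(mktemp -d)

depend() {
    if [ -e "$RC_SVCDIR/usr_premounted" ]; then
        need sysfs            # /usr already there: udev can start early
    else
        need localmount       # udev must wait until /usr is mounted
    fi
}

depend                               # no flag file: need localmount
touch "$RC_SVCDIR/usr_premounted"
depend                               # flag file present: need sysfs
```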

I will make the current patches more robust by checking run-level for consistency with initramfs, and udev-mount can need sysfs when the variable is set, so that the user only has to move udev, and set the variable, once the scripts are patched. Thanks for the discussion so far, then, it's already leading to improvements :)

Moving the check to something in sysinit, then, isn't needed for these patches in the current state of things. But having that information would be useful for checking consistency, and it might come in useful elsewhere. Until it does, though, I don't think it merits a new service: why not just do it as part of openrc startup, and provide a runscript/environment variable, not a file? That way, once it does become needed elsewhere, it's already in place and efficient, and in the meantime we can use it for localmount and udev warnings. Or not; up to you. I see it as a small amount of code for something that is always going to be needed by localmount, and that will be useful elsewhere; but until it's required elsewhere, localmount can continue as above.

Anyone else who reads this and wonders what other variables are around, check out man runscript. I realise some of you will think that's obvious, but it took me ages to find that for some reason.

Quote:

By the way, with regard to your comment about udev upstream requiring an initramfs to mount /usr, we have a udev fork in development that has the goal of restoring support for this, among other things.

Support for a separate /usr is currently broken (because we forked off systemd 195), but we plan to restore it before our first release. I would encourage you to try it once we have it in the main tree.

Cool, I'll try it out on another machine or a VM when you've put a working release out.

Until it replaces upstream udev in Gentoo, though, there's going to be a need for these patches, so my desktop won't switch for a while. Also, I have to say I'm very conservative about what runs on my machine, even more so with system stuff. I'm much happier delaying udev startup to after localmount, since I know I set my machine up to boot without needing udev to mount drives (nor indeed for network) than changing udev for another piece of software.

Well, I finally finished update --toolchain, and a monster upgrade was made worse by /var filling up thanks to /var/db/pkg, which I'd never eclean'ed ;) Stopping just where it did (at katepart) was really good in one sense. I left my machine up and, after figuring out eclean, let eclean-pkg -d do its thing (since I was about to upgrade the whole of KDE in any case). I got 7G back from that, and was so impressed I let it do its thing on distfiles (a partition) and got 9G back :-)

udev-204 was a PITA simply because of what led to the cautionary note in the OP: the ebuild silently added udev to sysinit, despite the fact that it was already in boot and this was an upgrade. No warning, nothing. Because I had loads of time with just a root konsole and yakuake, I went through all the elog output and copied out the bits that were actionable. So I felt confident, after reviewing all the etc-updates several times (nothing to do but wait for stuff to compile..) and optimising the initscripts to make sure I knew exactly what was happening, that things would work... Hey-ho, I know what to watch for next time.

Since there were no actual changes to the patches, I'm assuming anyone using them spotted that (I was tired, mensch ;), or perhaps no-one's using them. If you're cautious (sensible) and watching this topic: feel free to update; just run:

Code:

rc-update del udev sysinit

before you reboot. And keep an eye on rc-update show in future: udev must be in boot when initramfs=no.

OK, I think I have a problem with this.
I just upgraded an old system, and afterwards the machine was rebooted and never came back up.
The info that I have of machine is:

- It has about 6 partitions (/, /boot, /usr, /var, /home, /var/vmail), all on RAID1

When the system starts, it shows a lot of errors like commands not found; the md devices are created, but it can't find them to mount the partitions. I think this is all caused by the /usr changes and the udev changes.

How can I revert this, guys? I need your help, and I'm not really a pro with this kind of issue.

Thanks a lot.

PS: This is an old mail server; the mail is on a separate partition (/var/vmail). I was thinking of reinstalling it, but I need to back up a lot of configuration and databases that I can't access now, because I don't have access to the operating system and/or partitions.

Quote:

- It has about 6 partitions (/, /boot, /usr, /var, /home, /var/vmail), all on RAID1

If root is on RAID, then this method does not work; you must have been using an initrd/initramfs before, and you need an initramfs now.

Quote:

PS: This is an old mail server; the mail is on a separate partition (/var/vmail). I was thinking of reinstalling it, but I need to back up a lot of configuration and databases that I can't access now, because I don't have access to the operating system and/or partitions.

Yes, you do, if you have physical access to the machine: boot from a live disk like sysresccd, mount the partitions read-only under /mnt, and back them up ASAP.

For the overall thing, I'd log into IRC: chat.freenode.net and ask in #gentoo for live help as you go along.

I successfully moved my primary system over to eudev, following SteveL's scripts, to prepare for the Council decision that takes effect tomorrow, forcing Gentoo users with a separate /usr to use an initramfs (I'm not looking to rehash that discussion here).

The most "painful" part of the process was switching my fstab from UUIDs back to /dev/mapper labels, but that was pretty trivial.

I first did the switch to eudev and ensured everything booted normally, then went in and performed the patches in the first post of this thread, taking heed of the warnings about udev being added to sysinit and needing to change that to boot.

Thanks, saellaven, that's great info to have. I've updated the title to reflect that the patches are confirmed to work with eudev.

The way things are going with upstream "systemd-udev-dbus-syslog-kitchen_sink" it looks like I'll have to switch soon as well: I'm not happy with the more and more strident pronouncements from Poettering. It appears the more you allow him to get away with, the more he wants to get away with, and treats his brainwashed followers as vindication of his idiotic approach. Case in point: "Lennart has been effective at making people worry that not using systemd is too dangerous to consider." That's just Stockholm Syndrome, brought on by a constant drip-drip of FUD, afaic.

I may be missing something obvious, but I have a small question regarding this patch set.

I'm under the impression that udev is responsible for the consistent device naming of my disks, sda, sdb etc... If udev is starting after mounting disks, and I'm using /dev/sda, /dev/sdb in fstab, is it possible that some cosmic ray could convince the kernel to reverse my disk names and cause boot failures? And if that is the case, would using LABEL= or UUID= in fstab avoid that?

Quote:

I may be missing something obvious, but I have a small question regarding this patch set.

I'm under the impression that udev is responsible for the consistent device naming of my disks, sda, sdb etc... If udev is starting after mounting disks, and I'm using /dev/sda, /dev/sdb in fstab, is it possible that some cosmic ray could convince the kernel to reverse my disk names and cause boot failures? And if that is the case, would using LABEL= or UUID= in fstab avoid that?

Hm, never happened in my 20 year experience, including long before udev was invented or UUID became widely adopted

Yeah, I've never seen that happen with fixed disks; the usual case where that could happen is with removable USB drives, which are also usually /dev/sd*. However, if you have your mobo and hdd controller built-in (which is what we do when we set up our kernel, and further required for this method to work), then those will be initialised first in any case. USB works better as modules, ime: so you can rmmod at will should you need to reinitialise it.

Presumably if you have both built-in then there is a race: but again, USB tends to be slower, and in any event it's not such a great idea, as discussed. The race is much more likely to strike on a bindist, where everything that can be a module is one, since the kernel has to be able to boot on any machine.
That's kinda the antithesis of Gentoo, where you customise for your specific machine/s.

Quote:

Hm, never happened in my 20 year experience, including long before udev was invented or UUID became widely adopted

steveL wrote:

Yeah I've never seen that happen with fixed-disks; the usual case where that could happen is with removable USB drives, which are also usually /dev/sd*; however if you have your mobo and hdd controller built-in (which is what we do when we setup our kernel, and further required for this method to work) then those will be initialised first in any case. USB works better as modules, ime: so you can rmmod at will should you need to reinitialise it.

Presumably if you have both built-in then there is a race: but again, USB tends to be slower, and in any event it's not such a great idea, as discussed. The race is much more likely to strike on a bindist, where everything that can be a module is one, since the kernel has to be able to boot on any machine.
That's kinda the antithesis of Gentoo, where you customise for your specific machine/s.

Ah, thanks for the clarifications. I was reading things from a few other distros, but I guess since most of them are bindists, as you mentioned, udev persistent naming matters more there.

And to answer my own question, it does not look like using LABEL= or UUID= in fstab would help anything, as from mount's man page:

Code:

The recommended setup is to use tags (e.g. LABEL=<label>) rather than /dev/disk/by-{label,uuid,partuuid,partlabel} udev symlinks in the /etc/fstab file. The tags are more readable, robust and portable. The mount(8) command internally uses udev symlinks, so use of the symlinks in /etc/fstab has no advantage over the tags.

Quote:

Ah, thanks for the clarifications. I was reading things from a few other distros, but I guess since most of them are bindists, as you mentioned, udev persistent naming matters more there.

Persistence can be useful for removable disks, eg if you always want to mount a backup drive at a particular location. But that doesn't require udev afaik, just the LABEL or UUID tag in fstab. Both are written by mkfs (label with -L and UUID whenever a new fs is made, iirc) so they're a property of the fs, nothing else.

Quote:

And to answer my own question, it does not look like using LABEL= or UUID= in fstab would help anything

Hmm, that says to me that the LABEL tag is preferable to using a udev-specific path; whether in an initramfs or not, udev is never the first thing to start up.
You're the one at the coalface tho ;)

Oh minor point: there's no need to quote preceding posts if your reply immediately follows them. We quote to answer specific parts, just like an email conversation. And we try to chop out stuff that's not pertinent, or already present in the post above as with the mount info, to keep bandwidth (mental as well as network;) lower.

The bit from mount seemed to say that fstab using /dev/by-label/whatever is exactly equivalent to LABEL=whatever, just saving some typing, and that mount would just call out the full udev created path for you...

Thanks for the forum advice. I've been on google groups a lot recently, so you get used to massive quote chains. I'll endeavor to remember my BB etiquette.

Quote:

The bit from mount seemed to say that fstab using /dev/by-label/whatever is exactly equivalent to LABEL=whatever, just saving some typing, and that mount would just call out the full udev created path for you...

Huh? "The recommended setup is to use tags rather than udev symlinks" seems pretty clear to me: all the rest is saying that mount will use the udev symlink under the hood, so you do not gain anything by using it yourself, as it's exactly equivalent. The only difference is that mount will still work with the tag (since it's part of the fs), when there is no udev running, and thus no symlink created by it. Certainly mount works without udev, or we would not be able to run anything.

In any event I wouldn't bother with either personally for fixed disks accessed via your mobo chipset, as discussed.

Quote:

Thanks for the forum advice. I've been on google groups a lot recently, so you get used to massive quote chains. I'll endeavor to remember my BB etiquette.

No problem :) Personally I hate them on google groups as well, and have never understood why people don't take the hint from decent ML threads. It's one reason I don't like google groups -- the other being the fact that it's google, who are just another tax-dodging corporation despite their marketing propaganda, who make money out of users, whose clients are in fact advertisers rather than users (the users are merely the product being sold), and who are uniquely placed to sell you out to both advertisers and the NSA's illegal wiretapping.

https://duckduckgo.com/ is much better, and to be supported, imo. Soon enough others will start doing the same thing (ie not keeping any data that could be linked back to you, so there's nothing for any "security" agency to be interested in.)

Ok, well, going with testing theories by experimentation, I deleted those by-* symlinks and mounted by label, and you are absolutely right, works fine, so I was just being confused. That happens a fair bit

And yeah, I wouldn't be using google groups, save that every software project I look at these days seems to use it as a mailing list/support forum.

So far so good. The problem is that I haven't rebooted yet, and by the look of the stringent warnings in this thread, it seems I won't make it out alive :/

The most worrying part is booting by UUID being broken. I think I need it because, in the past, my drives sometimes came up out of order.

Could someone recap the steps to follow for a successful boot?

I followed some steps from the OP. But things aren't entirely clear.

I am not booting with some partitions on lvm. Rebuilding lvm wants virtual/udev

Does LVM2 need to be re-installed with the udev USE flag, or what?

Some of the instructions were odd, like the part about initramfs=NO in /etc/rc.conf, because it is NOT there.

Could someone re-cap the steps to follow in order to have a successfully booting system?

thanks.

PS: now I am starting to get what's going on. I have had /usr on a separate partition since the first time I used Gentoo, as the earliest Gentoo handbook had it in the instructions. But in my last couple of installs it became impossible to do so. I didn't get why it wasn't working, and in the end I had to resort to keeping /usr on the root partition, but now I am starting to connect the dots.

The earliest Gentoo installs had this partition layout as an example:

/boot
/
/usr
/var
/opt
/home

How sad that the original way of installing now has to be found in "Unsupported Software". WTF. Not even stickied.

Quote:

So far so good. The problem is that I haven't rebooted yet, and by the look of the stringent warnings in this thread, it seems I won't make it out alive :/

The most worrying part is booting by UUID being broken. I think I need it because, in the past, my drives sometimes came up out of order.

If you have drives on multiple different controllers, it's possible for those drives to come up in a different order depending on changes in the kernel, the order your modules are loaded in if you're using modular drivers for your controllers, etc.

It is a valid concern and even one that I had, but never actually encountered, given that, at one point, my primary system had 2 SCSI controllers in addition to the onboard SATA and ATA controllers, all of which had drives of some sort on them. It would certainly be nice if the kernel absorbed the functionality to boot by UUID directly into it.

Quote:

Could someone recap the steps to follow for a successful boot?

I followed some steps from the OP. But things aren't entirely clear.

Do it in stages... change one thing at a time, so it is easier to figure out where something went wrong if something does go wrong. eudev should be a direct drop-in replacement for udev. Since you've installed that, reboot and make sure it works as-is. After that, proceed with SteveL's patches.

Quote:

I am not booting with some partitions on lvm. Rebuilding lvm wants virtual/udev

Does LVM2 need to be re-installed with the udev USE flag, or what?

virtual/udev is fulfilled by eudev, so it's fine to rebuild lvm with the udev flag

Quote:

Some of the instructions were odd, like the part about initramfs=NO in /etc/rc.conf, because it is NOT there.

That is simply a flag to say that you are not using an initramfs, telling the system to go through the early-mount steps for /usr.

Quote:

Could someone re-cap the steps to follow in order to have a successfully booting system?

I didn't keep detailed notes but...
1) remove udev and install eudev (this step is optional, but I wanted off the udev treadmill since I don't trust the systemd developers) and reboot to make sure it works
2) edit the files as shown in the first post of the thread and reboot to make sure nothing has broken
3) make sure your fstab is setup to mount the partitions by their device nodes or LABELs and not by UUID and reboot to make sure nothing has broken
4) change udev from sysinit to boot (rc-update del udev sysinit && rc-update add udev boot)
5) set initramfs="NO" in /etc/rc.conf and reboot to make sure everything is working
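For step 3, here's a quick way to spot fstab entries that still use UUID=, demonstrated on a sample file rather than the real /etc/fstab (the sample contents are hypothetical; point the grep at /etc/fstab on your own system):

```shell
# A sample fstab with one entry still to be converted:
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-ABCD   /boot   ext2   noauto,noatime   1 2
/dev/sda3        /       ext4   noatime          0 1
LABEL=home       /home   ext4   noatime          0 2
EOF
# Count non-comment lines still using UUID= (1 for this sample):
grep -c '^[^#]*UUID=' /tmp/fstab.sample
```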

Quote:

How sad that the original way of installing now has to be found in "Unsupported Software". WTF. Not even stickied.

And it's not because of technical constraints, but political reasons masquerading behind debunked artificial technical constraints... all designed to force people to submit to the agenda of an arrogant few.

Quote:

Yeah, I've never seen that happen with fixed disks; the usual case where that could happen is with removable USB drives, which are also usually /dev/sd*. However, if you have your mobo and hdd controller built-in (which is what we do when we set up our kernel, and further required for this method to work), then those will be initialised first in any case. USB works better as modules, ime: so you can rmmod at will should you need to reinitialise it.

I'm running my work system off a USB disk (did not want to delete the officially supported other system) and I got that problem when changing the scandelay. All my partitions also have labels (in fstab), so all this caused was that I had to give another partition as root when booting.

Quote:

I've never seen that happen with fixed disks; the usual case where that could happen is with removable USB drives, which are also usually /dev/sd*. However, if you have your mobo and hdd controller built-in (which is what we do when we set up our kernel, and further required for this method to work), then those will be initialised first in any case.

Quote:

I'm running my work system off a USB disk (did not want to delete the officially supported other system) and I got that problem when changing the scandelay. All my partitions also have labels (in fstab), so all this caused was that I had to give another partition as root when booting.

Ah, interesting. So /dev/sda is not right for root, but eg sdb or sdc is? In grub terms, I guess that's (hd2,partid) or (hd3,partid), instead of (hd0,partid).

I have an option in my BIOS to turn off the early USB scan (it says it won't boot from USB without it, which is what I want). I've not worked with booting from a USB drive; I tend to see a small root as more attractive (you have to have access to /boot, don't you?), and then split off /usr, which changes much more. With an RPi you have the SD card for root, though I'd be interested in using eg a flash drive and a larger removable, or something.

Are you using an option in fstab to ensure the correct LABEL, and what about root=PARTUUID=xx on the kernel cmdline, or the like? I'd be interested to see your config setup (ie for the root partition).

--
That's why I've been offline: trouble with hard disks. I got an odd message I'd never seen before (port 00, iirc), but only when the USB scan option was on with my phone plugged in, and my disk drive refused to boot. With it off, it got as far as loading grub from the MBR. It turned out to be a loose power connection, TF, heh, but I took a bit of time out as well :-)

I haven't had a chance to debug it yet, but I've had to mask >sys-fs/lvm2-2.02.100, as it no longer makes the /dev/mapper nodes on boot. I can manually run vgscan --mknodes when I get to my shell, but otherwise it's broken. Reverting to lvm2-2.02.97-r1 fixed it.