This has been a problem for me for several years. I am enamored with LVM, and most of my system lives on LVM logical volumes. As a result, I don't get a boot log: mounting the /var logical volume mid-boot causes anything resembling a boot log to scream and die, so I had no log to consult to see what was happening.

I had been hitting a race condition when mounting the logical volumes during the boot-time init sequence, and had been misdiagnosing the problem from the beginning. I originally wrote a kludge script and linked it into /etc/rcS.d, but when I built my new computer, the race condition reappeared even with my kludge script in place.

I then started backtracking through the /etc/rcS.d directory to discover where, exactly, the race condition lived. After about a billion reboots testing several changes to the init scripts, I think this is, finally, a valid solution to my problems, although there is always the possibility that I am still dead wrong.

I think the race condition was in the /etc/lvm2 script. The external drives didn't have time to spin up and have their logical volumes recognized and activated before the script gave up on them and decided they couldn't be mounted. The /etc/checkfs.sh script (which fscks the filesystems) then couldn't recognize the logical volumes as legitimate partitions, so fsck threw errors on the filesystems living on those volumes.
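The core problem is the script giving up before the device node exists. A minimal sketch of the "wait for the device before giving up" idea (the helper name, device path, and timeout below are my own illustration, not code from the actual init scripts):

```shell
# Hypothetical helper: poll for a device node instead of failing
# immediately. Name, default timeout, and paths are assumptions.
wait_for_dev() {
    dev=$1
    tries=${2:-10}          # roughly seconds to wait before giving up
    while [ "$tries" -gt 0 ]; do
        [ -e "$dev" ] && return 0   # device appeared; success
        sleep 1
        tries=$((tries - 1))
    done
    return 1                # timed out; device never showed up
}
```

An init script could call something like `wait_for_dev /dev/mapper/vg-var 15` before handing the volume off to checkfs.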

I took the brute-force kludge approach and inserted a delay into /etc/lvm2 (start) by having it attempt to activate all logical volumes twice. I hope this solves the race condition with my external logical volumes on USB; eventually I'll have to replace it with a proper, non-brute-force fix in that script.
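The "activate twice" change could be sketched like this (the function name and the ACTIVATE override are mine, added so the retry logic can be exercised without real LVM tools; on the real system the command would be something like `vgchange -a y`):

```shell
# Sketch of the try-twice kludge, under the assumptions above.
# ACTIVATE defaults to the real LVM activation command but can be
# overridden for testing.
ACTIVATE=${ACTIVATE:-"/sbin/vgchange -a y"}

activate_lvs() {
    attempts=${1:-2}        # two passes, as in the kludge
    while [ "$attempts" -gt 0 ]; do
        $ACTIVATE && return 0              # all volumes activated
        attempts=$((attempts - 1))
        [ "$attempts" -gt 0 ] && sleep 2   # let USB drives spin up
    done
    return 1
}
```

The sleep between attempts is what actually buys the external drives their spin-up time; the second activation pass then picks up the volumes the first pass missed.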