Hey, so I wrote a small script, mostly taken from the Gentoo Handbook. It's designed to chroot into a Gentoo system so that you can begin maintenance on it. It does not handle mounting the partitions for the Gentoo system, since that varies a lot and I don't know of a reliable way to figure out which partitions mount where. That is for the user to determine. Anyway, here it is.

This should probably be posted in Documentation, Tips & Tricks.

After the user has mounted the root of the target system at /mnt/gentoo, you can read /mnt/gentoo/etc/fstab and parse that.
You can't use it as-is, but you can prepend /mnt/gentoo to the mount points.
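Something like this might do for that step (a rough sketch, untested; it assumes the fstab device fields are in a form mount can use directly, such as UUID=... or a correct device path):

Code:

#!/bin/bash
# Sketch: mount the target's filesystems under /mnt/gentoo by parsing
# its own fstab. Assumes the root is already mounted at /mnt/gentoo.
TARGET=/mnt/gentoo

# Strip comments and blank lines, then walk the remaining entries.
grep -Ev '^[[:space:]]*(#|$)' "$TARGET/etc/fstab" |
while read -r src mnt fstype opts dump pass; do
    [ "$fstype" = "swap" ] && continue   # skip swap entries
    [ "$mnt" = "/" ] && continue         # root is already mounted
    # Prepend the chroot prefix to the mount point from fstab.
    mount -t "$fstype" -o "$opts" "$src" "$TARGET$mnt"
done

This relies on the fstab listing parent mount points before their children, which is usually the case.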

Extracting the filesystem location (partition) is harder. If it's given as a UUID or PARTUUID, you can use it as-is.
If it's /dev/sd*, the drive name may not be correct, but you can compare it to the drive that has a partition mounted at /mnt/gentoo.
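For that comparison, something along these lines could identify the drive that actually backs /mnt/gentoo (a sketch, using findmnt and lsblk from util-linux):

Code:

# Which device is mounted at /mnt/gentoo, and which disk is it on?
ROOTDEV=$(findmnt -n -o SOURCE /mnt/gentoo)   # e.g. /dev/sdb3
ROOTDISK=$(lsblk -no PKNAME "$ROOTDEV")       # e.g. sdb
echo "Target root is on /dev/$ROOTDISK"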

For multi-drive installs that use several /dev/sd* devices, you don't have enough information.


Why use sudo on every single command? It would be simpler to say that the script must run as root.
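For example, a check like this near the top would be enough (a minimal sketch):

Code:

if [ "$(id -u)" -ne 0 ]; then
    echo "This script must be run as root." >&2
    exit 1
fi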

StevenC21 wrote:

Code:

touch /tmp/${$}.gntmnt # Create a temporary file to tell the script that we are inside the system

Brace notation is not needed here. You can use $$ (but as above, you shouldn't be putting your lockfile in /tmp).

StevenC21 wrote:

Code:

if [ ! -e /tmp/*.gntmnt ]; then

There are some Time-of-Check/Time-of-Use errors here. If another instance starts after this test and mounts its pseudo-filesystems before you reach the umount, you will take them away from it. If a stray lockfile is present when your script starts, and not present when you reach this line, then you will unmount filesystems that you never mounted. This could be particularly bad if $FS is blank, which you do not guard against. In that case, you would lazy-unmount the host pseudo-filesystems.

The simplest fix for all these problems is to use mount namespaces. Make a private mount namespace. Unconditionally bind mount the pseudo-filesystems you want. Let the kernel remove them when you exit. That removes the need for lock files and eliminates all the associated race conditions.
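A sketch of what that could look like (untested; the IN_PRIVATE_NS guard variable is made up, and the target root is assumed to already be mounted at /mnt/gentoo):

Code:

#!/bin/bash
# Re-execute ourselves inside a private mount namespace, so every mount
# made below disappears automatically when the script exits.
if [ -z "$IN_PRIVATE_NS" ]; then
    exec env IN_PRIVATE_NS=1 unshare --mount --propagation private "$0" "$@"
fi

TARGET=/mnt/gentoo

# Mount the pseudo-filesystems unconditionally; no lockfile is needed,
# because these mounts are invisible outside our private namespace.
mount -t proc proc "$TARGET/proc"
mount --rbind /sys "$TARGET/sys"
mount --rbind /dev "$TARGET/dev"

chroot "$TARGET" /bin/bash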

I am curious about your last suggestion. You mentioned that a second instance could mount the pseudo-filesystems... but this would not happen, since it checks for any lockfiles denoting that someone is already in the filesystem. If you could explain in greater detail how this scenario might occur, I would be grateful.

Suppose both instances are exiting at about the same time. Instance #1 removes its own lockfile and is then suspended. Instance #2 removes its lockfile, runs the test, finds no lockfiles, and unmounts the pseudo-filesystems. Instance #1 resumes, runs the test, and also finds no lockfiles. It too decides to unmount the pseudo-filesystems. However, due to your locking at the start, only one of the two instances mounted the pseudo-filesystems, so only one of them should unmount.

Threaded programming is hard. Any time you introduce locks, you need to think very carefully about whether the locks actually lock out all the bad situations.

Looking at the second version: you removed some uses of sudo, but not others. The mount calls need to run as root, so the second script only works as root. That is fine, but if it is already guaranteed to be root, it doesn't need to sudo the chroot call.

In both versions, you mount, but never unmount, the proc and sys pseudo-filesystems.

Also, your lockfile test will malfunction if more than one lockfile exists. The glob expands first, then the shell complains that you cannot test for more than one file.
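A quick way to see what [ actually receives once the glob has expanded (two lockfiles present, names invented for illustration):

Code:

$ touch /tmp/1234.gntmnt /tmp/5678.gntmnt
$ echo [ ! -e /tmp/*.gntmnt ]
[ ! -e /tmp/1234.gntmnt /tmp/5678.gntmnt ]

Two filenames end up after -e, which is not a valid expression for test.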

It's not a lock that's needed, it's a counter, to determine how many instances of the script are running in the same chroot.
Eww ... recursion is hard when you have to do it all yourself.
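If you did want to go that route, the counter could live in a file and be updated under flock(1), roughly like this (a sketch, untested; the file locations are made up, and a real version would want one counter per chroot target):

Code:

#!/bin/bash
COUNT=/run/gentoo-chroot.count        # made-up location for the instance counter
TARGET=/mnt/gentoo

enter() {
    # Hold an exclusive lock while reading and updating the counter,
    # so two instances cannot race on the read-modify-write.
    exec 9>"$COUNT.lock"
    flock 9
    n=$(cat "$COUNT" 2>/dev/null || echo 0)
    if [ "$n" -eq 0 ]; then
        # First instance in: mount the pseudo-filesystems.
        mount -t proc proc "$TARGET/proc"
        mount --rbind /sys "$TARGET/sys"
        mount --rbind /dev "$TARGET/dev"
    fi
    echo $((n + 1)) > "$COUNT"
    flock -u 9
}

leave() {
    flock 9
    n=$(cat "$COUNT")
    echo $((n - 1)) > "$COUNT"
    if [ "$((n - 1))" -eq 0 ]; then
        # Last instance out: take the pseudo-filesystems away again.
        umount -R "$TARGET/dev" "$TARGET/sys" "$TARGET/proc"
    fi
    flock -u 9
}

enter
chroot "$TARGET" /bin/bash
leave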
