2. As in my earlier comment, if you don't mind adding "non-persistent" loading of SFS beyond the original 6 SFS limit.

Testing this - it currently works partially:
a) Adding SFS beyond the maximum of 6 works. The script will create the necessary mountpoint in /initrd (e.g. /initrd/pup_ro10), but it uses the wrong loop device - instead of loop10, it uses loop2.
b) It still adds the SFS into SFSLIST (obviously, because this is how it's designed right now).
c) Removing it (within the same session as adding it) seems to work (success message), but actually fails - the SFS is still mounted. And because it's no longer in SFSLIST, sfs_load doesn't see it anymore.

Well, I know this is probably outside the original scope of your design when you started, so if you don't want to hear this, just tell me to shut up and I'll do so.

Just FYI (and a puzzle) - this works when busybox mount is used. When mount-FULL is used, it will bring down the system. I wonder what's the difference?

Interesting - even calling mount.aufs directly still kills the system. busybox mount must have a power we all don't know about.

EDIT: mount.aufs is just a wrapper which calls the full mount - and thus it kills the system. busybox mount makes the mount syscall directly.
_________________
Fatdog64, Slacko and Puppeee user. Puppy user since 2.13.
Contributed Fatdog64 packages thread.
Last edited by jamesbond on Fri 04 Feb 2011, 06:04; edited 1 time in total

Thanks to all testers.
But please test again with the new version, sfs_load-0.4.
Mainly improved loading SFS from CD with RAM mode.
It offers COPYtoHDD/RAM/NOCOPY options for the live CD without pupsave.
You can see how it works with the multilingual Wary-500m07.

It doesn't check the number of extra SFS. I am not sure what happens when it exceeds 6...

All my tests have been done on Fatdog64 511 with a frugal install and the pfix=noram option (/pup_rw is the save file, and /pup_ro2 is main.sfs on disk).

sfs load and unload ok. But there is a subtle issue with unloading sfs out of order - e.g. you have sfs1, sfs2, sfs3, sfs4, sfs5, sfs6 loaded (in that order) - it's ok if we unload sfs6, unload sfs5, and then load sfs51 and sfs61.
But if (assuming the original order) we try to unload sfs5 and then load sfs51, a problem happens --- unloading sfs5 frees up /dev/loop8, but when we try to load the next SFS, this free loop device is not detected - because /dev/loop9 is still in use (by sfs6). What happens after that:
- sfs51 will be loaded in /dev/loop2
- but mount point assigned to /initrd/pup_ro10
and all hell breaks loose after that (ie you can no longer unload sfs51). The problem fixes itself after a reboot.
This is particularly easy to spot when one is running at full capacity (6 SFS) but it will happen sooner or later no matter how many SFS are being loaded.
It's a bit complicated to explain but I hope you get the idea.

Solution: there are a few ways to solve this; the obvious one is probably to make a smarter function to find a free loop device - ie one that detects gaps (if loop5 is used and loop7 is used but loop6 is free, we can still use loop6 instead of using loop8).
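The gap-detection idea could be sketched like this (illustrative shell only - the function name and the losetup parsing are made up here, not the actual sfs_load code):

```shell
# Sketch only: a gap-aware picker for the smallest free loop number.
# first_free_loop takes the list of loop numbers currently in use and
# prints the smallest non-negative integer not in that list.
first_free_loop() {
  n=0
  while :; do
    free=yes
    for used in "$@"; do
      if [ "$n" -eq "$used" ]; then
        free=no
        break
      fi
    done
    if [ "$free" = yes ]; then
      echo "$n"
      return 0
    fi
    n=$((n+1))
  done
}

# Illustrative wiring (NOT the real script - losetup output format
# differs between busybox and util-linux, so the sed would need care):
#   USED=$(losetup -a 2>/dev/null | sed -n 's|^/dev/loop\([0-9]*\):.*|\1|p')
#   LOOPDEV=/dev/loop$(first_free_loop $USED)
```

With this, unloading sfs5 in the example above would make loop8 the smallest free number again, so the next load reuses it instead of falling back to loop2.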

2. Does this allow users to change the maximum number of loops available - so that users can choose how many SFSs are loaded on the fly?
(Very useful!)

No. The sfs_loader keeps compatibility with the traditional bootmanager and has the same limit.
Barry thinks too many layers in the unionfs slow down performance.

Has anyone ever gone about testing this? I'm using a P4 1.4 GHz laptop with 2 GB RAM running Lighthouse Pup and have ~25 SFS files (about 1.7 GB worth of SFS files) loaded, and I don't perceive any noticeable slowing of the system... granted, booting takes a few seconds more than when I have 1 or 2 loaded. On a slower system with less CPU and RAM I can understand the concept of it slowing things down, but how slow a CPU would you have to have for the filesystem to start slowing down the overall system?

Here is some code from the snapmerge script. Adding more layers makes it slower because each whiteout file needs to be checked on each layer. This script is already painfully slow, and that is the main reason I don't want to add more layers.
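To see why the cost grows with the number of layers, here is a toy sketch (not the real snapmerge code - the function name and paths are invented) that just counts the existence tests a whiteout scan would perform:

```shell
# Illustrative only: the shape of the per-whiteout scan that makes the
# merge slow. Every whiteout in the rw branch triggers an existence
# test against every read-only layer, so the cost is up to
# (whiteouts x layers) stat-style checks per merge (a hit can stop a
# scan early, but the worst case is the full product).
count_whiteout_checks() {  # $1 = rw branch dir; remaining args = ro layer dirs
  RW="$1"; shift
  CHECKS=0
  for WH in $(cd "$RW" && find . -name '.wh.*'); do
    for LAYER in "$@"; do
      CHECKS=$((CHECKS+1))
      # the real code would strip the ".wh." prefix here and test
      # whether the hidden file exists inside $LAYER
    done
  done
  echo "$CHECKS"
}
```

So going from 6 layers to 12 roughly doubles this part of the work, on top of the copy-down itself.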

I suppose it's worth experimenting to see how much difference it makes.

Here is some code from the snapmerge script. Adding more layers makes it slower because each whiteout file needs to be checked on each layer. This script is already painfully slow

Yes, jemimah. It is so slow.

Code:

#also need to save the whiteout file to block all lower layers

I wonder why we need to check them. Why not unconditionally copy all the files in pup_rw...?
I also wonder what the rc.update does...?
_________________
Downloads for Puppy Linux http://shino.pos.to/linux/downloads.html
Last edited by shinobar on Sat 05 Feb 2011, 01:49; edited 1 time in total

UPDATE: 5 Feb 2011 v0.5: fixed the SFS being removed from the list even when unload failed; search for a smaller number if pup_roN is not available (thanks to jamesbond)

I wonder why we need to check them. Why not unconditionally copy all the files in pup_rw...?
I also wonder what the rc.update does...?

Say I create a new file and then delete it, so a whiteout file gets saved to the save file. Then later I add an SFS containing a file of the same name. The file will not appear because the whiteout file is there blocking it. I believe there is code in the init script to check for this condition and delete the interfering whiteouts when you add an SFS, but I know from experience that even that doesn't always work.
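That check could look roughly like this (a hypothetical helper - the function name and path handling are illustrative, not the actual init-script code; aufs marks a deletion in the writable branch with a ".wh."-prefixed entry):

```shell
# List whiteouts in the rw branch that would hide files present in a
# newly added layer. $1 = rw branch dir, $2 = new SFS layer dir.
# Prints the whiteout paths (relative to the rw branch) that interfere.
find_interfering_whiteouts() {
  (cd "$1" && find . -name '.wh.*') | while read -r WH; do
    # strip the ".wh." prefix to get the path the whiteout hides
    REAL=$(echo "$WH" | sed 's|\.wh\.\([^/]*\)$|\1|')
    [ -e "$2/$REAL" ] && echo "$WH"
  done
}
```

The init script would then delete each reported whiteout so the file from the new SFS becomes visible again.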

But that's an interesting thought - maybe the whiteout checking code in snapmerge is redundant and can be removed. However, it may be an error condition in AUFS to have a whiteout file with no file below it. I know for sure UnionFS is really picky about that, but I think AUFS is more tolerant.

However, I think the real bottleneck in the script is checking for free space in the save file for every single file copied down. That could be omitted in the case where your save file has more free space than the size of the files in RAM - but otherwise I think you need to do it.
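That one-shot test could look like this (the mountpoints and function name here are hypothetical, just to show the shape of the comparison):

```shell
# One up-front comparison deciding whether the per-file free-space test
# can be skipped for the whole merge. $1 = tmpfs layer dir (data waiting
# to be copied down), $2 = a path on the pupsave filesystem.
# Prints "skip" when the save file clearly has room for everything.
can_skip_per_file_check() {
  USED_KB=$(du -sk "$1" | cut -f1)               # worst case: all of it copies down
  FREE_KB=$(df -k "$2" | awk 'NR==2 {print $4}') # available KB on the pupsave
  if [ "$FREE_KB" -gt "$USED_KB" ]; then
    echo skip
  else
    echo check
  fi
}
```

In the common case (save file mostly empty) this replaces thousands of per-file df calls with one.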

I think the real bottleneck in the script is checking for free space in the save file for every single file copied down. That could be omitted in the case where your save file has more free space than the size of the files in RAM - but otherwise I think you need to do it.

Would it not be enough to check that the free space is larger than the SFS to be loaded on the fly?
(At least twice as big as the SFS file, for example...)
_________________
Akita Linux, VLC-GTK, Pup Search, Pup File Search

Been thinking about it too ... I'm comparing the situation that requires snapmergepuppy and the one where /pup_rw is mounted directly on the pupsave file. In that case, no management of whiteout files is done (as shinobar said) - and yet things work correctly.

In the specific PUPMODE where merge script is required, these are the conditions:
a) there are, effectively, two pupsave files - the tmpfs layer, and the real pupsave (mounted ro by aufs)
b) we want to create the impression that these two pupsaves work as one
c) we don't want to duplicate items from pupsave to tmpfs
d) optionally, tmpfs and pupsave are allowed to have different sizes

a) & b) are rather easy to accomplish; it's c) & d) which cause the most headache and the need for the merge script. Actually, c) is also the cause of a problem if your real pupsave file is almost full yet the tmpfs is empty (ie fresh boot): one can keep adding things without knowing that one cannot save the stuff anymore. Kinda like vmware thin provisioning, but without enough backing storage.

If it were only a) & b) - easy - just load the pupsave into tmpfs at start, and then rsync everything back to the pupsave during shutdown (or during merge). The real pupsave wouldn't even need to be part of the branch.

But we need to do c) and d) since that's the agreed design criteria for now. Based on the above, I think the only check needed is as follows, for a combination of a "real file" and its corresponding whiteout file:

4. real file exists in tmpfs, whiteout doesn't exist in pupsave
Cause ==> new file created in this session
Action ==> copy file from tmpfs to pupsave,
Then delete the real file in tmpfs.

5. real file exists in tmpfs, real file also exists in pupsave
Cause ==> file is updated in this session
Action ==> copy file from tmpfs to pupsave,
Then delete the real file in tmpfs.

Of course when I say "file" it also applies to directories.
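The action shared by both cases could be sketched like this (plain directories stand in for the real branch mountpoints, and merge_one is a made-up name, not actual snapmergepuppy code):

```shell
# Merge one path from the tmpfs layer into the pupsave, then drop the
# tmpfs copy. $1 = tmpfs root, $2 = pupsave root, $3 = relative path.
# cp -a preserves ownership, permissions and timestamps, and also
# handles the "file is really a directory" case recursively.
merge_one() {
  SRC="$1/$3"
  DST="$2/$3"
  [ -e "$SRC" ] || return 0          # nothing to merge
  mkdir -p "$(dirname "$DST")"       # make sure the parent path exists
  cp -a "$SRC" "$DST" && rm -rf "$SRC"
}
```

Both case 4 and case 5 reduce to this one call; the only difference is whether $DST already existed.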

I think that should handle 90% of the cases. We skip the corner case of "save the whiteout file only if a lower-layer SFS has the real file" - I don't really see why this is necessary.

If the slowness comes from checking all those files in the SFS layers, then by dealing only with tmpfs and pupsave, this delay should be greatly reduced. If it's not, then the above may not help. In fact, I'm doubting the need to have c) and d) in the first place ... I mean, if you have that very important big file you need to save, you can always save it in /mnt/home (ie the real storage).

Ok, I'm off - jemimah we can start another thread on this if you want to.
