has anyone heard about the Gigabyte i-RAM? i'm wondering if it will be supported by existing linux drivers (i don't see why not... just uses a SATA hdd interface), but if so, it could very well make these issues irrelevant. that is, assuming you have the money for it and 4 sticks of RAM, of course.

Yeah, I was thinking about that awhile ago. Supposedly they'll get support for 4GB by the time they launch. I'd mount /usr (as much as possible, anyway) and /var in there. Ah... visions of nearly instantaneous emerge --sync..._________________Who needs reincarnation when you've got parallel universes?

yeah, definitely. though i'm not so concerned about portage, as that's something i can run in the background while i'm doing other things. otherwise yeah... it will be very nice to get away from head latency and finally make full use of that 150MB/s pipe. _________________Sheepdog
Why Risk It? | Samba Howto

Based on some comments here, I've been working on some init scripts to mount only certain libraries in memory. I'm using a bash script (formerly a Perl script, but I wanted it to be able to run at boot time, even if /usr wasn't mounted) that runs ldd on any given files, follows symlinks, and automatically copies the appropriate libraries to ramdisks (one ramdisk for /usr/lib and one for /usr/bin). Then I use unionfs to mount each ramdisk together with its original directory.

I'm actually not terribly impressed with the load-time decrease on the one computer I've tried, but I think it's just because the hard drive is fast. (I can cat about 50MB worth of libraries to /dev/null in about a second and a half, if memory serves.) However, those of you who see 7-second improvements from your first boot of Firefox to the next are probably still very interested.

Then again, you might not even need to mount the libraries on a RAM disk. It might just be enough to read the libraries from disk (cat /usr/lib/whatever > /dev/null) so that they're stored in the cache. This may have much the same effect as starting Firefox (or whatever) once and then closing it. It would also avoid the dilemma of how to handle writes to /usr/lib (like when you update your system); my plan was to offer two runlevels: one that used ramdisks and had /usr/lib mounted read-only, and one that did not use ramdisks and had /usr/lib mounted read-write. If the performance gains are roughly equal, it would be much simpler not to have to worry about this.
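A minimal cache-warming sketch of that idea (the directory list here is just a guess; trim it to the libraries you actually care about):

```shell
#!/bin/sh
# Warm the page cache by reading shared libraries once, so the first
# launch of an app doesn't pay the disk-seek cost.  Directories are
# guarded with -d, so missing ones are simply skipped.
warmed=0
for dir in /lib /usr/lib /usr/lib64; do
    [ -d "$dir" ] || continue
    # cat every shared object to /dev/null; the bytes stay cached
    find "$dir" -maxdepth 1 -type f -name '*.so*' -exec cat {} + > /dev/null 2>&1
    warmed=$((warmed + 1))
done
echo "scanned $warmed lib dirs"
```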

Anyway, I'll post the script once I get home. It's not quite finished, especially since I'm not sure it's worth it for the computer I tried it on, but the part that lists all libraries required by the given programs seems to be working fairly well.

# In theory, readlink might not be available because /usr/bin might not be mounted.
# My understanding is that we can assume that /bin is mounted (likewise for /lib, since it has libraries that bash requires).
# Actually, this happens after localmount, so I don't think that's a concern.
# Still, I found that busybox readlink is faster than standard readlink, so I'm not going to mess with it for now.

# We need busybox for this, but it appears to be pulled in by 'emerge system', so we should be able to count on it.
# Interestingly, this even beats the normal readlink -f for speed.
function readlink()
{
    # quote "$1" so paths with spaces don't break
    busybox readlink -f "$1"
}

# Looking at the output, I get the impression that recursion is not actually necessary.
# I'm not sure, but the script is slower if we leave it in.
# To "reactivate" recursion, use the commented-out for() below instead of the ORIG_COUNT line and the second for().
#for (( I = 0 ; I < ${#ALL_LIBS[*]} ; I++ )); do
ORIG_COUNT=${#ALL_LIBS[*]}
for (( I = 0 ; I < $ORIG_COUNT ; I++ )); do
CURRENT_LIB=${ALL_LIBS[$I]}

In theory, LIBS, MERGEDIRS, RAMDIRS, and HDDIRS would eventually be moved to /etc/conf.d/ramlibs.

You need busybox installed. I think it's part of system now, so you should probably have it. If not, emerge it.
You also need unionfs, which you can get from portage.

There's a circular-dependency complaint at shutdown.

Here's the most important function:
listLibs file1 [file2 [file3 ...]]
Print a list of libraries (with duplicates removed and symlinks fully resolved) required by the given file(s). (This includes the file itself. Note that this won't work for scripts, e.g., rip. That's why my LIBS contains '/usr/bin/oggenc /usr/bin/cdparanoia /usr/bin/eject')
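A rough standalone sketch of what listLibs does (my reconstruction, not the actual script; it uses plain ldd and readlink -f and skips the busybox detail from above):

```shell
#!/bin/sh
# listLibs: print the given files plus every shared library they need,
# symlinks fully resolved, duplicates removed.  Sketch only -- a real
# script should also handle ldd's "statically linked" case and errors.
listLibs()
{
    for f in "$@"; do
        readlink -f "$f"
        # ldd lines look like "libfoo.so => /path/libfoo.so (0x...)"
        ldd "$f" 2>/dev/null | awk '/=> \// { print $3 }'
    done | while read -r lib; do
        readlink -f "$lib"
    done | sort -u
}

listLibs /bin/sh
```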

Note that the current version of the script is for AMD64 (hence lib64 instead of lib).

As currently written, the script expects the "real" /usr/lib64 to be mounted at /usr/lib64_hd. (You would probably have to move the directories at the console and edit /etc/fstab.) It also expects /usr/lib64 and /usr/lib64_ram directories to exist (and, presumably, to be empty). (/usr/bin is similar, with /usr/bin_hd and /usr/bin_ram.) The script scans the files in LIBS, and any files it lists from /usr/lib64 and /usr/bin are copied from the appropriate _hd directory to the appropriate _ram directory. Then each _ram/_hd pair of directories is mounted together with unionfs, e.g., /usr/lib64 becomes the union of /usr/lib64_ram and /usr/lib64_hd. This way, the system looks on the ramdrive first for any libraries.
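For reference, the union step would look something like this in /etc/fstab (a sketch based on the unionfs 1.x dirs= branch syntax; the exact option spelling may differ between unionfs versions):

```
# Hypothetical /etc/fstab entries: the leftmost branch is searched
# first, so lookups hit the RAM copy before the hard drive copy.
# The union itself is mounted read-only, as described above.
unionfs  /usr/lib64  unionfs  ro,dirs=/usr/lib64_ram=rw:/usr/lib64_hd=ro  0 0
unionfs  /usr/bin    unionfs  ro,dirs=/usr/bin_ram=rw:/usr/bin_hd=ro      0 0
```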

Currently /usr/bin and /usr/lib64 are mounted read-only. Mounting the RAM drive read/write is a bad move because any changes you make to it (probably with portage) will be wiped on reboot. (It might also fill up quickly if you emerged something large, like Eclipse.) You could write a script to copy /usr/lib64_ram to /usr/lib64_hd at shutdown time, but if the power goes out or the computer locks up, you might have problems. Mounting the hard drive read/write is also bad, because any changes made by portage, etc. to a library that is also present on the RAM drive will not show up until you reboot, which is odd behavior.

My plan had been to eventually create another init script that just mounted a read/write copy of /usr/lib64_hd at /usr/lib64 (and similarly, /usr/bin_hd at /usr/bin). I would put it in a new runlevel, and then I could choose at bootup whether I wanted the fast, unmodifiable libs or the slower, writeable libs. In theory, you could even switch runlevels once you had started, though it would involve closing down X and other programs, since they would be using the libraries.

Oh, and a warning: Don't try this with /bin or /lib. The system needs to be able to run bash at startup. Bash is located in /bin, and it requires libraries in /lib. I'd go so far as to say that there is almost certainly a way to get around this, but I didn't think it would be worth it, as most of the files in those directories are small, anyway. Your situation may vary.

Let me know if you have any questions. I'm sure I missed some important things. Finally, let me reiterate that this is NOT a finished script. If you want to use it, you'll have to play around with it, and it's easy to make your system unbootable along the way.

I've seen several posts concerning this issue: what happens when you emerge something and write to the ramdisk? I am uncomfortable with the idea of just updating the HD with the contents of the ramdisk on shutdown. If the system fails and for some reason does not shut down properly, you lose the results of your emerges.

It seems to me that the following would be really nice to have (and I don't know how much of it already exists):

When something writes to /usr/lib or /lib or whatever you have mounted in ramdisk, it writes to the hard drive as well as the ramdisk. When something reads from any of these locations, it reads from ramdisk. How would such a thing be implemented?

Quote:

I've seen several posts concerning this issue: what happens when you emerge something and write to the ramdisk? I am uncomfortable with the idea of just updating the HD with the contents of the ramdisk on shutdown. If the system fails and for some reason does not shut down properly, you lose the results of your emerges.

It seems to me that the following would be really nice to have (and I don't know how much of it already exists):

When something writes to /usr/lib or /lib or whatever you have mounted in ramdisk, it writes to the hard drive as well as the ramdisk. When something reads from any of these locations, it reads from ramdisk. How would such a thing be implemented?

As I understand it, this isn't possible with unionfs, though I may be overlooking something. See "Writing to Union" at the unionfs website:

Quote:

...all changes are stored in leftmost branch.

In other words, just the ramdisk, or just the hard drive, but not both.

This is why I had planned to mount the ramdisk/hard drive union read-only and offer a separate, hard-drive-only mode for running portage, etc. (Well, I was also worried that the ramdisk would fill up if I emerged something like Eclipse:)

i haven't messed with Ramdisk, but i have a few things set up in a tmpfs "partition." what i do is have it untar my stuff into the tmpfs on startup and set my mount points. on shutdown, it retars all the contents (in case something has changed, like i did an emerge). i've also made a backup tarball after a major emerge in case my computer dies before it can have a successful shutdown. i figure as long as i remember to do the backups or at least never have nasty system crashes, i'll be fine. _________________Sheepdog
Why Risk It? | Samba Howto
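The untar/retar cycle from the post above can be sketched like this (using a throwaway directory in place of a real tmpfs mount so it runs without root; all the paths and the backup.tar name are made up):

```shell
#!/bin/sh
# Demonstrate the boot/shutdown tar cycle with a scratch directory
# standing in for tmpfs.  On a real system you'd first do
# "mount -t tmpfs tmpfs $FAST" and run the two halves from init scripts.
set -e
WORK=$(mktemp -d)
FAST="$WORK/fast"          # pretend tmpfs mount point
ARCHIVE="$WORK/backup.tar" # the tarball that persists across reboots

# make a sample payload and an initial archive (normally done once)
mkdir -p "$WORK/seed"
echo "hello" > "$WORK/seed/file.txt"
tar -C "$WORK/seed" -cf "$ARCHIVE" .

# "boot": unpack the archive into the fast storage
mkdir -p "$FAST"
tar -C "$FAST" -xf "$ARCHIVE"

# ... system runs, files may change (e.g. an emerge) ...
echo "changed" >> "$FAST/file.txt"

# "shutdown": retar the contents so changes survive the reboot
tar -C "$FAST" -cf "$ARCHIVE" .
```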

Quote:

I've seen several posts concerning this issue: what happens when you emerge something and write to the ramdisk? I am uncomfortable with the idea of just updating the HD with the contents of the ramdisk on shutdown. If the system fails and for some reason does not shut down properly, you lose the results of your emerges.

It seems to me that the following would be really nice to have (and I don't know how much of it already exists):

When something writes to /usr/lib or /lib or whatever you have mounted in ramdisk, it writes to the hard drive as well as the ramdisk. When something reads from any of these locations, it reads from ramdisk. How would such a thing be implemented?

Could one perhaps create a raid1-array with a partition and a ramdisk?_________________There are 10 kinds of people in this world: Those who understand binary, and those who don't.

Some ppl were looking into the Gigabyte ramdisk, but I don't see the real benefit since I/O on SATA-I is capped at 150MB/s. The other, bigger, problem with solid state disks which use IDE or SATA is that you're not getting the 'R' benefit of RAM. All OS's handle IDE, SCSI, and SATA with sectors on a disk in mind. Not to mention, all your fav. file systems: reiser, xfs, jfs, ext*, etc. are also built around the fact that fragmentation slows access because a DISK has to spin and a r/w head has to move. There's no such thing on CMOS'es. But gigabyte and company will never make a ramdisk that ISN'T IDE or SATA b/c M$ Winblows can't handle what linux can: ramfs.

Anyway, if someone has a Xeon MB with a PCI-X slot, I'd take a look into this card. Imagine a 16GB filesystem where every I/O's seek time is in the hundreds of nanoseconds, not single milliseconds (about 20 to 25 times faster)! And a throughput of 533MB/s! Not to mention the joy of seeing your PC boot as quickly as a Palm Pilot, but I digress.

Is it not possible just to take advantage of the linux disk caching by doing a cat /lib/* > /dev/null? (Or something slightly smarter with find or a for loop with test -f && …)
This has no fixed memory usage, no problems with improper shutdowns, and is all-in-all simpler. Of course, it doesn't have the security advantages of RAM root partitions.

Quote:

Is it not possible just to take advantage of the linux disk caching by doing a cat /lib/* > /dev/null? (Or something slightly smarter with find or a for loop with test -f && …)
This has no fixed memory usage, no problems with improper shutdowns, and is all-in-all simpler. Of course, it doesn't have the security advantages of RAM root partitions.

If your tarballs are correctly updated, I can't see how this could cause trouble. And I would say that yes, it can speed things up a bit, but you won't feel it as much as when your libs and progs are on the disk. (And I personally saw very little difference when it is on the HD.)

Quote:

Is it not possible just to take advantage of the linux disk caching by doing a cat /lib/* > /dev/null? (Or something slightly smarter with find or a for loop with test -f && …)
This has no fixed memory usage, no problems with improper shutdowns, and is all-in-all simpler. Of course, it doesn't have the security advantages of RAM root partitions.

Haven't tried it yet, but it seems like this sort of option is much better. Let the kernel itself deal with moving things in and out of the page cache -- it's very good at it, after all. All you need to do is let it know what you want preloaded. This seems safer and less hackish than an involved tar/mount script. It's also probably faster, though I have no proof yet.

So this essentially moves the hard disk load times from when you run an application to boot time and ensures that stuff is permanently cached. It also adds time-consuming retarballing when you emerge something into the tmpfs or shut down.

Another way of doing this could be to just cat /usr/bin/blah > /dev/null all the libraries and binaries that you want cached on boot. Then they will be cached and, if you have lots of RAM, will stay cached.

I find that with 512MB the kernel caching works just fine, and loading OpenOffice or Firefox for the second time is instantaneous. Sure, I lose the cache if I need to use that memory, but that's better than having the memory locked into a tmpfs and having to use swap.

It seems to be more trouble than it's worth to do this, especially on a laptop which is shut down frequently._________________For since in the wisdom of God the world by its wisdom did not know God, God was pleased to save those who believe by the foolishness of preaching.

Quote:

Haven't tried it yet, but it seems like this sort of option is much better. Let the kernel itself deal with moving things in and out of the page cache -- it's very good at it, after all. All you need to do is let it know what you want preloaded. This seems safer and less hackish than an involved tar/mount script. It's also probably faster, though I have no proof yet.

how do you get it configured? i couldn't find relevant documentation on this one.