Murphy really got to me on Monday: it took over 12 hours to get a working kernel after I discovered a partitioning error - I ran out of room on / and couldn't do much of anything.

What really annoyed me was being unable to get a working kernel for 12+ hours until I dropped down to the vanilla-sources 2.6.32 kernel. Of course udev complained about that and refused to work, but at least I had a working kernel to boot the system. What a blasted PITA. Since the 2.6.32 kernel worked, I even considered downgrading to udev-171, which still works with the older kernels, but I went for the 3.0.68 kernel instead just to satisfy udev-197.

Then I encountered a damn blocker - sysvinit blocked any emerge -u system until I uninstalled it. I have to wonder why Gentoo depends on sysvinit as part of the base system when udev does most of what sysvinit does. For that matter, why depend on udev at all for a working system? Yes, I agree it belongs on live/install discs, but I was one of the many who ran w/o udev for quite a while and had no problems once the system was set up.

Once I solved the blocker issue, I managed to get a working base system and am now adding the rest of my packages, so things are finally back to normal for Gentoo. Once I get everything installed and stable, I'm locking the system down for 12-18 months before I do any more upgrades. Yes, I don't upgrade often - I prefer a stable working system over the bleeding edge (the reason I haven't upgraded my video card yet - Radeon 5670), and Gentoo offers me the ability to decide on dependencies and reduce the cruft installed. That helps security no end by not having many of the vulnerable packages installed.

Why didn't you just completely wipe your distfiles? A sure and easy way to free up a load of space.

Or indeed move it or /usr/portage onto a different partition?

Other handy tip is to build in a memory mounted /var/tmp/portage. Much faster builds!
_________________
"The problem with quotes on the internet is that it is difficult to determine whether or not they are genuine." -- Abraham Lincoln
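For anyone wanting to try the tmpfs tip, a typical fstab entry looks like this (the 6G size is my guess for a box with RAM to spare; the portage user/group and mode are the usual Gentoo defaults - adjust to taste):

```
# /etc/fstab fragment - mount Portage's build directory in RAM
tmpfs   /var/tmp/portage   tmpfs   size=6G,uid=portage,gid=portage,mode=0775,noatime   0 0
```

A `mount /var/tmp/portage` afterwards activates it without a reboot.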

Other handy tip is to build in a memory mounted /var/tmp/portage. Much faster builds!

Actually, if you have the RAM for this, the files will remain in the disc cache anyway, so while they will be written to disk using DMA, they will never be read back.
The time saved is therefore the DMA setup time and the memory bandwidth to carry out the RAM-to-disk transfer. This is very small compared to the build time.
Even though I know this, I still have my build space in a tmpfs :)
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

Other handy tip is to build in a memory mounted /var/tmp/portage. Much faster builds!

Actually, if you have the RAM for this, the files will remain in the disc cache anyway, so while they will be written to disk using DMA, they will never be read back.
The time saved is therefore the DMA setup time and the memory bandwidth to carry out the RAM-to-disk transfer. This is very small compared to the build time.
Even though I know this, I still have my build space in a tmpfs

Is that right? When I first converted, I did some tests, and it did seem to improve things by some margin (rebooting after each emerge!).
Since I have a SSD, I also use this as a means not to write to the SSD at all. I also run without a swap file for this reason.
I have lots of RAM!

A number of you suggested mounting portage on its own partition, and I'm way ahead of you - it has always been that way on my system. What bit me in the ass was that I didn't account for the space I'd normally give to a dedicated /usr partition (4GB), so when /usr filled up, / didn't have any space available, forcing me to repartition things. The solution was to double the space assigned to / to 8GB instead of the 4GB I'd originally given it, but to do that I had to remove the other partitions, since I didn't even have a graphical tool that could fix the problem, so a clean install was called for.

The annoying thing about the rebuild was having problems with the damn kernel builds. Yes, I had a backup of the kernel config to work from, but none of the kernels would finish booting for some reason - they'd start and simply hang, not sure why - and the only kernel I was finally able to get working was the 2.6.32 vanilla. Of course udev complained and refused to start (it wanted a newer kernel), and that's when I ran into the blasted blocker - emerge -upv system threw a block at sysvinit. God forbid I have to fight things at this point. At least I could unmerge sysvinit and solve the problem long enough to get the -e system rebuilt, then replace the blasted package. Once past this, I had to decide whether to move to a newer kernel or downgrade udev to the 171 series. It could have gone either way, but I decided to move to the 3.0.68 vanilla just to solve the udev issue.

Right now I'm using the LTS vanilla-sources-3.4.35 series and everything works nicely. My system is even creating the /dev/video0 node for my webcam, which it wouldn't do for any reason on the old build (gentoo-sources-3.7.9). Maybe something changed in udev that fixed the problem.

The kernel aggravation factor almost forced me to give up and reinstall Win7, but thankfully I persevered and now have a working system - sound is borked right now but I'll find a solution.

Why didn't you just completely wipe your distfiles? A sure and easy way to free up a load of space.

Or indeed move it or /usr/portage onto a different partition?

Other handy tip is to build in a memory mounted /var/tmp/portage. Much faster builds!

He could use ZFS. ZFS dynamically allocates space to datasets on demand, so you do not need to guess your future space requirements.

As for faster builds, ZFS ARC keeps frequently used data in memory, while letting less frequently used data go to disk. That basically gives you the same effect, but without the artificial space limit of a tmpfs.

Another trick to make things build faster is to do mkdir -p /etc/portage/env/sys-devel && echo 'GCC_MAKE_TARGET="profiledbootstrap"' > /etc/portage/env/sys-devel/gcc && emerge --oneshot gcc. That will build a PGO version of GCC that compiles software faster. The downside is that building a PGO version of GCC takes longer since MAKEOPTS cannot be used for the build process.
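For anyone who finds the one-liner cryptic, here it is split into its three steps. The `PORTAGE_CONFIGROOT` indirection and the `demo-root` default are my additions so you can stage the file in a scratch directory first; set `PORTAGE_CONFIGROOT=/` (or drop the indirection) to write to the real /etc/portage:

```shell
# Stage under ./demo-root by default; set PORTAGE_CONFIGROOT=/ to apply for real.
cfg="${PORTAGE_CONFIGROOT:-./demo-root}/etc/portage"

# 1. Per-package env files live under /etc/portage/env/<category>/<package>
mkdir -p "$cfg/env/sys-devel"

# 2. The toolchain eclass reads GCC_MAKE_TARGET; "profiledbootstrap" asks
#    for a profiled (PGO) bootstrap of gcc itself
echo 'GCC_MAKE_TARGET="profiledbootstrap"' > "$cfg/env/sys-devel/gcc"

# 3. Rebuild gcc once, without recording it in the world file
#    (guarded so the sketch degrades gracefully on a non-Gentoo box)
if command -v emerge >/dev/null; then
    emerge --oneshot sys-devel/gcc
fi
```

If any step fails, run them one at a time to see which one it is.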

hypnos: Fresh install - I didn't have any backups yet while the blasted kernels were fighting me about booting at all. Once I got past that point, Murphy bit me with the blocker. After I solved that, things finally started flowing and I managed to get the system working, but as I said, it was almost enough to make me throw in the towel and reinstall Windows, as that at least worked - PITA though it is.

Everyone, it looks as though Murphy bit me again during the damn install. Following the handbook, I used the -T small flag when formatting the /home and /storage partitions, and I now run into an error when trying to copy a 30GB file from backup (an ntfs-formatted external drive) to the system. The cp error is pretty indicative of the problem

Code:

failed to extend file

and it occurs just after the 10GB point. Not good, and all I can say is that when I checked the man page on tldp.org, the flag is undocumented, so somebody screwed up by putting that flag in the handbook. It should not be used due to the potential problems.
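For what it's worth, the usage types that -T selects are defined in /etc/mke2fs.conf rather than in the man page itself, and `small` typically selects a 1 KiB block size. One plausible culprit: with ext2/3's indirect-block addressing, 1 KiB blocks cap the maximum file size at roughly 16 GiB, so a 30GB copy was never going to finish. A back-of-the-envelope check (the addressing formula is standard ext2/3; the shell arithmetic is mine):

```shell
# ext2/3 addressing: 12 direct blocks, then single/double/triple indirect
# blocks, each indirect block holding (block_size / 4) 4-byte pointers.
b=1024                          # 1 KiB blocks, as -T small typically selects
p=$(( b / 4 ))                  # pointers per indirect block
blocks=$(( 12 + p + p*p + p*p*p ))
max_gib=$(( blocks * b / 1024 / 1024 / 1024 ))
echo "max file size with ${b}-byte blocks: ~${max_gib} GiB"
```

Reformatting with the default (4 KiB) block size lifts the limit to terabytes.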

Since Murphy saw fit to bite me again, I'm planning another wipe and clean install, but I'm cheating this time. I've got an 8GB flash drive that I was planning a Gentoo install on, and that's what I'll be doing from the working system (chroot is so damn nice). This means I can actually take the time to try out ryao's PGO tip. Since I'm using the flash drive, I'm thinking about giving the 4.7.1-r1 toolchain a try with the tip, as it may offer a major speed boost for the rest of the packages I'll be adding. Hopefully Murphy won't bite me in the ass this time, though a replacement flash drive isn't that expensive if that happens. What's going to be interesting is how much I can actually fit onto the damn thing (I have /var/tmp on a separate partition, so the world file isn't affected - looks like it's in /var/pkg), so the chroot idea will work about the same as when I'm installing using the minimal install disk.

EDIT:
Well, the tip from ryao doesn't work on my system for some reason, but gcc-4.7 sure makes a difference in build speed.

Murphy is already teasing me - I'm getting an exec format error when trying to chroot /media/gentoo /bin/bash in konsole. Looks like a problem with the chroot command (not sure what the issue is yet). Now I'll have to figure this problem out before I can install (consolekit/polkit are both installed and may be the problem).

EDIT:
Not sure what fixed it, but switching to fluxbox, reformatting the drive, and then extracting the tarball worked this time around. Just like Windows - a reboot was required. Oh well, it's at least working.

Last edited by FastTurtle on Wed Mar 20, 2013 4:24 pm; edited 1 time in total

Got a few questions about the pgo tip since it's not working on my system.

Running the command doesn't seem to do anything at all, not even the oneshot step is happening.

Can you provide a breakdown of what the command is supposed to do - I can follow just about anything once it's explained but the command is a bit cryptic for someone who doesn't tend to get into them much.

Another trick to make things build faster is to do mkdir -p /etc/portage/env/sys-devel && echo 'GCC_MAKE_TARGET="profiledbootstrap"' > /etc/portage/env/sys-devel/gcc && emerge --oneshot gcc. That will build a PGO version of GCC that compiles software faster. The downside is that building a PGO version of GCC takes longer since MAKEOPTS cannot be used for the build process.

Well, this is going a bit OT, but that is indeed a really nice tip, considering how incredibly few times I build gcc - usually no more than twice in a version's lifetime. I can't find any modern figures other than a roughly 7% speed increase in some 3.x version - still, this is the one package I would use PGO for.

FastTurtle wrote:

ryao:

Got a few questions about the pgo tip since it's not working on my system.

Running the command doesn't seem to do anything at all, not even the oneshot step is happening.

Can you provide a breakdown of what the command is supposed to do - I can follow just about anything once it's explained but the command is a bit cryptic for someone who doesn't tend to get into them much.

Split his commands up if it fails when input all at once.
Another method is:

Got a few questions about the pgo tip since it's not working on my system.

Running the command doesn't seem to do anything at all, not even the oneshot step is happening.

Can you provide a breakdown of what the command is supposed to do - I can follow just about anything once it's explained but the command is a bit cryptic for someone who doesn't tend to get into them much.

Sure. It makes a portage env file that is sourced by emerge when building gcc. It contains a variable telling the eclass (which the ebuild in this case simply wraps) to do a profiled build. Finally, rebuild GCC.

If you copied the entire command properly, it should not fail. Would you paste what you are seeing when this failure occurs?

Well, this is going a bit OT, but that is indeed a really nice tip, considering how incredibly few times I build gcc - usually no more than twice in a version's lifetime. I can't find any modern figures other than a roughly 7% speed increase in some 3.x version - still, this is the one package I would use PGO for.

Thanks for sharing. I think I can relate to that figure: I timed a libreoffice build a few days before the gcc 4.7.2 PGO rebuild, out of curiosity about the time taken, and timed it again after the rebuild since I still had the number in my head. The numbers are rough, but unless Google calc is wrong, my calculations give 9.52%, call it 10% - of course not 100% accurate, but I'd be confident saying the improvement was between the mentioned 7% and 10%.

Unfortunately I lost my PGO gcc bringing my box back from no-multilib a month ago, and I'd moved to llvm/clang long before that for all but as many packages as I have fingers, since it wipes the floor with my PGO gcc in terms of speed and binary sizes (I compared a before and after of /usr/bin from a backup).

3.9.8 works like a charm for me. I'd start building a config from scratch for that big a jump, and just keep up on it to find problems as they occur... even build a junker test VM (or better yet, get junk hardware duplicates) to test rebuilds and updates.

For me, building a new kernel takes about an hour. I don't use genkernel, as it was broken when I came into Gentoo for amd64. I use an external live CD that uses modules for everything, so I can quickly see what needs to be compiled into the kernel. What I have documented so far on kernel compiling is noted here....

You can trim that down to a few minutes with `make localmodconfig`, ccache, the right make options, and tmpfs; you don't need to move the kernel sources to tmpfs, just use the O= parameter to point the build at your tmpfs directory.
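A sketch of that routine (paths are the usual Gentoo ones; the directory guard is my addition so the snippet degrades gracefully on a box without kernel sources):

```shell
# Out-of-tree kernel build with the object files kept in tmpfs.
build=/tmp/kbuild               # assumes /tmp is (or contains) a tmpfs
mkdir -p "$build"

if [ -d /usr/src/linux ]; then
    cd /usr/src/linux
    # Keep only the options for modules currently loaded; pipe empty
    # answers to accept defaults for any new prompts
    yes '' | make O="$build" localmodconfig
    make O="$build" -j"$(nproc)"
    make O="$build" modules_install install
fi
```

The source tree stays untouched on disk; only the build products live in RAM.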

666threesixes666 wrote:

i dont use genkernel as it was broken when i came into gentoo for amd64.

Did you file a bug for that?

666threesixes666 wrote:

i use an external live cd that uses modules for everything so i can see what needs to be compiled in to the kernel quickly. what i have documented so far on kernel compiling is noted here....

Murphy really got to me on Monday: it took over 12 hours to get a working kernel after I discovered a partitioning error - I ran out of room on / and couldn't do much of anything.

What really annoyed me was being unable to get a working kernel for 12+ hours until I dropped down to the vanilla-sources 2.6.32 kernel. Of course udev complained about that and refused to work, but at least I had a working kernel to boot the system. What a blasted PITA. Since the 2.6.32 kernel worked, I even considered downgrading to udev-171, which still works with the older kernels, but I went for the 3.0.68 kernel instead just to satisfy udev-197.

Eh, did you try to just start from a genkernel instead? You can easily trim down from that; that way, it shouldn't take long to have a booting system.

Or is an actual kernel bug involved that keeps you from using any higher version at all? Could you file a bug for that at https://bugs.gentoo.org/?

FastTurtle wrote:

Then I encountered a damn blocker - sysvinit blocked any emerge -u system until I uninstalled it. I have to wonder why Gentoo depends on sysvinit as part of the base system when udev does most of what sysvinit does. For that matter, why depend on udev at all for a working system? Yes, I agree it belongs on live/install discs, but I was one of the many who ran w/o udev for quite a while and had no problems once the system was set up.

That is the default to avoid users having to deal with the tedious old /etc/conf.d/modules approach; also, selecting things as modules in the kernel gives some advantages over selecting them as built-in. But as with anything, YMMV...

FastTurtle wrote:

Once I solved the blocker issue, I managed to get a working base system and am now adding the rest of my packages, so things are finally back to normal for Gentoo. Once I get everything installed and stable, I'm locking the system down for 12-18 months before I do any more upgrades. Yes, I don't upgrade often - I prefer a stable working system over the bleeding edge (the reason I haven't upgraded my video card yet - Radeon 5670), and Gentoo offers me the ability to decide on dependencies and reduce the cruft installed. That helps security no end by not having many of the vulnerable packages installed.

Consider that old packages are a security risk as well. Besides that, upgrading within the stable keyword, in small steps (do 5 or 10 packages each time, then when you come back do `emerge --resume`), should keep your system up-to-date without much risk of big breakage over a long time. Often downgrading resolves things; it is also probably a good idea to keep backups, as well as a fail-safe approach to get your Gentoo system up and running fast again. As in, let's say it breaks now and refuses to boot in any way: how fast would you be able to rebuild your current state from a stage3? Building binary packages as you merge things can help with this, for example; but well, there's a lot more to consider putting back (configuration, user folders, ...)...
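The binpkg part of that boils down to one make.conf line. With it set, every package you merge is also saved as a binary package under PKGDIR, and after a stage3 restore `emerge -K <pkg>` can reinstall from those without recompiling:

```
# /etc/portage/make.conf - save a binary package of everything you merge
FEATURES="buildpkg"
# Optional: where the binpkgs go (shown here with its default value)
PKGDIR="/usr/portage/packages"
```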

Waiting multiple months between upgrades sets you up for breakage; upgrades go well when you follow rolling releases, but if you wait too long there are all sorts of hoops and incompatibilities you need to work through.

Last edited by TomWij on Wed Jul 17, 2013 7:07 pm; edited 2 times in total