After a full rebuild of the system, or to be precise after
1. emerge -NuDbav @system
2. emerge -NuDebkav @world
it seems I have some problems with glibc. When some programs start, they cause a segmentation fault in glibc, which can be seen in dmesg.
For example:

It is not only deluge but also a couple of GNOME apps that need gconfd-2 to be running; gconfd-2 itself fails with the same string in dmesg, so around ten applications are unusable. I thought it could be a broken version of gconfd-2 or libgnome, tried previous versions and so on, but nothing helped. I even tried an unstable version of glibc, but it did nothing (with some hacking I got the stable glibc back on my system).

Globally I use ‘amd64’ as a keyword. ‘~amd64’ is used only for a handful of packages (such as mplayer and ffmpeg) that I know well.
As I recall, I had issues with glibc earlier: there were segmentation faults in 2.11 and 2.12, but they were never the cause of such a disaster. (Yes, since I cannot use torrents, it’s a disaster.)

Yes, I still have that problem even with glibc 2.14.1-r3. I’ve also tried revdep-rebuild, masking prelink, and disabling distcc and ccache; nothing helped.
Of course, I rebuilt all of @system and @world every time after changing the configuration.

It is an almost-new, high-end machine, but I still use my old hard drive. It isn’t even two years old, smartctl reports no reallocated blocks, and I also checked the filesystems (ext3) with fsck; everything seems to be fine.

eccerr0r wrote:

Are all the segfaults happening at the same address, or at random addresses?

As for the addresses, I did a test, running deluge four times; you can see the result below.
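(If anyone wants to compare the fault addresses across runs the same way, a quick tally from dmesg works. The sample lines below are invented for illustration, assuming the usual kernel "segfault at <addr>" message format; on a real system you would pipe `dmesg | grep segfault` in instead.)

```shell
# Tally the "segfault at <addr>" addresses from dmesg-style lines so
# repeated runs can be compared: identical addresses every time suggests
# one bad spot, random ones point elsewhere.
sample='deluge[3301]: segfault at 24 ip 00007f21 sp 00007fff error 4 in libc-2.14.1.so
deluge[3315]: segfault at 24 ip 00007f21 sp 00007ffe error 4 in libc-2.14.1.so'
printf '%s\n' "$sample" |
  awk '{for (i = 1; i < NF; i++) if ($i == "at") { print $(i+1); break }}' |
  sort | uniq -c
```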

You might want to pull glibc from tinderbox or livecd and use that to rule out libc build issues. Use the livecd to mess with it so you don't wreck your machine mid hack.

I still would have to lean towards faulty hardware at this point, I don't see way too many libc issues, then again I don't use deluge, and I've not seen that many libc segfaults on my x86_64 or any of my machines...

eccerr0r wrote:

You might want to pull glibc from tinderbox or livecd and use that to rule out libc build issues. Use the livecd to mess with it so you don't wreck your machine mid hack.

Okay, let’s say I have a livecd; how do I properly pull glibc from it? Or do you suggest building glibc on the livecd and placing it into the chrooted filesystem? Wouldn’t it be easier to just save /etc and /var/lib/portage/world, unpack a fresh stage3 onto cleaned partitions, then move /etc and /var/lib/portage/world back into place, and run emerge -NuDave @world?
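For what it's worth, the save-and-restore half of that plan can be sketched as a couple of shell functions (the paths are the ones from my plan above; the tar invocation is just one way to do it, and the root is parameterized so the steps can be rehearsed on a scratch directory first):

```shell
# Save /etc and the world file from the old root, so they can be restored
# after unpacking a fresh stage3 onto the cleaned partitions.
backup_config() {
    root=$1 dest=$2
    tar -C "$root" -cpf "$dest/config-backup.tar" etc var/lib/portage/world
}

# Unpack the saved config into the freshly installed root.
restore_config() {
    root=$1 src=$2
    tar -C "$root" -xpf "$src/config-backup.tar"
}

# After restoring into the fresh stage3, rebuild everything:
#   emerge -e @world
```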

eccerr0r wrote:

I still would have to lean towards faulty hardware at this point, I don't see way too many libc issues, then again I don't use deluge, and I've not seen that many libc segfaults on my x86_64 or any of my machines...

As I wrote before, it is not only deluge but a bunch of apps that cannot be run. The system feels stable except for glibc. I haven’t closed Firefox (with 80-90 tabs open) for about four days now, since the last compilation, and it is fine too. I don’t know what else could be proof of stability if this isn’t.

Yesterday I extracted a fresh stage3 image onto cleaned partitions and compiled the world, and the old problems were solved. But I got a new bug in exchange: this time python-2.7 causes segfaults. That’s a plague. And it is deluge again, though not the client binary as the first time.

But it is clear from the output that deluged tries to use exactly the 2.7 version. I chose 2.7 as the primary Python, but that didn’t do the trick.
Another strange thing is that gnome-screensaver fails at locking my screen.

Code:

$ gnome-screensaver
** (gnome-screensaver:3563): WARNING **: Couldn't get presence status: The name org.gnome.SessionManager was not provided by any .service files
$ gnome-screensaver-command -q
The screensaver is inactive
The screensaver is not inhibited
$ gnome-screensaver-command --lock

# The screen fades to black and there is nothing but the mouse cursor, which becomes an I-beam on its right side. I cannot log in or kill that thing, and `ps aux` no longer lists gnome-screensaver.

Also, remove all traces of distcc.
Perform a full rebuild once you have done this.
I ran into a similar problem with segfaults, and it all boiled down to the fact that I had built my system using distcc.
It rendered GNOME 3.2 useless, with a segfault appearing every time in libmozjs.
See the following:
https://bugs.gentoo.org/show_bug.cgi?id=388521

Yes, make sure you remove all esoteric CFLAGS from building system libraries, just to make sure you're not exposing a gcc bug. This has shown up time and time again and has frustrated many Gentoo users over and over. No other Linux distribution allows these build flags; they all use conservative, tried-and-true settings, so that nobody hits odd problems.

So I reiterate: CFLAGS="-O2", and possibly "-pipe", is basically all you should use. -O2 has been well tested. Even -Os (to generate smaller, but possibly slower, binaries) has been a bug point in the past. Using no optimization options (CFLAGS="") should work just fine too.
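In make.conf terms, the conservative setup being recommended is just (a sketch, not anyone's actual file):

```shell
# /etc/portage/make.conf (fragment) - the conservative flags recommended above
CFLAGS="-O2 -pipe"
CXXFLAGS="${CFLAGS}"
```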

The reason I keep repeating that my machines do not have this problem is that it's clearly a problem specific to your installation. I avoid special CFLAGS precisely because Gentoo users have seen funny things happen when building with them; this is actually one of the repeatable problems that will break many people's machines, as opposed to one-off problems that point to hardware issues. (BTW, mythtv crashes mostly when I receive a broken ATSC signal off the air, so there is all this noise in the signal; I'm not sure how well mythtv should handle very badly received/stored signal quality.)

When things work until an update, does it not sound like it was some setting you introduced, if nobody else is having problems? The livecd/installer CD was built with conservative settings, as are most other distributions.

Really, tailored CFLAGS should only be used on specific binaries, namely the application binaries that consume most of the CPU time. libc and the kernel should use settings that are known to work for them; though important, they should not be using much CPU time, and by Amdahl's law optimizing them clearly has diminishing returns. This is not to say you shouldn't at least try tailored flags, say, for Deluge or even Python... but all bets are off.
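One way to scope tailored flags to a single CPU-heavy package, while the rest of the system stays on the conservative defaults, is Portage's package.env mechanism; the file name and the package atom below are only illustrative:

```shell
# /etc/portage/env/hot-app.conf - more aggressive flags for one package only
CFLAGS="-O2 -pipe -march=native"
CXXFLAGS="${CFLAGS}"

# Then, in /etc/portage/package.env, apply that env file to the one
# package worth optimizing (illustrative atom):
#   net-p2p/deluge hot-app.conf
```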

I also agree with thistled; I've had problems with distcc in the past as well. Most of the time it's due to my boxes having different versions of distcc, but sometimes one of my less reliable machines miscompiles an object file and thus generates a broken binary, making the problem very hard to debug. So agreed, remove distcc to debug (though this is probably not the case here, as the OP indicates he only has one machine).

I let the current gcc code decide what cpu-specific compiler options are most
useful on average on the current cpu.

("-fno-strict-aliasing" relates to optimizations that are only safe if the code
is structured without some C/C++ variable references that are entirely legal
according to the language standards but that make those optimizations
unsafe to use. "-fno-strict-aliasing" tells gcc not to bother with those optimizations
at all, instead of optimizing the memory references in that code and issuing a
warning.

"-fpermissive" allows some C++ usages to compile that only triggered warnings
before the 1998 C++ standard but were defined as errors after that and would
prevent the compiler from compiling the package. My feeling is that the need
for "-fpermissive" is an implicit FIX ME, and developers and maintainers will
get around to those chunks of code that need it eventually, so that the program
or library compiles without "-fpermissive" in CXXFLAGS.)

What is it with "-fomit-frame-pointer"? It made sense when I was running Linux on
a 486 and everything was slow; saving the cycles and icache used by a frame
pointer was a definite win. On today's CPUs, having the frame pointer available
to stack tracers seems like a bigger win to me than saving a few CPU cycles or
having a few fewer instructions in icache for an executable. (I'm not running a data
center or HPC system, after all; I can afford a little runtime performance to find
out what went wrong faster when things break.) Is there some feature of today's
gcc code and stack tracers that makes the frame pointer irrelevant for backtraces?
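In make.conf terms, the trade-off being described is just whether to add the override (a sketch; as far as I know, -O2 already implies -fomit-frame-pointer on amd64, so the extra flag turns it back off):

```shell
# /etc/portage/make.conf (fragment): keep the frame pointer for usable
# backtraces, trading away one register and a little icache
CFLAGS="-O2 -pipe -fno-omit-frame-pointer"
CXXFLAGS="${CFLAGS}"
```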

My first thought for the OP's problem was ram. If the problem was power supply,
it seems like the memory addresses where the problem shows up would change
more randomly. It could be mis-compilation, but a stuck bit that does not change
on a write is always in the same place in the same dram chip. Reinstalling/recompiling
significant parts of the system commonly changes where things are loaded into
memory, but the error will still consistently show up at a particular address after that.

A compiler bug can do that, but failed memory chips do it more often in my experience.

Well, we do have one limited commodity on even modern machines: cache. The more that can be fit in the cache the faster the machine will run.

However, for the most part, yes, I'd rather have frame pointers compiled in, except on embedded machines where space comes at a premium. I think this is one of the "safe" options that should not cause machines to crash...

I do have to make one recent observation: one of my x86 (32-bit) machines was perfectly stable, but after a kernel upgrade it's randomly crashing left and right. Swapping back to the old kernel/installation makes the crashes disappear. Still trying to figure out what's going wrong here... so far it clearly seems to be a software issue...

The same happened to me: as soon as I started using anything above the 3.2.12 series of kernels, I started to experience segfaults in apps.
Nautilus, nvidia-settings, and evolution.
So I am back to stable gentoo-sources-3.2.12 for the time being.

I do not think -fomit-frame-pointer causes crashes, unless one is compiling
with some bleeding edge gcc version that has a bug somewhere related
to that option. I simply tend to want the best back trace possible when
things do crash that have been compiled with a stable gcc version. If that
requires some sacrifice of best possible performance and code size, then
some deployments can afford it without noticeable loss of efficiency
(where the cpu is doing nothing most of the time anyway while the
user edits something, reads text, the system waits on i/o, and so on),
while for others eliminating the extra instructions in production code
is worth more than faster debugging.

I doubt it has anything to do with crossdev as I just rebuilt both of my systems from scratch, and I don't use crossdev, and I ran into issues with segfaults compiling glibc as well.

Here is my situation:
Boot from minimal USB
chroot into HDD
run bootstrap (no segfault on glibc build)
run emerge -e system (no segfault on glibc build)
compile kernel
reboot from HDD
restore world file from previous system
run emerge -e world (yes, I know it's overkill; glibc segfaults on the install step. Attempts to run emerge --resume cause glibc to segfault during the compile phase.)
reboot from HDD (emerge --resume again segfaults on install step of glibc)
reboot from minimal USB, chroot into HDD (emerge --resume compiles glibc successfully)
reboot from HDD and resume emerge after glibc without any other issues.

My other system on the other hand is netbooted from the first machine.
Because it would take too long to build the system from the netbooted machine, I chroot into the netboot folder and build on my server.
run bootstrap (no segfault on glibc build)
run emerge -e system (glibc segfaults on the install step. Attempts to run emerge --resume cause glibc to segfault during the compile phase.)
reboot from HDD (emerge --resume again segfaults on install step of glibc)
reboot from minimal USB, chroot into netboot folder (emerge --resume compiles glibc successfully)

For some reason glibc does not like being recompiled from my HDD, but it works fine when running from a minimal USB/DVD, and possibly a liveUSB/DVD.

This seems to imply that the kernel on the hard drive is somehow not good for your machine. What happens if you just copy/use the livecd/liveusb kernel as your HDD's kernel (remember to copy any initrds, /lib/modules/, etc.), or use the .config of the livecd/liveusb kernel to build your HDD kernel?

I suppose it's ok to muck with the hdd drivers, etc. to make sure it will boot your particular machine without using initrd if you don't want one, but don't touch the CPU config (including memory options even if it will not let you detect all of your memory -- but this shouldn't be an issue for x86_64) or any of the optimization options.

As for the original poster, he should try this too, just to see if there are any similarities (try to build your glibc with a chroot from the livecd/liveusb).

The livecd/liveusb installer kernel tends to be built very conservatively, and allows for any hardware to run properly...