This means that clang svn will be able to build a working kernel some time in the next week or so.
_________________
Freedom is the right of the individual to choose the software he installs, not the right of GNU to force you into GPL.

Is zlib known not to build with clang? It has once again broken my system :/

I've now found that a large number of things build if you replace CPP=clang++ with CXXCPP=clang++.
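For reference, one plausible way to set this up (a sketch only; which variables actually get honored depends on each package's configure script):

```shell
# Hypothetical make.conf / environment fragment.  CPP is the *C*
# preprocessor variable, so pointing it at clang++ confuses C configure
# checks; CXXCPP is the variable autoconf uses for the C++ preprocessor.
CC="clang"
CXX="clang++"
CPP="clang -E"
CXXCPP="clang++ -E"
```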

I would personally like it (especially if it were possible now; perhaps it is and I just don't know) if you could compile some packages with LLVM/clang but most of them with GCC. For XUL-based stuff, clang seems to improve things, but for everything else it probably doesn't make a significant enough difference, so I'd still compile the rest of my system with GCC.

For those who are interested, I have set up a VM with a basic Gentoo stage3 install, and each time a clang version is released I try to recompile the system with it. My intent is to eventually support a full @system compile (yes, glibc and gcc may still not work), and after that to slowly test all other packages.
My main showstopper has always been zlib. I'll keep you posted on whether that works now =)
BTW, clang-3.0 has a problem on AMD K10 processors; to make it work there you need a custom ebuild with this patch, or emerge clang-9999. I'm testing this now.
_________________
Neo2
Unofficial minimal liveCD for x86/amd64 w/reiser4+truecrypt

System compilation goes much more smoothly than with the 2.9 release. I have problems with only 6 or 7 packages out of 209 or so (tied to some GNU extensions, I guess).
Now I'm interested in some benchmarking. I'll do a clean VM setup, then snapshot and recompile, trying to extract the differences in compile time and output file size. I'm hoping that some bash magic will do the trick =) I'll keep you posted, and as soon as the first round finishes I'll post the package failures.

Honestly, I would have expected GCC to miscompile as well.
Most of the failures are missing symbols in the linking phase, but two of them, IIRC, are bugs in the code for which clang bails out with an error rather than silently compiling anyway.
Yes, latencytop is not part of the system profile, I used it to see what the VM was doing when it seemed stuck. Other additions include utility packages such as genlop, mlocate, hdparm, p7zip and so on. I will post a detailed list if anyone is interested. USE flags are also stripped to the bare minimum.
I don't have time right now to fully parse emerge.log and compare compile times; du -sh on the binpkgs reports:
196M binpkgs_clang/
197M binpkgs_gcc/
but of course that's a minimal variation, given that everything is bzip2'ed.
I used -j20 to compile the packages. That was done only to avoid the scheduling overhead and other similar issues from interfering with the comparison.
There is some gain in compile time, judging from the results of the bigger packages:

"%clang vs gcc" indicates the amount of time spent compiling by clang against gcc, in percent, eg:
GCC =2m, clang =1m -> %clang vs gcc = 50%
GCC =2m, clang =3m -> %clang vs gcc = 150%
The fourth number is simply (100% - "%clang vs gcc"); it is there just to highlight the difference rather than the absolute value.
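The metric described above boils down to a one-line calculation; here it is as a tiny shell sketch (the timings are the invented examples from above, GCC = 2m and clang = 1m):

```shell
#!/bin/sh
# "%clang vs gcc": clang build time as a percentage of the gcc build
# time, plus the signed difference from 100%.  Example timings only.
gcc_secs=120     # GCC = 2m
clang_secs=60    # clang = 1m
pct=$(( clang_secs * 100 / gcc_secs ))
diff=$(( 100 - pct ))
echo "%clang vs gcc = ${pct}%, difference = ${diff}%"
```

With these inputs the script prints "%clang vs gcc = 50%, difference = 50%", matching the first worked example.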
Note that GCC recompiles itself three times, so the time reduction is achieved only in stage1 of the GCC build. I bet it would count for a lot more if GCC were compiled only once.
All the results were obtained inside a VM, where caching is of paramount relevance: file access times are very low compared to a "real" environment. The VM has 4 GB of dedicated, preallocated RAM (reserved when the VM starts instead of being allocated dynamically) and 4 Phenom cores, with a hand-compiled, stripped-down 3.1.5 kernel using the BFS scheduler.
It is important to note, however, that being in a VM means interposing another layer of indirection for access to memory, disk, network, task execution, etc. I guess that outside a VM timings would drop on both sides while keeping more or less the same relative difference between the two.
I will use a script and genlop to elaborate a more detailed comparison in the next few days.
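Pending that more detailed genlop-based comparison, here is one hedged sketch of how per-package durations could be pulled out of an emerge.log by hand. The ">>> emerge" / "::: completed emerge" pairing follows Portage's usual log format, but the sample log below is fabricated for illustration:

```shell
#!/bin/sh
# Sketch: compute per-package build durations from an emerge.log.
# Each merge appears as a ">>> emerge" start line and a matching
# "::: completed emerge" line, both prefixed with a Unix timestamp.
log=$(mktemp)
cat > "$log" <<'EOF'
1325000000:  >>> emerge (1 of 2) sys-libs/zlib-1.2.5 to /
1325000060:  ::: completed emerge (1 of 2) sys-libs/zlib-1.2.5 to /
1325000060:  >>> emerge (2 of 2) app-shells/bash-4.2 to /
1325000360:  ::: completed emerge (2 of 2) app-shells/bash-4.2 to /
EOF
out=$(awk '
/>>> emerge/           { split($0, a, ":"); start[$(NF-2)] = a[1] }
/::: completed emerge/ { split($0, a, ":"); pkg = $(NF-2)
                         printf "%s %ds\n", pkg, a[1] - start[pkg] }
' "$log")
echo "$out"
rm -f "$log"
```

Run against the gcc and clang copies of emerge.log in turn, the two outputs could then be joined per package to produce the "%clang vs gcc" column.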
I'm honestly eager to try compiling www-client/chromium now.
I'll leave that for some time ahead though; I'd rather fix @system for the time being.

Not that I'm aware of, I haven't looked yet though.
I'm sorry but I have little time during the week.
Your problem seems to be with libtool. I don't know much about the libtool/clang interaction; judging only from the error, it may be a libtool shortcoming. What are your CC, CPP, CXX, and CXXCPP variables? Which version of clang are you running?

For those who have an AMD K10 CPU, I filed a bug and attached a patch (taken from upstream) for that processor family. The patch lets clang recognize the "amdfam10" -march flag. The issue was fixed upstream shortly after the 3.0 release, which explains why clang-9999 works without patching.
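Once a patched clang accepts the flag, a K10 make.conf might look like this (a hedged sketch; the -O level and -pipe are placeholders, only the -march value comes from the discussion above):

```shell
# Hypothetical /etc/portage/make.conf fragment for an AMD K10 machine.
# An unpatched clang-3.0 rejects -march=amdfam10 outright.
CFLAGS="-O2 -march=amdfam10 -pipe"
CXXFLAGS="${CFLAGS}"
```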
I'll have little time to work on and test clang until the end of February (university exams are coming up).
After that I should be able to be much more active in terms of filing and fixing problems with clang.