...
./configure --prefix=/usr --build=x86_64-pc-linux-gnu --host=i686-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --enable-default-binary --with-x --disable-silent-rules --with-aa --with-alsa --disable-altivec --with-bzip2 --with-libcurl --with-dbus --without-gvfs --with-webkit --with-libjpeg --without-libjasper --with-libexif --with-lcms --without-gs --enable-mmx --with-libmng --with-poppler --with-libpng --enable-python --enable-mp --enable-sse --with-librsvg --with-libtiff --with-gudev --without-wmf --with-xmc --with-libxpm --without-xvfb-run --enable-gtk-doc --disable-maintainer-mode
configure: loading site script /usr/share/config.site
configure: loading site script /usr/share/crossdev/include/site/linux
configure: loading site script /usr/share/crossdev/include/site/linux-gnu
configure: loading site script /usr/share/crossdev/include/site/i686-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for i686-pc-linux-gnu-strip... i686-pc-linux-gnu-strip
...
checking if Pango is version 1.32.0 or newer... no
checking if Pango is built with a recent fontconfig... no
configure: error:
*** You have a fontconfig >= 2.2.0 installed on your system, but your
*** Pango library is using an older version. This old version is probably in
*** /usr/X11R6. Look at the above output, and note that the result for
*** FONTCONFIG_CFLAGS is not in the result for PANGOCAIRO_CFLAGS, and that
*** there is likely an extra -I line, other than the ones for GLIB,
*** Freetype, and Pango itself. That's where your old fontconfig files are.
*** Rebuild pango, and make sure that it uses the newer fontconfig. The
*** easiest way be sure of this is to simply get rid of the old fontconfig.
*** When you rebuild pango, make sure the result for FONTCONFIG_CFLAGS is
*** the same as the result here.

Disclaimer: Although I am currently messing around with cross compilers, I haven't yet set one up to cross-compile from portage (only my own code for a BeagleBone, AVR or Arduino), so I can't give you much more advice! I do have an eeePC 701 which I want to put Gentoo on (just for the challenge), so I will be going down your path sometime in the future.

Distcc is installed, but unfortunately it takes much more time. I want to avoid distcc for bigger projects. The reason why I need to set up the cross-compiling environment is described here. The host machine (Core i7) doesn't understand the movbe instruction of the target CPU (Atom).

...
checking sys/shm.h presence... yes
checking for sys/shm.h... yes
checking whether shmctl IPC_RMID allowes subsequent attaches... assuming no
checking for shared memory transport type... sysv
checking whether symbols are prefixed... no
checking fd_set and sys/select... yes
checking for XmuClientWindow in -lXmu... no
checking for XShapeGetRectangles in -lXext... yes
checking for X11/extensions/shape.h... yes
checking for XFIXES... yes
checking for gzsetparams in -lz... no
checking for BZ2_bzCompress in -lbz2... no
configure: error:
*** Checks for bzip2 library failed. You can build without it by passing
*** --without-bzip2 to configure but you won't be able to use compressed files then.

I use a cross toolchain on an AMD64 to build for an ARM target (a Raspberry Pi) with mixed success.
A few things won't build because they attempt to run cross-compiled code on the host as part of the build process.
Other things fail because of .la files: the linker gets pointed at the files on the host instead of those in the target environment.

The easiest and most reliable way to build is to use distcc on the target in pump mode with helpers running a cross compiler.
It mostly just works. There are a few things that fail and get built locally.
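For reference, the pump-mode setup on the target amounts to a couple of configuration entries. This is a sketch from the distcc-pump era of the Gentoo documentation; the hostname is a placeholder:

```shell
# /etc/portage/make.conf on the target (sketch):
#   FEATURES="distcc distcc-pump"
#   MAKEOPTS="-j8"          # roughly 2x the cores available on the helpers
#
# /etc/distcc/hosts on the target:
#   buildhost,cpp,lzo       # ",cpp,lzo" enables pump mode
#                           # (remote preprocessing + compressed transfers)
```

The helper listed in /etc/distcc/hosts runs distccd with the cross compiler first in its PATH, so the jobs it receives are compiled for the target triple.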

If cross building was 100% successful, you could install your cross toolchain and do for example

Code:

armv6j-hardfloat-linux-gnueabi-emerge @system

into an empty cross environment and it would just work.
It doesn't, as perl and python won't build that way.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

NeddySeagoon wrote:

I use a cross toolchain on an AMD64 to build for an ARM target (a Raspberry Pi) with mixed success.
A few things won't build because they attempt to run cross-compiled code on the host as part of the build process.
Other things fail because of .la files: the linker gets pointed at the files on the host instead of those in the target environment.

That's what I'm afraid of.

I ran it before in a chroot. But after the change of the host CPU, some packages fail to build due to the movbe instruction of the target CPU (Atom), which the host CPU (Xeon) doesn't understand.

I tried another, quite crazy approach:
The packages installed on host and target are almost the same versions, which means the two systems are nearly identical as far as installed packages are concerned.

Use the include, lib and pkgconfig files from the target, so there are no path problems.

Use /bin and /usr/bin from the host, so everything should be executable with the host CPU's instructions.

Unfortunately this approach fails right away. Even /bin/bash can't be executed.

NeddySeagoon wrote:

The easiest and most reliable way to build is to use distcc on the target in pump mode with helpers running a cross compiler.
It mostly just works. There are a few things that fail and get built locally.

But what do you do if your resources (memory, disk) are not sufficient on the target machine? Or, even worse, with packages like webkit-gtk, which have to be compiled with MAKEOPTS="-j1"? They will take a few hours on the target machine.
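As an aside, portage can pin MAKEOPTS per package, so only the heavy package is throttled while everything else keeps full parallelism. A sketch using the standard package.env mechanism:

```shell
# /etc/portage/env/singlejob.conf:
#   MAKEOPTS="-j1"
#
# /etc/portage/package.env:
#   net-libs/webkit-gtk singlejob.conf
```

With this in place, a plain `emerge webkit-gtk` picks up the -j1 override automatically instead of requiring it on the command line.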

Until now I've used distcc only in non-pump mode. When I tried pump mode some months ago, I didn't get it running. But anyway, distcc isn't comparable to building completely on the host machine, e.g. in a chroot environment.

NeddySeagoon wrote:

If cross building was 100% successful, you could install your cross toolchain and do for example

Code:

armv6j-hardfloat-linux-gnueabi-emerge @system

into an empty cross environment and it would just work.
It doesn't, as perl and python won't build that way.

The cross environment works (gcc/g++, binutils). I was able to build simple packages. The problem is the big packages, which are worth building on a bigger machine. For the small stuff I don't need the cross environment.

Thanks so far for your information. I guess I'll have to search more to get these problems solved.

For lack of RAM, there is very little you can do. My Pi has a whole 256MB. I've not tried to build things like firefox or libre office yet :)
I use either a USB HDD for build space, or nfs. Either way it's external. This ea

You could write an exception handler for the kernel to deal with the movbe instruction.
Every time the host CPU hits it, it raises an illegal instruction exception, which you can catch; run some software to emulate the movbe instruction, then pass control back.
The program executing the movbe will not be able to tell the difference. The drawback is the time taken to do two context switches to execute what should be a single instruction.

Your "quite crazy approach" fails because inside the chroot, the kernel appears to be a 32-bit kernel, so it refuses to run 64-bit code.
Look at uname -a both inside and outside the chroot. That's what the linux32 prefix does in the linux32 chroot .... command.

My Atom based Acer One 110 builds everything for itself, but both firefox and libre office want a day each, so I'm looking at a cross environment for it on my AMD64.
_________________
Regards,

NeddySeagoon


After recompiling these packages the errors disappeared, and gimp compiled without problems. The KDE stuff also has some tools which use movbe in the install process, but I will figure that out during the next big update.