A number of people have suggested http://www.scratchbox.org as a usable cross compiler framework for the plug computer. Unfortunately I'm unclear on which of the two dozen ARM toolkit tarballs one needs to install for the Plug Computer.

Thanks in advance for your responses.

bob

PS. Assuming I can get a GCC-based cross compiler working, does anyone know what compiler flags are needed for the Plug Computer?

There are no special parameters for compiling for the plug. One thing I've noticed when cross compiling, however, is that the kernel wants ARCH set to arm, while various software (depending on their build systems) wants armv5te/arm/other ARM-ish values, which caused some of my cross builds to fail.

A program as simple as hello world should just compile normally, but with the cross compiler as your compiler. I can't help you with how to set up your cross compiler; Fedora already had a package and such set up, so I only had to download, install, and set the path.

I'm making progress. It would appear that the illegal instruction error is a result of me trying to run this on an x86 compatible machine with a VIA Samuel 2 chipset. When I pull things over to my old Celeron the error disappears.

I then solved the cc1 error by adding

<mydir>/gcc/libexec/gcc/arm-none-linux-gnueabi/4.2.1

to my PATH.

Now I'm mired in stdio.h header hell. There seem to be innumerable copies of headers all over the place in this tarball. Does anyone have a clue as to the order in which they must be included via "-I" parameters on my gcc line? Any idea why there are so many copies of headers like stddef.h?

First: a gcc source tarball can be built as an ARM cross compiler running on an x86 Linux host, but that exercise requires knowledge, patience, and a calm mind... IMO, the actual cross compiler is a piece of cake to build from gcc source. The problem starts with the userland libraries, which any 'hello world' app would make use of. For C, one would minimally need glibc, plus the C++ runtime if you're using C++. My recommendation: don't go down this path (unless you are interested in the topic or equipped with too much free time...)

Instead, give a big thank you to the boys over at codesourcery and download their 'lite' version for free. Browse to http://www.codesourcery.com/sgpp/lite_edition.html, pick 'arm' and download the 'arm-2009q3-67-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar.bz2' tar ball.

By whatever means (maybe a network-exported drive, or ftp), copy the 'hello_world' executable over to your plug and run it.

Result: guaranteed success.

A couple of points:

- It looked like the codesourcery people used a kernel source tree version 2.6.16 for the compiler. Myself, I am running Debian squeeze, which is a 2.6.30 kernel. Does this matter? Not really, unless you want to make use of some very new and fancy kernel interface, i.e. including and using something under /usr/include/linux/*.h. Your kernel might support the interface, but the compiler doesn't have it in its include file hierarchy. Well, you can copy that one file over into your application and it will probably work anyway. But in general, the 2.6.16-to-2.6.30 difference doesn't matter for most apps.

- What about the glibc version? Same thing here: the codesourcery toolchain might be built with a slightly different version of glibc than the one actually installed on your plug. For most practical purposes this doesn't matter. The most important thing is that the plug finds one compatible glibc shared library. On your plug, you can check with

$ ldd hello_world

which will tell you whether it can load all dynamic dependencies.

- Someone else replied and talked about kernel dev. This doesn't really pertain to you, since you are talking about user apps. But of course you can use codesourcery for building kernels too; there is a wiki and other postings describing this. NOTE: the kernel doesn't depend on ANY shared library provided by the compiler, so it is really much simpler to build a kernel or kernel modules, since you never need to consider this...

Here is an update on my "excellent adventure". First of all thanks for the responses. I tried the codesourcery one on my VIA box. Same illegal instruction error. It would appear that my VIA processor is not i686 compliant.

Meanwhile, I asked a friend with a more up-to-date Linux version to try my original sequence with the Plug Computer cross-compile .zip file. He had no problems! The hello world executable worked. So my guess is that those tools only work on fairly recent Linux versions and processors.

I'm now in the process of getting my real application to cross compile. I'll keep you posted.

Of course VIA CPUs are x86 compliant. The VIA Nano is even an AMD64 implementation. I have run plenty of 32- and 64-bit Linux installations on the VIA Nano. The only problem you might face is with VGA graphics, since VIA is less than Linux-friendly.

CodeSourcery works; it is as simple as that. I tried it myself. Default code generation appears to be ARMv5 compatible, so there is no need for any architecture-specific flags during compile/link.

Here is the update on my "adventure". I was able to cross compile several libraries and executables without difficulty. However, one of my files includes <sys/wait.h>, and this appears to mess things up. What I don't understand is how overriding CC in a Makefile (to point it at the cross compiler) can simultaneously alter the include search path. The file sys/wait.h is on my system, but the cross compiler refuses to "find" it.

What I don't understand is how overriding CC in a Makefile (to point it at the cross compiler) can simultaneously alter the include search path?

Well, since it is a different compiler, it will have different builtin defaults for where it looks. Perhaps <sys/wait.h> was found in a default location for compiler1, and compiler2 doesn't include that location? The only way I know to get this list out of gcc is to write a minimal C program and run the compiler over it with -v, which prints the include search directories it actually uses.

However, the wait() family is the only way I know to eliminate zombie processes that result from a fork() parent/child architecture. The mystery to me is why the ARM architecture has anything to do with what to me should be firmly in the Linux OS layer. The fact is that the cross compiler environment supplied for the plug computer doesn't have the key header <sys/wait.h> which is required by all the functions in the wait() family. So how does one deal with zombie processes on the plug computer?

bob

PS. I'm in the process of trying to build my code without the kill-zombie functionality, to see what happens on the plug when a child process dies, i.e. does it leave behind a zombie process?

POSIX.1-2001 specifies that if the disposition of SIGCHLD is set to SIG_IGN or the SA_NOCLDWAIT flag is set for SIGCHLD (see sigaction(2)), then children that terminate do not become zombies and a call to wait() or waitpid() will block until all children have terminated, and then fail with errno set to ECHILD. (The original POSIX standard left the behaviour of setting SIGCHLD to SIG_IGN unspecified.) Linux 2.6 conforms to this specification. However, Linux 2.4 (and earlier) does not: if a wait() or waitpid() call is made while SIGCHLD is being ignored, the call behaves just as though SIGCHLD were not being ignored, that is, the call blocks until the next child terminates and then returns the process ID and status of that child.

This sounds like a way to keep children from becoming zombies. Or have I once again not comprehended things correctly?

This Gentoo bug report seems to allude to ARM Linux not supporting the wait() function family.

Seems unlikely to me... that report actually says that the waitpid() system call isn't defined and is implemented by a wrapper.

Quote

However, the wait() family is the only way I know to eliminate zombie processes that result from a fork() parent/child architecture.

They result from the parent's need to handle SIGCHLD. If you really are not interested in knowing when and why child processes die, then use sigaction() to set the SIGCHLD handler to SIG_IGN and also set SA_NOCLDWAIT in the flags (i.e. read "man sigaction").