
snikulin writes "My 6-year-old embedded software happily runs on kernel v2.4 on an XScale CPU. The software gets a bunch (tens of megabytes) of data from an FPGA over a PCI-X bus and pushes it out over GigE to data-processing equipment. The tool chain is based on the somewhat outdated gcc v2.95. Now, for certain technical reasons we want to jump from the ARM-based custom board to an Atom-based COM Express module. This implies that I'll need to re-create a Linux RAM disk from scratch along with the tool chain. The functionality of the software will be essentially the same. My question: is it worth it to jump to kernel 2.6, or better to stick with the old and proven 2.4? What will I gain and what will I lose if I stay at 2.4 (besides the modern gcc compiler and the other related dev tools)?"

Actually, why was this modded flamebait? Even though it doesn't give a direct answer to the question (99.9% of posts don't give any answer, direct or indirect, to the questions), the post makes sense and is relevant. A test plan makes it possible to find incompatibilities that don't show up at first sight and that may force the guy to stick with the older kernel, thus replacing the "is it worth it?" question with an "is it possible?" question.

Your solution requires the post submitter to do all of the work to create his solution for both kernels, and then compare them.

If someone asked whether to build a reasonably complex website in Python or PHP would you recommend that they build both and then performance test them? That's a lot of extra work.

In both the original post submitter's case and the hypothetical one I suggested, it would be much easier to gather as much information as you reasonably can about both solutions and then make an educated guess as to the best option. I'm not sure Slashdot is the best place for his information gathering, but I understand what he is doing.

2.4 is horrible to work with. It's missing so many features you expect from a POSIXy system that you constantly have to find work-arounds. Having a 2.4 kernel on the cluster during my PhD was enormous pain - I'd write code on FreeBSD, copy it to the cluster, and find half the features were missing. 2.6 is a lot better from a feature-standpoint, but is much heavier and isn't really suited to embedded systems anymore. If you're building the image yourself, why not go with FreeBSD or OpenBSD and get the best of both worlds - FreeBSD if you lean more towards features, OpenBSD if you want a smaller footprint?

As a member of the Gentoo embedded team, I would recommend using crossdev to generate the toolchain. By emerging crossdev-wrappers and setting up a Gentoo-ish cross-compiler environment, it is possible to cross-compile a lot of the packages in portage simply by emerging them. Emerge will take care of most things, leaving only the ugliest cross-compile errors for you.
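For anyone unfamiliar with crossdev, the flow looks roughly like this (a sketch, assuming a Gentoo host; the target triplet below is illustrative, so pick one matching your CPU):

```shell
# On a Gentoo host: build a cross toolchain and cross-emerge packages with it.
emerge sys-devel/crossdev crossdev-wrappers

# Build binutils, gcc, kernel headers and glibc for the target triplet:
crossdev --target armv5tel-softfloat-linux-gnueabi

# Cross-compile a package from portage using the generated emerge wrapper:
armv5tel-softfloat-linux-gnueabi-emerge sys-apps/busybox
```

The wrapper sets up CHOST, CFLAGS and the sysroot for you, so most well-behaved packages build unmodified.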

Point being? The OP isn't developing code 5 years ago. I'm not really sure why you weren't modded flamebait. Doubtless they were busy mismodding things with which they disagree.

But, seriously, that doesn't matter even the slightest bit, the question is whether or not to use a more up to date kernel for development. You also didn't bother to mention whether or not FBSD 5.2 was also missing those features making for a completely irrelevant and useless post.

I would say this is more relevant to desktops and servers than to embedded systems (of course, it also depends on the embedded system). Embedded systems typically perform a much more limited range of operations than a desktop or server does.

Again, it depends on the specifics of the hardware and the application running on top of the kernel to decide which kernel is more appropriate to use.

Finally! I think it's amazing that we're discussing embedded systems, Linux, BSD... yet ignoring NetBSD, which is the flavor that most caters to embedded systems!

build.sh [netbsd.org] is a great example of one of those unheralded "little things". If I'm on my Mac OS X laptop and want to build a NetBSD ARM kernel or distribution for my embedded single board computer, I don't have to go fussing around with finding and downloading a cross-compiling toolchain.
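For reference, the build.sh flow is roughly this (a sketch; the machine type and object directory are illustrative):

```shell
# From a checked-out NetBSD source tree, cross-build for an ARM eval board.
# -U = unprivileged build, -m = target machine, -O = separate object dir.
cd /usr/src
./build.sh -U -m evbarm -O ../obj.evbarm tools          # build the cross toolchain
./build.sh -U -m evbarm -O ../obj.evbarm kernel=GENERIC # build an ARM kernel
./build.sh -U -m evbarm -O ../obj.evbarm release        # full distribution sets
```

The same invocation works from Mac OS X, Linux, or another BSD, since build.sh bootstraps its own host tools first.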

OK, found out that the tool is called crunchgen [gw.com], and that its main purpose is efficiently combining multiple binaries into one (although I've read it could be used for building efficient libs, too), so it is more like an alternative to BusyBox multi-app binaries.
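A crunchgen configuration is just a small description file; a hypothetical one combining a few tools into a single binary might look like:

```
# Hypothetical crunchgen(1) config: link several programs into one
# "crunched" binary that dispatches on argv[0], BusyBox-style.
srcdirs /usr/src/bin /usr/src/sbin
progs   init sh ls mount
ln      sh -sh          # extra hard-link name for the shell
```

Each program is then reached via a hard link named after it, which is exactly the BusyBox trick.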

I was thinking the same thing. Yes, 2.6 has a bigger codebase, but if you compile only the modules you need, instead of everything plus the kitchen sink, it's really no bigger in binary form (maybe +5%). In return, I find it to be noticeably more responsive given the same hardware.

I've trimmed down Linux systems before, and it was more work than my current method: start with an OpenBSD base, which is usually small enough for most purposes. Installing only the 'base' and 'etc' components results in a pretty damn tiny footprint and yet a full-featured Unix OS (and a quite stable and secure one at that).

Also, you end up with an actually supported OS that you can update every 6 months without a lot of patching and hacking around a custom forked/branched tree. I don't have time for that.

OpenWRT dev trunk is rather stable and now supports 2.6.27 and 2.6.28 (change a few vars in the makefiles). The toolchain setup is automated and works well. I had no trouble setting it up on a modern Ubuntu 8.10 x64 host. A lot of embedded dev tooling seems to be inflexible about the hosting platform - the makefiles of OpenWRT work well.

I was wondering who was going to point out that Atom really isn't a very good solution for projects that aren't stuck with ia32 as their development architecture. Arm of some sort is far better for these types of applications. Better power usage, and better optimizations for things that actually matter for mobile or otherwise embedded systems.

As cool as it is, there isn't really a good reason for most people to install a full Linux system on an embedded device. Most of the time they just need a few relevant tools.

First, the poster's device environment sounds like it has a fixed power source, given that it mediates data between an FPGA and data-processing equipment. With that said, a few hundred milliamps of current isn't going to matter to anyone.

Also, I don't think anyone mentioned anything about putting a full Linux system on an embedded system. It's likely nothing more than what is minimally required, i.e. kernel, ramdisk with some basic utilities and libraries (TFTP, dhcpcd, glibc), and the client apps.

Compared to some of the more powerful ARMs (e.g. OMAP3) the Atom is still a lot more powerful. For stuff I was looking at (mostly integer) the Atom was almost 10 times faster. You get two execution threads with hyperthreading which also helps a lot.

I know you're trolling and probably high as a kite being flown by another kite, but just in case anyone takes you seriously, let it be known that file shredding is not a kernel's job, and using a non-journalled filesystem (or a journalled filesystem set to not journal data blocks) will allow file shredders to work just fine.

However, a pass of zeroes is not enough as some magnetic domains will remain in their old state and allow partial data recovery.

...then go with the newer kernel. 2.6 has _lots_ of improvements above 2.4. The security aspects may be of less interest in your application, but the performance probably won't be. I've always believed that it is better to regret having done something than to regret having not done it.

From all I heard (I was in the embedded business only in 2.2/2.4 times), 2.6 integrated a number of patches from embedded folks and generally can be customized to run with fewer resources. Also, the improved I/O (much lower latencies) and scheduler (interactivity; soft real-time) would benefit embedded too. 2.4 has a number of problems related to memory management, where the virtual memory subsystem can easily grab half of the available RAM just to support virtual memory. 2.6 solved that problem for most architectures.

Generally, many embedded folks have moved to 2.6 already - mainly due to support for more new off-the-shelf hardware. 2.4 has this support only through vendor patches (e.g. in the past I used BlueCat and MontaVista patches).

In my experience, changing the kernel on an embedded system is quite an easy task. Using a development system, you can come up with a suitable minimal .config within a couple of days (one needs a development system since the target embedded system might not have sufficient resources to run a vanilla kernel). Generally it would either work or not. Normally it works.

Also note that H/W vendors started being more active in 2.6 times. In 2.4 times, your best shot at a Linux driver was some crude port from e.g. LynxOS or VxWorks. From all I know, 2.6 now supports more PowerPC systems than the MontaVista patch for 2.4 that I used three years ago did.

Last, but not least, if you are looking at new modules, many hardware vendors supply Linux compatibility information. Two years ago, finding a module with a "Linux compatibility" chapter in its documentation wasn't a problem at all.

Much better power management, in many different aspects, which can be important if this particular embedded platform is meant to be battery-powered or used in an unfriendly thermal environment (yes, power efficiency is also a kind of performance metric, just per watt of consumed and emitted power, not per unit of time).

There's more power management support in the drivers, lots of ACPI fixes and improvements, and, most importantly for a platform like Atom (or any x86-based platform in general, when heat and power are a problem), the tickless idle mode, which enables very real and measurable power saving and reduction of generated heat by letting the processor actually do nothing (technically, drop to C3 and further power states) when, well, doing nothing, instead of processing useless interrupts and idling at the normal working power level.

I've always believed that it is better to regret having done something than to regret having not done it.

You are not quoting the Butthole Surfers, are you? The intro to "Sweat Loaf", from Locust Abortion Technician? Unfortunately, the 30-second iTunes intro preview is not enough for this song; you will have to sample it elsewhere.

I'd move on. Not for any particular feature, but to stay closer to the mainstream for the next years. The 2.4 kernel, not for any technical reason, becomes increasingly exotic as people move on to 2.6.

You'll have to maintain your existing 2.4 skills for another decade when all others have moved.

On the one hand, the code is 6 years old and, from what I gather reading the post, it's stable and mature. On the other hand, my guess is that if the article poster has written his code in a fairly portable way, it will compile without too much modification on GCC 3.x or 4.x and will run under the newer versions of glibc on a 2.6 kernel.

On the gripping hand, keep in mind that for embedded applications memory is usually at a premium, and the memory footprint of 2.4 is significantly smaller than that of the 2.6 kernel. Keep in mind that lots of embedded applications are still using a 2.4 kernel, and some even continue to use MS-DOS or FreeDOS.

I guess if I were making this decision, I'd try to compile and run my code on newer Linux distro in a sandbox to see how much work it would take to make it compile and run in the new environment. Then I'd see how much bigger a custom-built 2.6 kernel is than the existing 2.4 kernel, optimizing the kernel configuration for size and memory consumption, of course.
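Optimizing for size mostly means switching whole subsystems off; as a rough illustration (option names from memory, so double-check them against your kernel version), excerpts from a size-focused 2.6 .config might look like:

```
# Illustrative excerpts from a size-optimized 2.6 .config
CONFIG_EMBEDDED=y                  # expose options normally hidden
CONFIG_CC_OPTIMIZE_FOR_SIZE=y      # gcc -Os instead of -O2
# CONFIG_MODULES is not set        # monolithic: no module overhead
# CONFIG_USB is not set            # drop subsystems you don't use
# CONFIG_SOUND is not set
# CONFIG_VIDEO_DEV is not set
```

With everything irrelevant configured out, the resulting bzImage lands in the low single-digit megabytes.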

That work should take no longer than a couple of days.

If it doesn't work out, you can go back to your existing 2.4 configuration. *shrug*

keep in mind that for embedded applications memory is usually at a premium and the memory footprint of 2.4 is significantly smaller than that of the 2.6 kernel

True, but since the poster mentions he wants to move from ARM to Atom, that also implies he is moving to a more modern SBC. Therefore I would guess that his amount of available memory is also at least quadrupling (for the same or lower cost).

My question: is it worth it to jump to kernel 2.6, or better to stick with the old and proven 2.4?

Old and proven on different hardware. Chances are your new hardware will have some issues (if only caused by you misunderstanding something), and then it would help to have the latest kernel that more people are using.

Also, Atom is a newer processor, perhaps with PCI Express in the chipset - does 2.4 support that?

Really a non-issue when bigger is cheaper and smaller RAM/ROM chips are being phased out. Just as an example, we developed a product using 32MB RAM, but that was phased out (really, you couldn't buy the chips anymore) in favour of 64MB RAM.

How much is a 1GB usb drive today again??

I guess you need something like 2MB flash for a 2.6 system (if you really squeeze things). With 16MB/32MB you can do pretty much anything.

Oh yeah. The article mentions "Just Say No: No Keyboard, No Monitor, No Wires." It was really bothersome in 2.4 times that the kernel couldn't easily be used without video and keyboard.

Framebuffer in 2.6 is really cool, compared to the old 2.4 times when it was doing some weird things with no possibility to change the hardcoded behavior. We had fun with 2.4 when, due to driver problems, an embedded system was mixing up LCD screens: the touch screen was actually showing the Linux console.

Well, I don't have that much experience with 2.4, and how much is 'backported' from 2.6, but IIRC you can use better IP filtering tools in 2.6. And are all drivers for various hardware written to work with 2.4 as well?

It doesn't sound like you use Linux for much of anything other than the drivers for the NIC, so if your system works now, then there's probably no explicit reason to change. What I would worry about, though, are your future needs. Even if you don't need to upgrade now, it might just be the perfect time to do it.

I suggest both a GCC 4 compiler (probably gcc-4.2 or 4.3) and a Linux 2.6 kernel (perhaps at least 2.6.25) with a fairly recent (i.e. 2.6 or 2.7) GNU libc.
Indeed, all this perhaps uses a bit more RAM, but you'll have more RAM than before, and it brings a lot of important functionality & improvements (including bug fixes).
If you need a specialized HTTP server, consider GNU libmicrohttpd.
Regards & Happy New Year 2009

Since first 2.6 release, most of the developer force is gone from 2.4. Although officially they "support" 2.4, expect the support to be practically nonexistent when you bump into problems. No one should have even considered using 2.4 for the last couple years now. It is simply too risky.

Oh, that's understandable. But see, I'm willing to test various kernels and revert/apply patches... if only somebody had told me that could help any. I tried to find the exact kernel version that broke cciss support, but, as you can see, I only managed to nail it down to several versions.

Of course, to make things harder, there were quite some updates to the cciss code during that time :)

If you foresee needing to periodically update the firmware along with a library or app, then I would say a definitive YES - use the 2.6 kernel (assuming your device is supported).

It might also be the case that the board you would like to use is not supported in the 2.4 kernel if it's new enough - kernel developers usually don't want to waste time backporting their code if they can avoid it.

Which introduces the most important issue - backporting is a PITA!! To make a long story short, if you need to track a library or app, such as an embedded JRE, or a hardware interface that requires a kernel module inserted, playing catchup and needing to backport at the same time is an awful game of one-step-forward two-steps-back. Avoid it at all costs. Backporting is not always guaranteed to work!

The 2.4 kernel has a slightly faster boot time, while the 2.6 kernel has so many improvements that it's hard to shy away from. Do yourself a favour and go with a stable 2.6 kernel.

If you're doing embedded systems in mass market volume, it's a matter of hardware requirements and cost per unit, and then staying with the 2.4 kernel may potentially be a good choice. If what you're making is a small-volume custom setup, I'd go with whatever is getting the most use and the most testing now, which is definitely the 2.6 kernel.

2.6 supports more hardware than 2.4 - especially embedded hardware. Several architectures in 2.6 (ARM, PPC) went through restructuring to make it easy to add support for another board/module.

And more importantly - for the "mass market" - with 2.6 you also get much, much better support from hardware vendors. In 2.4 times the market was only heating up. Now, in 2.6 times, the embedded Linux market is in full swing. You would be hard pressed to find a H/W vendor who doesn't support Linux now.

OMG, this was like an eternity ago, so I have forgotten. And the special layer to allow file systems to span multiple flash chips (forgot the name).

Though the ggp was trying to make a different point, and I forgot to spell out my counter-point clearly: with 2.6, where you have more drivers and support for more boards/modules, you are not dependent on a single hardware vendor. In mass markets this is actually a huge problem, because over time the price for components (CPUs, SoCs, etc.) actually goes up.

The largest benny for an embedded system with 2.6 is timing, really. The kernel is now, for the most part, 'almost' totally preemptible, bringing soft real-time to the kernel. Additionally, using RTAI Fusion, you can get hard real time. RTAI extensions have phased out support for the 2.4 kernel.

But one would have to ask, I suppose: if you're just replacing a legacy component with the same thing, why change the code?

Linux kernel preemption is about as far from real-time as you can get. It's not even in the same ballpark.

RTAI extensions do it right; it's real real-time, although still basically only in the parking lot outside the same ballpark. Which is as close as you need to be to HEAR the game anyway.

I don't think the guy is particularly looking for real-time support here. Pulling data over PCI-X then pushing it over a Gigabit LAN doesn't seem like it needs more than driver support. The Atom will no doubt be fast enough.

I'm thinking you're right; he may not need it. That's why I was saying he may not need to change much at all.

Personally, I'd use whatever kernel the board support kit suggests. If there is no BSP, then whatever version the patches for the given ARM core support. That is, I wouldn't personally use anything not in the mainline kernel. However, this may not be possible, depending on the processor. Not all ARM processors are created equal, as each vendor can add many things to the processor.

If he's moving to Intel Atom the best place to be will be the kernel that had the support for the chip when it came out.

Running 2.4.x kernels on Intel Atom - before decent ACPI support, before the power management support for these chips, before a hell of a lot of modern chipset support especially for Intel 945 and PCI Express hit some level of maturity - is bound to be an absolute nightmare.

From the original post, he has an XScale board running 2.4 and wants to drop the "ARM-based custom board" (which is what an XScale would be) to move to Atom, and thinks he will need to generate a new toolchain and ramdisk etc. to do it.

If he was running ARM again why would he need to update anything? The toolchain and userland would be identical unless he's moving to a wildly different ARM core, and if that was the case, why is he worrying about Atom in the first place?

Lots of the improvements to 2.6 have probably been added to 2.4, but many come "native" to 2.6 so no outside patches are required. For example, kernel pre-emption, better scheduler, etc.. There are other intangibles too such as development time, testing, new toolchain, etc.., but you're already moving to a new processor and you'd have to do that anyway.

Sometime last year I was rebuilding some antique MIPS-based Linux from 2.4 to 2.6. Almost everything in userspace was effortless (though much of it was based on BusyBox); the main issue was some inline assembler that took a while to figure out. Once I worked out what it was doing, I googled it and realized someone else had already solved it a year or so earlier.

Others in this thread will adequately cover the feature differences between 2.4 and 2.6, though it sounds like 2.4 already covers your needs when it comes to functionality. This makes your question more of a management one than an engineering one.

With these types of decisions you need to look at what your constraints and requirements are, whether they be time, developer resources, product lifetime, estimated lifetime of leveraged technology (kernel 2.4 in this case), cash, etc. It sounds like you'll be doing the development yourself, but otherwise I can't tell what the rest of the cycle looks like, so you need to clarify these things before making a decision.

Those are major considerations, but it gets more subtle when you consider things like how much time you'll save with future updates due to better development tools and support with a new kernel, etc., so you need to estimate also whether the time you spend up front will be saved down the line.

So far all the positively moderated posts have advocated 2.6. Here's a slightly contrary view. You know 2.4. It seems to satisfy your needs. Why exactly are you considering change? Is there something 2.4 doesn't do that you want? I realise you might be asking in case there is some improvement that 2.6 may possibly provide that you've missed, but if the current setup does what you need then why would you even consider a change? My advice: stick with 2.4 unless 2.6 provides something additional that you definitely need.

There are some pretty compelling reasons to migrate, but looking at your specific application, most of my favorite reasons don't apply. Since you're going to be changing your toolchain somewhat, the 2.6 migration isn't going to be that much more invasive. My reasons for wanting to change mainly have to do with filesystem improvements and USB improvements, which don't seem to have much traction for you. I'm assuming that you did your own hardware drivers for the PCI-X data collection, so that shouldn't be a big issue.

I constantly port the kernel and distributions for custom embedded designs for new and old hardware. It is really painful.

To go to 2.6 you have to rewrite ALL your custom drivers and the board configuration files. Yes, it is nicer in 2.6 - but it still has to be done. Just like we did for 2.2 and 2.4, and will probably do again for >= 2.8.

It is a real pain that the kernel and apps change so rapidly. You regularly run into compatibility problems.

We moved our project from 2.4 to 2.6 during development because the maximum interrupt latency of 2.6 is so much better. We needed to handle UDP packets within 20 ms max, and occasionally on 2.4 we would see 60 ms or more. Going to 2.6 solved our problems immediately, even with early versions.

Linux 2.4 might be "proven" on your old Xscale system, but I doubt anyone else has even _tried_ to use Linux 2.4 on something as new as Atom. Linux 2.4 will also lack support for any of the peripherals of your Atom com module.

Atom is just plain x86 with HT. Nothing fancy, nothing new. Drivers for rare hardware might cause problems though, but that's true with any kernel. I would have worried if the guy wanted to switch to an exotic architecture but that's definitely not the case here.

I recently upgraded a piece of equipment running a Freescale 8270 that does something similar from 2.4 to 2.6.17, and the networking performance improvement was very significant. If you need better performance and lower latency, 2.6 is definitely the way to go.
The old 2.4 kernel had been significantly hacked up as it was to improve performance, which resulted in higher latency. Unmodified, the networking interrupts killed the performance.
We now plan to upgrade two other CPUs from 2.4 to 2.6 after seeing these results.

I'd swap over to 2.6 if you're swapping to a COM Express module - I'd worry about support for PCIe and devices based on it in 2.4, and the whole point of COM Express over other board designs is to get PCIe and other differential signaling. Also, 2.6 runs snappily on 64MB RAM and a 300MHz PII - I don't remember seeing any COM Express modules with worse specs than that, and /certainly/ not an Atom-based one. Unless you're doing something very peculiar, the ~3MB of a 2.6 kernel on disk shouldn't hurt either.

As someone facing a similar situation at his day job, 2.4 is painful in some regards. In my case, 2.6 allows me to do a non-standard initramfs (the stock is minimal and then you can load other initrd or initramfs images from the kernel options...) so that I can tapdance around the three differing hackish bootloaders they did in 2.4. This allows me to do major cleanups in what they did for doing NFS rootfs on the IXP2800 blades and on the X86 ones with minimal pain.

Most of the people commenting on 2.6 being too big are thinking of the whole size with everything loaded up. With a minimal kernel, with just your drivers loaded and only your drivers in the module build, you end up with only about a 5-10% increase in footprint in memory and storage space, plus the ability to support modern devices. In your case, you're moving to an Atom-based board. Given that you're moving to a modern board, the odds of things being "nicely" supported are lower with the 2.4 kernel.

Since you're manipulating large volumes of data over GigE, you're going to want to switch, probably even with the old ARM stuff if you can manage it. 2.6 provides much more responsive networking performance (so long as you do your network code right and don't dink with the scheduler (heh...let's just say I corrected a not so good idea there recently...)).

You may have to port a few custom drivers over to 2.6, but in the end, it'll work better since the driver architecture is better in 2.6.

Well, it's not wise to change both the hardware and the software at the same time. You think it will reduce your time to market but it might increase it instead due to the numerous changes that will have to happen in your toolchain before getting anything barely working again.

From what I understand, you have a lot of experience with 2.4 and XScale. 2.4 also works on x86, so you won't have to re-learn everything from scratch just by changing the architecture. All your toolchains, boot scripts, packaging scripts, etc. will still work as they did before. Then, only once you get familiar with your new architecture and the minor changes you might observe in the boot sequence, build process, etc., will it be the right time to evaluate a migration to 2.6. Once you put your finger in there, you'll quickly discover that you need to upgrade your gcc and glibc, replace modutils with module-init-tools, experiment with hotplug and sysfs, maybe switch to udev, etc. Step by step you'll notice a big number of changes, but you will be able to proceed one at a time, which is not possible if you change the software at the same time as the hardware.

Also, there are other aspects to consider. 2.4 has been maintained for a very long time, and you're probably used to backporting some mainline fixes to your own kernel from time to time. 2.6 releases are not maintained that long (avg 6 months) and change so fast that you will not be able to backport fixes for many years. I'd strongly recommend starting with 2.6.27, because Adrian Bunk will maintain it for a long time, as he did with 2.6.16. Once 2.6.27 is not maintained anymore (in about 2 years), you'll have to decide whether to stick with 2.6.27 and try to backport fixes yourself, or switch to 2.6.36 (just a guess).

Also, 2.4 accepts almost no new hardware nowadays. If your new platform works well, that's fine, but how can you be sure that next year your GigE NIC will not change to something not supported anymore?

I would say that the only case where 2.4 would make sense for a long term starting from now is if you don't have the time to revalidate 2.6 or to wait for 2.6.27 to stabilize, and need to quickly release something which will sit at your customer's in a place where it cannot be upgraded. Something like "install and forget". But I don't feel like it's what you're looking for.

So, to summarize :
1) switch your architecture
2) switch your kernel

Whether an official release of your product exists between 1 and 2 is just a matter of your time constraints and customer demand.

Last, to show you you're not alone: I too am considering switching our products to 2.6, but the next release will still be 2.4. Too many changes for a short-term release, and 2.6.27 is not yet ready to reach years of uptime (though it's getting better). 2.6.25 was particularly good but is not maintained anymore.

As in, I'm in the middle of a similar project right now... If you have drivers, go for 2.6 and make it a recent 2.6, like 2.6.26. I was very afraid of kernel bloat when we were considering a move up from 2.4.18, but the hit wasn't as bad as I feared. As others have said, maybe 5 or 10%. That can be a lot depending on your runtime system. How do you save non-volatile variables? Flash? PXE boot? Custom bootloader that boots up with an NFS-mounted root partition? We have found the initramfs boot environment works well for us.

I do embedded Linux and have done so on several generations of hardware and kernel.

There are a lot of slightly incorrect comments here about the size of the 2.6 kernel vs. 2.4. Once you know what your target hardware platform will be, and which subsystems you need, you can tailor a kernel to be usably small by configuring out the parts that you don't need. Don't need USB, or video4linux? Leave it out. Don't use modules unless you need to control the load order, set parameters, or want to be able to swap hardware platforms, since they take up space in two places (ram disk and kernel memory).

One of the best reasons that I have for the 2.6 family is the new version of ram disk. You can almost trivially generate a run-time ram disk in the new cpio format, with a tweaked init that doesn't need to go to a disk for root, although some kind of non-volatile storage will certainly be needed.

It will be harder and more expensive to find really small RAM DIMMs for the Atom. Get a size that will be manufactured for a while. Same thing with your non-volatile storage. On a current project using a commercial board (including a GUI, commercial multi-language font set, media player, and a kernel with modules so we can move to other hardware, as needed), we've got a 512MByte DIMM for the Intel chip set with room to spare.

Another great reason for moving on is development. Have you tried to run a 2.4-based distribution on computers that you can buy today? It's doable, but painful. If you want to give your developers a (for example) Ubuntu 8.10 distro, which can be easily guested to freeze your build environment, they can run on the same kernel and libraries you do in the target for initial build and test.

If your software does what it needs to do as well as you need it done, why introduce new variables unless you have to?

There are a number of improvements in 2.6 (enumerated by others here) as well as drivers for newer hardware, but unless you actually need those features, the gain for you is zero.

In any event, you're already going through the upheaval of a platform change; that's quite enough for one jump. Once you have everything validated on the new hardware, if you'd still like to move ahead, that is the time to do it.

I'm not sure why anyone would choose 2.4 over 2.6, except for cases of legacy support. There have been numerous security fixes and improvements which Linus has decided not to backport into 2.4. The notion that 2.6 is too heavyweight for embedded systems is just bunk - it will run rather well on CPUs as slow as 40 MHz.

With 2.6, you're going to get the latest drivers and a lot of important new technologies, especially with regard to things like wifi and USB. I haven't looked at the Linux A/V architecture recently, though.

This system is based on an Intel Atom CPU, so it does have SSE and an FPU.

For the main architectures (x86, x86-64, ARM, and maybe PowerPC), GCC has been getting significantly better with each release. For less well-supported architectures (like Hitachi's SH series, an uncommon embedded CPU, or really old architectures like 68k), there's usually an older version of GCC that works better than the current ones.

Given that this is a new(ish) CPU, newer versions of GCC are going to support it much better than older ones.

GCC 3.4 is quite outdated.
2.95 is just plain old. Why not code in Fortran while you're at it?

My development group is also stuck with the gcc 2.9x series because it's the only compiler our toolchain maker (WindRiver) supports for VxWorks 5.x. I'm guessing he's in a similar situation. I can't complain, though -- we've never had an issue with it.

I have had issues with the VxWorks GCC 2.95.3 on several occasions, when the compiler generated incorrect code resulting in crashes and lockups. The C code was correct, but the resulting MIPS assembly was incorrect. Each time, making slight changes to the C code would fix it, e.g. replacing a for loop with a while loop. I say good riddance to VxWorks. The memory management in VxWorks 5.4 was atrocious; we had to replace malloc with DLMalloc and add a method of tracking memory usage on running systems.

Embedded systems encompass a LARGE range of systems. Some of those systems need a filesystem (such as an MP3 player or a NAS server...), some need them less, but it makes some tasks within the space easier.

Not all embedded systems are PICs or similar. Your mobile phone is an embedded device, but I'd shudder to try to code that all with "traditional" embedded techniques and practices. Same goes for a whole host of things that are considered to be embedded.

If you want people to stay with old code to run on their boxes, please leave GCC out of the mix.

GCC has had two major improvements across versions:

1. Better language compliance. It matters *a lot*. And frankly, you're not going to win an argument against that on a techie forum :-) This included a new hand-written C++ parser in g++ ~3.4 that closed out over 100 bugs at once. You don't ignore hand-written C++ parsers; they're complete bitches to do.