Linux 3.7 released, bringing generic ARM support with it

Linus Torvalds has officially announced that version 3.7 of the Linux kernel has gone stable, and that means good news for developers who work with ARM-based CPUs: among its other changes, Linux 3.7 is the first Linux kernel to include generic support for multiple ARM CPU architectures, reducing the amount of effort required to get Linux-based operating systems running on phones, tablets, and ARM-licensed developer boards like the Raspberry Pi.

At present, every time a developer wants to port a Linux system to an ARM system-on-a-chip, they have to build a new kernel to support that processor's particular architecture. Additionally, differences between ARM chips from different companies mean that porting that same Linux-based OS to another ARM processor—for example, taking code running on a Samsung SoC and making it run on a Qualcomm SoC—requires yet another kernel. The work required to maintain these separate kernels for each ARM SoC is a major roadblock for the architecture compared to the x86 chips traditionally used in desktops and laptops, and overcoming this issue will be a major step forward for Linux and its forks, including Android.
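
To make the per-SoC burden concrete, here is a hedged sketch of what building two kernels looks like today; the defconfig names (`exynos4_defconfig`, `msm_defconfig`) are representative examples from the 3.x kernel tree, and exact names vary by kernel version:

```
# One kernel per SoC family: each defconfig produces a zImage that
# boots only on that vendor's chips.
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- exynos4_defconfig   # Samsung
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- zImage

$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- msm_defconfig       # Qualcomm
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- zImage
```

With multiplatform support, the goal is for a single configuration to produce one zImage that boots on several of these SoC families.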

This work mirrors the effort that Microsoft also exerted for Windows RT, which likewise supports many different ARM architectures with the same kernel.

Tablet and smartphone users shouldn't get too excited just yet. As we originally reported, the current release's list of supported ARM architectures focuses overwhelmingly on server-oriented products—chips from Calxeda, Marvell, Altera, and Picochip are currently supported along with ARM's Versatile Express developer board, with support for more chips coming "in the next few releases" according to an October 2 post by Linus Torvalds. Additionally, it may yet be some time before this generic ARM support is worked into Android, which typically lags a few kernel versions behind. For Linux, though, this is the first step toward making ARM support more like x86 support—it may soon be possible to install operating systems on ARM-powered devices without the messy work of architecture-specific porting that is currently the norm.

Promoted Comments

Keep in mind though that devices like smartphones need more than just processor support to function, and typically the major compatibility and porting headaches are there. Not to say this is unimportant - just that the average smartphone won't magically run Debian and be a functioning phone any time soon. Not without open hardware drivers at least.

28 Reader Comments

Otherwise, the situation is a bit different on ARM and x86, where most of the devices are discoverable (on buses like PCIe or USB). This is not possible in the embedded world (at least for most devices). But in order to remove the device declarations from the kernel C code (and thus from the compiled code), some board/system-specific config files called device trees are used. And these still need to be unique for every board/system. They are just easier to write and maintain than kernel C code.
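
A minimal device tree source file looks something like this; the board name and addresses here are hypothetical, though `arm,pl011` is the compatible string for ARM's common PrimeCell UART:

```dts
/dts-v1/;

/ {
    model = "Acme Example Board";      /* hypothetical board */
    compatible = "acme,example-board";

    /* A fixed-address UART: the kernel cannot probe for this on ARM,
     * so the device tree declares where it lives and how it interrupts. */
    serial@101f0000 {
        compatible = "arm,pl011";
        reg = <0x101f0000 0x1000>;     /* base address, size */
        interrupts = <5>;
    };
};
```

The kernel then matches each node's `compatible` string against its drivers instead of carrying hard-coded board files in C.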

Or even better - I would assume there is scope for the Device Trees to be passed in binary format from the bootloader/ROM code to the generic ARM kernel? That way HW vendors can ship new boards with U-Boot or something which exports an accurate DT, and the kernel "just works". Without someone having to write the DT and have that built into the kernel binary.
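
That is in fact how the flow is meant to work: the source is compiled into a flattened device tree blob (DTB), and the bootloader hands the blob's address to the kernel. A hedged sketch with U-Boot (file names and load addresses are hypothetical and board-dependent):

```
# Compile the .dts source into a binary blob with the device tree compiler
$ dtc -I dts -O dtb -o board.dtb board.dts

# In U-Boot: load kernel and DTB, then pass the DTB address to bootm
# ("-" means no initrd)
U-Boot> fatload mmc 0:1 0x80200000 uImage
U-Boot> fatload mmc 0:1 0x81000000 board.dtb
U-Boot> bootm 0x80200000 - 0x81000000
```

A vendor shipping an accurate DTB in flash would let a single generic kernel binary "just work", exactly as suggested above.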

The rate has been steady for the past few years. Since the early 2.6 series when they developed git, actually - 2005 or so. One kernel every 2-3 months.

They only changed the numbering from 2.6.X to 3.Y.

And the "official" reasoning according to Linus Torvalds?

Linus Torvalds wrote:

I decided to just bite the bullet, and call the next version 3.0. It will get released close enough to the 20-year mark, which is excuse enough for me, although honestly, the real reason is just that I can no longer comfortably count as high as 40.

One of my coworkers has some kind of Ford Hybrid. It has a Microsoft system powering it. He's crashed it twice since he's owned it, which doesn't stop the car, but all the displays go blank (including the speedometer). The fix is to shut off the car and start it again. That scares me to no end, although he *is* a software engineer, so he might have been fooling around and doing something a normal consumer might not have tried.

No, officer, I actually don't know how fast I was going.

My speedometer crashed and I haven't had a chance to reboo-...I mean, restart my car yet.

The core of the new ARM work is the move to device trees, which describe not just the CPU but what peripherals are available. This is necessary on ARM because, unlike x86, most ARM architectures don't have a way to probe for devices. LWN has a good summary of the work here:

No, NOT "like Windows RT before it". On Linux for ARM legacy apps are actually supported, and you can run everything in Linux on ARM machines. You can't do that with Windows RT.

No, you can NOT run everything. The developers still have to port over their code so it's ARM compatible (or recompile it if it already is). This will probably be done for 99% of the core infrastructure used by most people, but it's still not any more magic than it was for RT.

Can you really just re-compile programs written for previous Windows versions and have them work in RT? On the Linux side, "will probably be done" is more like "has already been done," since many distros (especially Debian, which is forked for many other distros) require cross-compilation to officially supported architectures for pretty much everything in their repositories. Even if they don't require it, the people porting the distro to a new platform usually handle it; it's so much easier when you have explicit permission to do so under open source licenses, which is the majority of software for Linux. The same policies from both the OS maintainers and the app developers, let alone the history, cannot be said for Windows RT applications. Linux can draw on its x86 app legacy almost immediately upon release to a new platform, but Windows cannot and probably

On the Linux side, "will probably be done" is more like "has already been done," since many distros (especially Debian, which is forked for many other distros) require cross-compilation to officially supported architectures for pretty much everything in their repositories.

"Compiles" and "works" are two quite different requirements. Some software on some architectures is tested (especially if it has good test suites and the package uses them), but that's by no means all. It's quite a common occurrence for software to exist in Debian's repositories but be totally non-functional on the less popular architectures (with the rise of the Beagle Boards, Raspberry Pi, and co, ARM has gotten far more attention than it did in the past, so it should be in rather good shape).

I've had the displeasure of running into this situation in the past on the PowerPC architecture. Debian doesn't have the manpower to test every version of every package on every (supported) architecture... breakages will occur.

Pure compiled language code that is supported by both ARM & x86 compilers with minimal mods for differences in architecture will port well. Code containing x86 or ARM assembly modules will require new code to replace the 'foreign' assembly code or an emulator/translator module to execute the unsupported machine language. This also applies to CPU specific ARM/x86 extensions or other variation from the base assembly language. In the x86 world this is handled by libraries that include emulation of the various extensions that are not universal across x86 chips or support of only a base version such as 8080/i386/i486/i686/AMD64 etc. with a non-compatible warning for CPUs that do not support the required opcodes.