Posted
by
Unknown Lamer
on Thursday August 18, 2011 @12:43PM
from the becoming-model-citizens dept.

jbrodkin writes "Linux and ARM developers have clashed over what's been described as a 'United Nations-level complexity of the forks in the ARM section of the Linux kernel.' Linus Torvalds addressed the issue at LinuxCon this week on the 20th anniversary of Linux, saying the ARM platform has a lot to learn from the PC. While Torvalds noted that 'a lot of people love to hate the PC,' the fact that Intel, AMD, and hardware makers worked on building a common infrastructure 'made it very efficient and easy to support.' ARM, on the other hand, 'is missing it completely,' Torvalds said. 'ARM is this hodgepodge of five or six major companies and tens of minor companies making random pieces of hardware, and it looks like they're taking hardware and throwing it at a wall and seeing where it sticks, and making a chip out of what's stuck on the wall.'"

What the desktop is in the PC world, the SoC is in the embedded world. It can even come with RAM and Flash (not on chip, but on package) if you want.

The difference is that the PC environment has over a long time filtered down to a few typical devices for each type. Your network hardware is probably Realtek, or maybe Intel or an embedded AMD chip. Your graphics card is NVidia, AMD or Intel. Your mouse does not matter, because it always talks USB HID, etc.

In the ARM world, you also have standard components, but every integrator makes tiny (and usually pointless) changes that render them incompatible on the software level. Linus is right - this is neither necessary nor sustainable. It is one of the reasons that you can get software updates for a five-year-old PC, but not for a six-month-old smartphone.


And perhaps most importantly your main system bus is either PCI or something that looks like PCI to software and by accessing the configuration space of that bus you can read the device IDs of everything on it whereas with ARM the software is expected to know the complete hardware setup in advance.


Amen to that. It seems like every vendor's pet ARM variant, and every stepping of every variant, has tons of semi-custom value-add features that they have to add because if they didn't differentiate their variant from everyone else's, you might buy the other guy's device. There's usually no way to tell what's present and what isn't, so you end up creating these complex expert systems that "know" that if you're using vendor X, product Y, stepping Z, then this set of additional functionality is available.

So I wish I could agree. But ARM is following the same diversity explosion and Darwinian selection as FOSS, and for the same reasons. Out of this chaos comes the wonderful bounty of choices in our modern digital buffet. PCs have become stagnant. If you want one, they are still available, but they don't really do anything more than they did 15 years ago.

Yet if you look at the FOSS projects with any real market penetration (outside the FOSS world), they are all the market leaders: Firefox, Apache, MySQL, OpenOffice, and so on. Yes, KOffice exists on Windows, but show me one non-Linux type running it...

Right now ARM is a bunch of FOSS projects with no clear leader. Once there is one, it will get the mindshare, and hence the support. Then others will be compatible so they can use the ecosystem, and things will get better.

Sure, sure. Play the old cynic. The one under my desk is running particle simulations interactively using hundreds of GFlop/s. Looking back 15 years, you would have had the first 3dfx board and maybe a version of Quake. I think they have done quite a few new things in that time. Improving vector processing performance by four orders of magnitude has resulted in new applications (for the PC; these things would have been available on mainframes previously)...

You're missing the point. He's not talking about add-ons like network adaptors, he's talking about fundamental core bits of hardware, like interrupt and DMA controllers, which need to be configured by the kernel before it can even bring things like serial ports online for a console.

Every PC, except some early Intel Macs, is capable of booting PC-DOS 1.0. It has interrupt controllers and device I/O configured in the same way and accessible via the standard BIOS interface. You don't get great performance if you use these, but you can bring up a system with them and then load better drivers if you have them. With ARM, every SoC needs its own set of drivers for core functionality long before you get to things like video or network adaptors. Oh, and on the subject of video, you don't even get something like the PC's text console as standard, let alone a framebuffer (via VESA).

It almost brings to mind the Linux desktop situation. Sure, the underlying engine (Linux kernel, drivers, etc.) is the same across distros, like the basic ARM processor instruction set is the same for ARM. But all the glue that holds a system together is different: choice of desktops, sound systems, desktop interprocess communications. Every distro puts together a Linux 'system' from the Linux kernel, X11 and various combinations of these other software components, the way every ARM box generates a system.

In a nutshell, essentially everything in and out of a PC is standardized, and when it's not, drivers usually provide abstraction to some standardized layer to ensure compatibility. This is true, and is probably the point you're trying to make, but it ended up coming out pretty badly mangled. In fact, the sheer size of the falsehood in your claim is astonishing: the entire point of the PC platform is that it supports a massive number of different hardware configurations that all work with the same x86 (amd64) code.

This, like Linus's comment, is spoken by someone who cannot have done any ASIC/embedded development.

There are standard graphics pipelines but you will integrate them onto your SoC with direct access to your SDRAM. This removes any standard bus architecture. You may even write your own 3D pipeline. You will probably write your own SDRAM controller. You will add your own peripherals. Why do this rather than plug in standard components? Well if you just stick off the shelf stuff together then how is your product any different from your competitor?

That's exactly the thinking of the device vendors that's got us into the mess we're in now. It's fine if you're a device vendor, it's OK if you're a manufacturer who can ship 100M units with exactly one ARM-based ASIC in it that'll never change in its lifetime, and it's a nightmare if you're doing anything else. For example, to do something as basic as add TCP offload on ARM ASICs to our stuff, we'd have had to add custom code for every single fscking vendor's bizarro TOE concept. In the end we did it all in software.

Well, put yourself in our shoes. You've got to remember an ATI chipset is not a standard; OpenGL is the standard, so why shouldn't I be able to develop my own pipeline as long as it complies with the standard? SDRAM is a standard; ARM's implementation of their controller might be targeted for their processor but not for the 3D pipeline you're building, so by building your own you can do better. Trust me, an ARM processor and a 3D pipeline need very different things from an SDRAM controller.

All of which is, more or less, interchangeable. The Intel x86/IBM PC platform, despite its many flaws, has reached a stable point where there are well accepted and commonly implemented standards for the boot process, the storage formats, the hardware interfaces, etc. ARM, despite a "purer" and "simpler" instruction architecture, lacks much of this common surrounding infrastructure.

And that is called innovation. The original PC "standard" sucked. You had to assign memory spaces, interrupts, and I/O ports when you added cards. Not every card worked with every PC; PC compatibility was hit or miss. The magazines would use Lotus 1-2-3 and Microsoft Flight Simulator as the benchmarks: if both of those ran, then it was PC compatible. Of course, if you bought anything but a real IBM PC or AT you could still find software that didn't run. Then you had the x86 CPU, which was also terrible: segmented memory.

ARM descended from Acorn Computers, who provided the Archimedes computer along with RISC OS. They seem to have bought out every possible semiconductor design group and merged them together.

Remember those times in the early-to-mid 1990s. As mentioned, there was a vast variety of different consumer PCs, along with experimental operating systems like TAOS, a Java-like system with cross-platform compilation and dynamic linking.

Graphics boards were upgraded every six months, just as they are now: CGA, Hercules, EGA...

Maybe it was just me then - I got my first PC in 1989 (a 20MHz Dell 310), got an upgrade to a 256K Paradise VGA board about six months later, and moved onto a TIGA board with a few megabytes of memory and a 32-bit framebuffer another six months after that. Another six months later, I had employment working on a custom PC.

Had great fun programming the VGA directly; doing fun things like changing the character set, trying out Mode X from Dr. Dobb's, with 256 colors and page flipping, implementing scrollable viewports, palette editors...

By 1990 we were already nine years into the PC lifecycle. In those years we had gone through four generations of mainstream video standards, three generations of CPUs, and were on the third generation of Windows - frankly, the first usable one, but still version three. It was well after the standard was set. The thing was that even with all that development on the "standard", it still sucked to high heaven. The Mac, Amiga, and Atari ST were all still better, and they were all from the mid 80s.

I always remember that time in 1986 when our lab PCs had CGA graphics, EGA was high-end, and even the Atari 800XL could do 256 colors using some HBIs. All the artists I knew used Amigas for rendering characters and levels.

Not really. If I take a hard drive from my Ubuntu-running PC and stick it in a totally different PC, it will boot, and even X may come up. (And to cover the different level of your analogy, I can copy my "CrayonPhysicsDeluxe" folder from my Ubuntu system to a Red Hat system and it will run.) If I take a boot image from one ARM device and stick it in another, it will hang.

In the 80s, you had machines made out of standard 3rd party components. Your CPU was the same as the next guy even if he got his computer from a competing brand. This is why an Atari could emulate a Mac. The actual CPU was a particular part that everyone bought from the same place. This is why you can have versions of Linux targeting those 80s/90s era machines. A 68000 in one machine is the same as the next, or a 6502, or a 68030.

The old home computer landscape seems positively orderly by comparison.

The CPUs were standard, but little else was. Sure, the C-64 and Atari 800 both had a 6502-based CPU, but they also had different video chips, different sound chips, different and mutually incompatible disk drive formats and serial communications protocols, etc.
One nice thing was that even though each company used their proprietary chips, they didn't feel the need to hide implementation details from users. If you wanted to know exactly what each register in the VIC-II chip did, it was right there in the manual.

Apple was the only one that had a "mutually incompatible" format. The rest, not so much.

While there were a lot of custom chips, there were also a good number of stock parts as well. This included floppy controllers, IO controllers, and sound chips.

Now the bit about everything being documented is a good point. This is how it is that I am still somewhat familiar with the parts that were in my old machine. This probably made the 030 Linux versions a lot easier to deal with.


Later on, perhaps. During the rise of 8-bit home computers, no. Stock parts for these functions didn't come about until there was an industry to demand them.

Actually they were available early on - they just cost dozens of dollars just for a floppy controller. That's why most mass market computers used self-made, cheaper solutions, where the design cost was easily outweighed by the cheaper construction.

It's a complete mess and currently a huge barrier to development. You don't even have to get into coding for the kernel -- just getting a toolchain for your particular flavor of ARM is enough to turn away lots of developers. We're talking several DAYS spent figuring out how to produce a goddamn libgcc.a that has the correct endianness, MMU-or-not, and doesn't hose the system because it uses an undefined instruction to implement prefetch()... and then another night trying to figure out how to get that libgcc working.

They're not trying to cut corners for the hell of it, but for performance, power usage, and other actual engineering reasons.

You just can't build smartphones and tablets with that same common architecture, or else you're adding too many chips and circuits you don't need.

It's no big deal that PCs ship with empty PCI slots and huge chunks of the BIOS and chipset that are rarely if ever used (onboard RAID, ECC codes, so on and so on), but when you're trying to put together a device as trim and minimalist as possible, you're going to end up with something slightly different for each use case.

He's acknowledging that, but at the same time discounting the advantages of having a minimalist option. I don't see any problem with having a heavier-duty ARM available, but suggesting that there's no value in having chips with just the necessary circuits is silly.

Unfortunately, one of the advantages of ARM is that the chip maker can heavily customize what is on the SoC. Most of them don't mess with the core. I don't think the different makers intend to have hostile features, but given time constraints for development, they can't check with other companies (some of them competitors) to see if their optimization hurts others.

I think perhaps the biggest complaint is that ARM lacks a unified bootstrap and hardware bus. As in, there is no BIOS like on the x86, nor is there PCI or similar that one can query to get a dump of device IDs. So for a lot of the SoCs you basically need to know what is on there before you start sending signals.

Right, because do you want to boot your router from ROM? Or your IP phone from flash attached over what will later be GPIO? Or your mobile phone from SD card, your tablet from an embedded SSD, your MP3 player from its flash chip over a custom interface, your set-top box from its hard drive? What if you have one processor booting another? What if it needs to do a first-stage loader from ROM and then grab the image over Ethernet (using your own Ethernet implementation, of course)? What if it's not Ethernet but SPI?

That is the reason we have drivers. Unfortunately, the ones supplied by embedded manufacturers tend to be kinda crap in my experience. There is also a lack of solid APIs for many embedded system features, whereas the desktop ones are quite comprehensive and mature now.

At the place I work some guys are making a handheld logger using Windows 7 Embedded, and the support from the manufacturer is terrible. It took two weeks and sending test code to their office in Israel just to get the vector processing features of the chip working.

This whole article is bullshit. Is everyone forgetting the varying instruction sets of the 386, 486, Pentium, Pentium 2-4, Xeon, x86-64 etc., etc. Plus all the millions of Northbridge and Southbridge chipsets from Intel, Via, etc., plus all the different busses through the ages, plus 92 different kinds of temperature monitoring, USB, ATAPI, ACPI...

And we're badmouthing ARM for being a constantly moving target? And that manufacturers are throwing shit at the wall? Huh???

And yet, you can run, say, DOS on all of those computers. Critical devices will support a "generic" instruction set. Any VGA card will support standard VGA instructions, disk drives can be accessed using standard IDE interface (SATA controllers can emulate it). SCSI drives can be accessed using INT13h, the controller BIOS takes care of it. Keyboard/mouse use one of the few interfaces (and USB can emulate PS/2).

Now, when you get the basic system running, you can load drivers to access all of the features of the hardware (for example, different resolutions of the VGA card).

For ARM you have to recompile the kernel for most of the chips and boards for it to even boot. So, how would you create a way to install an operating system from media without using another PC?


A common firmware interface (like BIOS, OF, or EFI) and something like a device tree doesn't require extra chips. At most maybe it's a few KB of flash.
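For reference, a device tree is just a small data structure describing the board, compiled into a flat blob the kernel parses at boot. A hypothetical fragment (the board name and addresses are made up for illustration; "ns16550a" is a real, widely used UART binding):

```dts
/ {
    compatible = "acme,example-board";   /* hypothetical board name */

    serial@101f1000 {
        compatible = "ns16550a";         /* generic 16550 UART binding */
        reg = <0x101f1000 0x1000>;       /* register window: base, size */
        interrupts = <12>;
    };
};
```

With something like this, one kernel image can bind a generic 16550 driver to the right address on any board that ships such a description, instead of having the address compiled in.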

Is Linus Torvalds (implicitly, at least) criticizing ARM because it is open and therefore anyone can create their own version of it? As opposed to x86, which has a restricted licensing set (AMD/Intel/Via... Via still exists, right?)? Because that is, AFAICT, exactly why ARM is so varied: anyone can roll their own. With the result that many do.

Kinda ironic (and I do mean *ironic*) that the creator of Linux would be complaining about this. I guess he is finally discovering why, in some cases, a regulated and restricted environment can be good (note: if x86 were a monopoly, I would not be saying that, but AMD and Intel are fierce competitors, so it isn't at all monopolistic). Open environments often become "hodgepodges" and lend themselves to non-standardization. Closed ones don't (well, they can, but generally they don't, and definitely not as fast as an open one) and can be easily standardized (witness how Intel accepted AMD's x86-64 instruction set for consumers over their own IA-64 system). The result is, in the case of CPUs, good for consumers.

Note: I am not proclaiming the virtues of proprietary systems, or claiming they are better than free and open ones. Just pointing out the irony of the situation.

Linus doesn't have the RMS/ESR stick up his ass about "open." Linux was built out of necessity, because no good x86-based *NIX or BSD was available. If HURD had gotten off the ground, Linus wouldn't have bothered with Linux.


ARM is not any more "open" than x86. To sell chips implementing modern versions of either instruction set, you must obtain a license from at least one company, and nothing prevents you from extending that instruction set. Many companies have implemented (and often extended) each set over the years, though there are fewer implementing x86 now than ARM. There are probably fewer implementors of x86 because it is much more complex.

I think Linus is criticizing the lack of a common platform surrounding ARM rather than the instructions themselves. The instruction set of x86 chips has grown a lot, especially with x86_64, but the way you boot a PC hasn't changed much for example.


Open? How is ARM open? ARM is a very popular but *licensed* core that you must pay a good deal of money to license. According to the Wikipedia article on ARM, in 2006 it cost about $1,800,000 per license.

There are? OpenCores has one beta VHDL implementation (it hasn't been updated since December 2009) that I can find with a quick search- everything else I find leads to a dead-end. I don't see any ARM cores listed on opencores that have been ASIC proven.

While there may be some designs available, I don't think any of the ARM implementations that are in the Linux kernel are based on an open core. If you are aware of an open core that can run Linux, I would appreciate a pointer.

I'm 100% certain that if you ask ARM to license 10, 100 or even 1000 cores at $0.11 per core, they won't even talk to you. Developing a device around an ARM core is expensive and has high start-up costs. Remember that $1.8M is the average cost of a license; some people pay more, some less, but ARM Holdings is a for-profit company, not a charity. They are out to make money. It is not in their business interests to license the core to you if they aren't going to make money off of it, and on average, they made $1.8M per license.

Going back to the original point I was trying to make: ARM is not "open" in any sense of the word. You don't get the core unless you have a lot to invest, and we are a long, long way from someone using their MakerBot to whip up a new processor. ARM has valuable IP; they don't hand their info to just anyone on the promise that "we're going to make lots of MONEY!" They want to see a return on their investment. If you're not going to make very many, you probably have to pay a higher licensing fee.


So something can't be "open" unless you can do it at home on the cheap? This argument is silly.

That is very much up for debate, but you'll find very few people who will call something with a $1.8M license fee open. No matter how you cut it, you are not going to be able to license the ARM core without giving something on the order of >$1M to ARM Holdings. And they won't give it to you on spec. Personally, if I'm charged anything more than a reasonable 'media fee', I don't consider it open. Would you call Windows "open" because it doesn't cost a lot to license?

Actually, it is the other way around. The x86 platform is mostly based on open standards. There are more 486-compatible clones than you may realise. ARM, on the other hand, is strongly proprietary; there are no clones at all. The ARM fragmentation has occurred because of a lack of open standards: while the PC guys were standardising PCI, USB and VGA, every ARM licensee was reinventing the wheel to give their own SoC the features that nobody else had. While the core ISA is always the same, the system architecture differs on every SoC.

The problem I think is that the abstraction of the CPU is seen differently in the ARM community from the x86 community. So Linus is frustrated that the ARM side doesn't see the CPU as "processor plus memory management plus bus management plus system control".

ARM is going after a different market than the typical desktop/server and so has different needs, whereas Intel, AMD and others want to be very compatible, and mostly plug-compatible at the software/OS level, so that you don't have to have different versions of the software for each machine.

The reason why x86 is so unified is because they're all in PCs. You only have the one form factor to shoot at. So of course the CPUs will be highly similar.

ARM fills a different niche. You see ARM chips in tablets, phones, industrial control, routers...all over the place. Of course ARM chips will vary more wildly. They're trying to hit more targets. And those targets have unique and tightly defined parameters. That will put them at odds with other designs.

I mean hell, if the x86 has it all figured out so well, then why isn't your cellphone using one?

Uh, x86 is everywhere. PCs. Supercomputers. Microcontrollers. Embedded systems (you can still buy i386 chips because a lot of embedded systems like traffic light controllers use them). There's even been a few game consoles using it (original XBox and the Wonderswan series). Quite a few of them don't follow the PC standard, and that's fine. But there should still be a standard for common uses - even just covering smartphones, tablets and netbooks would be a major improvement over the current chaos.

ARM's everywhere. Look at most of your consumer electronics... odds on, you're looking at an ARM in most of them. There's at least 1-2 ARMs in your x86 machines as well, doing tasks you wouldn't relegate to the x86.

Well yeah you can find x86 a few other places. I was working on a grocery store scale that was x86. But the thing was from an architecture point of view a PC in a funny box with a scale on top. Standard Linux distros worked on it unmodified. I tested that personally. The thing ran Ubuntu without a hitch.

And yeah, you probably would expect to find a 386 in traffic lights. Traffic lights are older than ARM chips, so you'd expect that.

What standard would you propose? What standard could cover a CPU that you find in everything from routers to car dashboards? ARM is meant to be adaptable to corner cases. How would you fence that without hindering development?

Once again, you misunderstand me. I'm not suggesting we make a standard for any possible ARM device. I'm suggesting we make a set of standards for PC-like ARM computers - tablets, smartphones, netbooks, maybe even desktops and servers. That much is possible - x86 is able to work passably well in each, and it has a rather outdated standard not designed for those things. If you designed the standard to fit ARM's strengths (low power consumption, low cost), you could come up with something that works just as well.

I'm not trying to misunderstand you - honest. I just don't see what your standard would fix. Code portability isn't really a huge problem on ARM. I do a lot of Windows CE work. And 99% of the code that runs on one platform will run on another. The base Microsoft binaries linked during a sysgen do not change from platform to platform, regardless of many design choices. Just select ARMV4I and you're good to go.

Sure, you could make a standard that says "This is what an ARM tablet is."

One of the reasons ARM has succeeded over Intel in the embedded space is exactly because it's a hodgepodge in terms of implementation. ARM just designs the chip; they don't make it. They leave that up to others, who in turn support their own chips by providing kernel patches - which has been amazingly successful for Linux (and, incidentally, for the non-Linuxy iPhone as well).

Not to talk trash - he definitely understands the kernel and software, but not the nuances of hardware development.

A lot has changed since then, but ARM has done nothing but help Linux. If your chip vendor has a poopy Linux implementation, they'll sell less. If they have a great one (and great documentation), they'll sell more. TI is a pretty good example of an awesome ARM/Linux implementation, and... there are less awesome examples.

How do you define "help Linux"? The popularity of Linux on ARM has produced a giant, acrimonious fork which is not helpful to the community in general. Obviously, this wouldn't have happened in the first place if Linux and ARM weren't good for each other, but for the community to function well, things need to change. Linus is hopeful that this will be resolved in four or five years as a result of his and others' efforts to fix the very problems he's complaining about. The problem is not so much "poopy" Linux implementations as the fragmentation itself.

It's not desktop vs. mobile, it is manufacturer X vs. manufacturer Y. ARM is just the core- the company doesn't make chips. They license their core to people who design with it. What is fragmented is everything outside the core- that is, the value that each licensee adds to the core to make their own product. They're embedded processors- they get surrounded by many peripherals such as analog to digital converters, interrupt controllers, serial ports, memory interfaces... the list goes on and on.

But apart from maybe "memory interfaces", the other things you mentioned (like analog-to-digital converters) wouldn't be of concern to the average programmer, who would still maintain cross-compatibility across devices, assuming he didn't write for special things like cameras, sound recorders, networking, etc.

All the core does (by itself) is math/logic functions, conditionals, and moving data around. *EVERYTHING* else is done by a peripheral. Embedded processors are (almost by definition) packed with peripherals. We get very used to these peripherals, but they're there, and if you want your computer to do anything other than serve as a way to deplete batteries, you've got to send data to/from a peripheral. Every different ARM variant has a different set of peripherals and different ways to use them - hence the fragmentation.

ARM is a bunch of different companies all contributing code to make their own device work. The problem is that very little effort is being spent to extract all the common bits, so you end up with many slightly different implementations of the same thing.

The problem is that microkernels have always been harder to develop and slower (if not done carefully). And not all "board features" can be separated/exposed from/to the kernel easily when done externally.

For instance, paging and memory management is usually something that would go in the kernel, even a microkernel. Do you know how many different ARM MMU interfaces there are, how many ARM processors don't have an MMU at all, or implement only a subset of some other MMU? And then there are the dual-core processors now as well. I wonder how many different interfaces there are for controlling multiple ARM cores or ARM processors.

Basically, ARM is a cluster fuck for OS development. They need some form of standardization if they ever hope to get widespread OS support. Linux is probably only supported by most boards because the board manufacturers submit patches to the Linux project. By widespread, I mean each board supporting a minimum of 3 different operating systems, for instance Windows, Linux, and something proprietary or a BSD.

The embedded world doesn't have much trouble with this. For QNX, there's the kernel, which is the same for all CPUs with the same instruction set, and a "board support package", which has the driver programs for a given board or variant.

Linux is a monolithic kernel, and so it has to be hacked all over the place to deal with architecture variations. Linux lacks a clean conceptual model of operating system vs. board support.

Linux supported many architectures before ARM, so Linus's complaints don't come from a purely PC mindset. You also seem to be ignoring the fact that Linux is and has long been a major part of the embedded world. How many smartphones run QNX?

Microkernel versus monolithic kernel has nothing to do with board support packages.

Linux has the equivalent of "board support packages" -- they can be as small as one file, but are more often just a handful: a C file that describes memory and I/O mappings and other peripherals that cannot be safely detected at runtime, sometimes a default configuration (defconfig) file, and maybe some other pretty small driver-like files that manage some of the mess that Linus was talking about. (For example, the BeagleBoard has three C files: one to define the board, one to manage LCD video configuration, and one for audio setup; it shares a defconfig with every other board using an OMAP2/3/4 CPU.)

That is in sharp contrast to my experience with commercial RTOSes, where a BSP might consist of a dozen C source and header files, plus another half-dozen configuration files. For the boards I have used, Linux has the smallest set of board-specific files, a microkernel RTOS has the next smallest, and a Unix-based RTOS has the largest. Linux doesn't call its board-specific file sets BSPs because they are (a) too small to really call a "package" and (b) not controlled and shipped separately. (Linux is not about locking down what the end user can do, so there would be no point in having BSPs for officially supported boards.)

The problem is not that adding support for a new board to Linux is too hard; in fact, it's almost the opposite. There are already tens of slightly incompatible boards to support, and every time a company makes a new one, they don't even try to stick to any standard (not that a real standard even *exists*), since it's very easy to just add new code to Linux. See this LKML thread [gmane.org] for Linus's description of the problem from some time ago.

Using a microkernel doesn't help at all; you still have to code for all of the slight incompatibilities, regardless of whatever differences in logical organization.

and so it has to be hacked all over the place to deal with architecture variations.

Bullshit. Linux abstracts such details through various standardized functions and macros. If you'd bothered to pull your head from your ass and take even a quick look at the Linux source tree, you could clearly see the architecture variants are cleanly broken out.

Not only is your post NOT "Interesting", as it was modded, it is, factually, "Troll".

I'm pretty certain he'd prefer a consortium that produces a common set of standards, but he raises an important point.

Choice costs.

It's wonderful that you have a massively wide variety of choices, unconstrained by a central authority, but don't forget that the cost of having that choice is going to be significant. There's a reason that almost all lines of business tend towards either a few big winners or, if the product is essentially identical, commoditization.

It's why I often wonder why Linux users dream about taking over the desktop. If that did occur, it would mean a drive to lower cost that would result, almost inevitably, in the wholesale adoption of a single choice, reducing all the other choices to total irrelevance.

It's why I often wonder why Linux users dream about taking over the desktop. If that did occur, it would mean a drive to lower cost that would result, almost inevitably, in the wholesale adoption of a single choice, reducing all the other choices to total irrelevance.

But when that choice goes goofy, you can change it quickly. Like the watershed away from KDE for a while. Next it will be away from Unity and GNOME Shell, for a while. Then the leaders either shape up or fall aside, like XFree86. http://en.wikipedia.org/wiki/XFree86 [wikipedia.org] You can have a market leader (a good thing for standards) and still have choice (a good thing for freedom).

I'm not so certain. If the business community settles on a standard, then instead of the Linux community being composed of a dozen different distributions, all with roughly equal mindshare among contributors, you end up with only one to which you contribute if you want to be at all relevant, which means the alternatives wither from lack of customer and, eventually, programmer interest.

My thesis (speculation to be sure, but built on observation), is that you *cannot* sustain that level of choice in a m

So far we have POSIX, the Linux Standard Base, and the Free Desktop "standards". Standards are always optional, whatever the "business community" says. There are many categories of software, but there aren't terribly many competing projects per category. As for distros, there are the Debian-based distros and the Fedora/RHEL-based distros; everyone else is more or less "niche", and generally not intended for mainstream consumption.

"Relevant" in the FOSS world means your code compiles on new systems a

It's why I often wonder why Linux users dream about taking over the desktop. If that did occur, it would mean a drive to lower cost that would result, almost inevitably, in the wholesale adoption of a single choice, reducing all the other choices to total irrelevance.

I don't understand this logic. If the hardware was standardized, anyone could make the chip, and someone would find a way to compete (speed improvements, power consumption, ...).

The whole deal with ARM standards is probably going to be solved with Windows 8 (unfortunately) if it sticks to the promise of running on ARM. Microsoft will step in and say "Here is what we will support" and the chip shops will fall in step.

I don't understand this logic. If the hardware was standardized, anyone could make the chip, and someone would find a way to compete (speed improvements, power consumption, ...).

My comment about the desktop was unrelated to ARM. I was trying to point out that if you become a significant player in a market where cost rather than flexibility is the main factor (i.e. the mainstream desktop), you are likely to *lose* a lot of your current choice.

Your point about Windows 8 is a very good one. We may lose a lot of choice because of it. On the other hand, much reduced costs to enter a now much larger market may well boost participation significantly.

Microsoft already has two operating systems that run on ARM, Windows CE (aka Windows Mobile) and Windows Phone, so I'm not so sure about that. Besides, iOS and Android are the market leaders in the ARM space; that probably isn't likely to change any time soon, and Windows 8 probably won't run on any of those devices.

More like: wouldn't it be nice if they would at least occasionally meet, talk shop, and perhaps agree voluntarily to be a bit more compatible. That, and don't go making changes for the sake of change. Pick a design that works and stick with it.

Code size doesn't really apply--this is a discussion about Linux. If you're running Linux, you're not counting KBs. Maybe you're counting MBs. You may only be counting GBs (the smallest iPhone was 8GB). And ARM does provide a timer, interrupt controller, and memory controller. Not all customers use them, and only the interrupt controller has a generic "architecture" which could be said to apply to any interrupt controller. It's ironic, though, since I think everyone uses the ARM interrupt controller i

Thus far ARM has not focused on system specifications other than the basic binary code interface. The Linaro group http://www.linaro.org/about-linaro/ [linaro.org] has now started developing a more system-level approach and a concerted effort to get more consistency with upstream engineering. The situation has been a bit confused until now, but it should get a lot better with the Cortex-A9 and A15 systems for Android, Linux, and Microsoft.