
bigwophh writes "ARM launched its new Cortex-A5 processor (codenamed Sparrow) this week, and while it's not targeted at the top end of the mobile market, it is a significant launch nonetheless. The Cortex-A5, which will likely battle future iterations of Intel's Atom for market share, is an important step forward for ARM for several reasons. First, it's significantly more efficient to build than the company's older ARM1176JZ(F)-S, while simultaneously outperforming the ARM926EJ-S. The Cortex-A5, however, is more than just a faster ARM processor. Architecturally, it's identical to the more advanced Cortex-A9, and it supports the same features as that part as well. This flexibility is designed to give product developers and manufacturers access to a fully backwards-compatible processor with better thermal and performance characteristics than the previous generation."

Speaking (well, typing) from a PPC Quad G5, having watched the happenings in the OS X community/developer scene since the Intel transition was announced: if Intel one day manages to make the Atom (x86) run at the same low power as ARM-licensed CPUs, ARM is doomed.

Why? Compare the compile process of an open source multimedia application on PPC and on Intel. See the "bonus" stuff Intel chips get? Every kind of optimization is available, far more cheaply, on Intel x86/SSE. Trust me, I am more amazed at Intel's developer/development/applicati

Acorn Computers tried in the '80s and '90s. The ARM processors were faster than their x86 rivals, and their OS was years ahead of the likes of Windows and Mac OS. As you say, some monopolistic software company would never allow ARM to take off. Luckily, ARM is now the most common architecture on the market.

It's sad x86 is still here; the platform should have been done away with years ago.

They were also much cheaper. I remember A3000s being under £100, when the cheapest PC that you could actually use seriously was at least £500, and probably closer to £1000. If you wanted a hard drive, it cost a bit more, but most RiscOS software at the time could run from floppy. If you got a 70MB or so hard disk (when PCs typically came with 250MB+) then you could store all of your applications and data on it easily.

The success of the x86 is mind-boggling considering all the true innovation that has been happening around it for 3 decades. Can its success be attributed to nothing more than Intel's fabrication capabilities and M$ support? Even Intel's shiny new Nehalem architecture is not much more than an updating of the DEC Alpha (ditto for AMD, but their designs, at least, have been based on it for 10 years).

Even Intel's shiny new Nehalem architecture is not much more than an updating of the DEC Alpha (ditto for AMD but their designs, at least, have been based on it for 10 years).

I'm shocked at this claim. Back in the day, Byte Magazine used to dissect processor architectures in a way you rarely see any more, apart from anything written by Jon Stokes over at Ars. Realworldtech picked up the torch, and I followed it for a while; smart guys, but you need a large Kool-Aid division factor to hang there.

By contrast, ARM was developed on a shoestring budget. The goal was modest: low power and average performance.

The goal was, simply, a half-decent processor architecture that could supplement and eventually replace the 6502 in Acorn's range of desktop computers. They didn't think anything on the market at the time was suitable.

They read about the Berkeley RISC project and figured if a bunch of students could put together a processor architecture, they should be able to do a good job fairly easily.

That the processor architecture wound up offering sufficiently good performance/watt as to become a roaring success in t

Well, almost. This was true for the ARM1, but that chip never made it to market. The ARM2 was developed with some additional funding from Apple, with the aim of powering the Newton. Power efficiency was definitely a goal for this processor, and for all subsequent ARM chips.

ARM -- another British invention -- has established a small beachhead in the notebook market

Not quite accurate. While Acorn is indeed a British company, the current batch of ARM (Acorn RISC Machine) processors is actually the result of a collaboration between Acorn, Apple Inc. and VLSI Technology. I guess you could say it's a multi-national invention.

I hope ARM beats x86 merely because x86 is an ancient technology with a pile of limitations preventing the industry from moving forward as fast as it otherwise might. Previous attempts to move away from x86 failed due to the absence of software to run on the new machines. It's all fine and dandy if Microsoft writes NT for the DEC Alpha and Itanium, but if there are no apps, it's pointless.

Actually, there is a way for this to work. Microsoft ports Windows to ARM. Most of the time the processor is in kernel mode, so that makes a difference. Now, running user-mode code through an emulator, which is basically a big switch statement, will not deliver a decent performance level. Microsoft could port their Office applications to ARM.

ARM actually has quite a lot of experience running non-native instruction sets - Jazelle is a mode where the ARM runs 80% of Java bytecode natively. Basically there is an ext

I would love to have one of these in a "smartbook". Even though it won't run x86 binaries (I use linux anyway) it would be useful enough to let me leave my big arse laptop at home. With hours of battery life I wouldn't need to take a power supply with me.

So far though the only ARM smartbooks currently available have very limited RAM and disk space. I will have to wait and see what comes out in the next few months.

Found more info on it [liliputing.com]. Looks like it uses a modded version of Mandriva. The USB flash as a hard drive replacement is interesting. Only problem is that you will have to buy the special G-key USB flash drives to have them fit nicely in the slot.

"Exponentially" means according to a function in which a constant is raised to a power involving the variable. It is not a synonym for "many times," and it cannot apply to something which is, even instantaneously, a constant, since it can only refer to a function. If you mean that the number of MIPS/Linux applications increases linearly while that of x86 applications is increasing exponentially, you might have a point - except that, at any moment in time without more informat

Well, 2X the price would be one good reason... Once you upgrade to the larger battery and the solid-state HDD, and include shipping and taxes, you're paying almost 2X the price, which is most certainly NOT "almost the same price."

which runs every x86 app in existence through Windows, Ubuntu Preinstalled or Hackintosh?

If you want Windows, go for the Dell. If you want Linux, you'll barely even notice you're on a different architecture... All the same apps will work.

The Loongson CPU is quite nice, but the 2F is closer to Atom in terms of power usage than an ARM chip (and a bit higher than even Atom). Note the 4 hour battery life, which is pretty poor for a machine in this class.

The Cortex-A5 is a slight improvement over the MPCore/Arm11/Arm9. That's nice for those who need it, but it's miles away from the speed of a Cortex-A9, which is really what's going to be needed to battle Atom.

And since the A9 was announced by ARM quite some time ago, this posting should have been written then, not now.

In reality, it's not clear which niche the A5 is going to occupy. It's probably going to be useful in lower end smartphones only, since current higher end models are already using the faster A8.

As a developer of products based on ARM9 and ARM11 SoCs, the A5 is targeted squarely at me. I'm not sure why it's of any interest to Slashdot, but it does appear to be a cheaper ARM11 (to the point of making the ARM9 obsolete) with some of the features of the A8. While smartphones are all sexy and exciting, the staple for cell phone manufacturers is the simple, ordinary phone. If they can cram more features into the same cheap phone, it usually means they can sell more of them. Think of it as competing in the free-phone market, where styling, brand, and features are the only way to differentiate yourself, rather than price. The customer is just going to pick one of the 1-4 plan-bundled phones.

Not sure I'd agree with a cheaper ARM11... more like a cheaper Cortex-A8!

Absolutely agreed that A5 targets ARM9 and ARM11 users though. ARM makes that clear. All other things being equal, I'd want a Cortex-A5 instead of any of those. ARM9 is trusty but limited at the high end. ARM11 is kind of awkward; never quite took over from ARM9, and given Cortex I doubt it'll ever catch on all that much more. ThumbEE (on Cortex-A) is way better than Jazelle (on ARM9/ARM11); it works for any JIT-oriented runtim

The Cortex-A8 is out now on the 65nm process, as are all the other low-power device CPUs except Atom. Atom is currently on 45nm to get in the same ballpark as the others, but its power usage is still pretty high. Cortex-A8 on 45nm should be in the pipeline soon, and along with it, Cortex-A9. Those are going to shake the Atom up on price/watt and performance/watt. This is why Intel is moving Atom to 32nm ASAP, but that's very expensive for them, because they have to price the Atom low while using very expensive 32nm process space which they normally reserve for high-profit desktop/server CPUs. So in 2011, along comes Cortex-A5 on 40nm, and Intel would have to start looking at 2?nm processes to keep competing. I believe the ARM dude talks about this somewhat.

Size is a big deal, and right now Cortex-A8 on 65nm is rather large for smartphones. They pack some decent power for netbooks, so I'm not sure what the delay is on that front. Cortex-A9 on netbooks would be very nice, but I think they are just sampling now, so it won't happen till next year (2010).

ARM is a thorn in both Microsoft's and Intel's sides, and there is probably a massive amount of pressure on OEMs and manufacturers to stay away from it, at least on the netbook side. Remember, the head of the Thai Manufacturers Association said they fear Microsoft when talking about Linux on netbooks. ARM is an enabler for Linux, so it too is a threat to Microsoft. But I sure hope the market gets to make the choice somehow, some way.

Did I say that ARM Inc. made the chips somewhere? I was speaking more to the fact that ARM Inc.'s designs are doing well on larger die processes; there's room for even better performance and power sipping, along with Intel being forced to use die shrinkage just to play in the game. After reading the story on the A5, it sounds like the design documents also relate to what process size is used. The A5 was said to be designed for a 40nm process. So while the implementors may have a choice, they might

ARM does not make chips, they design them. The process technology is up to the licensees. Some are using 45nm now, and have been sampling 32nm for a few months with plans to ramp up production in early 2010.

I was talking more about the "ARM" platform than the company, so yes, if people don't know, ARM Inc. doesn't build the chips, but only sells the designs to companies who produce chips from them. That said, I think ARM Inc.'s design documents somewhat tie a core to a process size, or that's what I got from the Cortex-A5 article. As far as ARM-based chips on 45nm now go, which ones, and who's using them? I thought TI was still on 65nm, and I only read that Samsung was eventually to release a 45nm Cortex-

Looks like the Cortex-A5 has 50% more performance while using 1/3rd the power of the current generation ARM11 found in the iPhone. As a game developer this makes me hopeful that we'll see cellphones as a gaming platform without sacrificing useful battery life.

Most figures you find in TFA are in terms of DMIPS, which is an awful metric for general CPU performance. Imagine how easy it is to optimize a loop of 100 instructions which is 100% branch-predicted and 100% cache-hit in L1 D/I. This does not translate at all to web browsing performance, which is thrashing (at least) your L2.

In terms of microarchitecture, we are looking at something similar to the ARM11 on newer processes. TFA talks about: +80% DMIPS co

The Cortex-A5 has a more advanced L2 memory system with multiple outstanding transactions. This makes a huge difference for many workloads compared to the ARM11 cores. Thus, for workloads not contained entirely within the L1 memories the Cortex A5 should offer much better performance.

The A5 is, from a marketing standpoint, a cut down A8. It supports all of the new instruction set extensions introduced with the A8, and is intended to be binary-compatible, but is a lot slower. It is also a lot cheaper. A decent A8 SoC costs around $40, but you can expect A5-based cores to sell for well under $20.

From a technical standpoint, it's quite a different design. The A8 is an in-order superscalar design, with a 13-stage pipeline (and a 10-stage SIMD pipeline). The A5 is an in-order single-issue design.

If it's not superscalar, why does it need a branch predictor? It only needs to know when the first instruction fails a cache hit, so that any results can be held.

Uh, what? You need a branch predictor because it's pipelined. It has an 8-stage pipeline, which means that it doesn't know the result of an instruction until eight cycles after it was issued. If you come to a conditional branch, you need to decide whether to take it or not. For example, if you have some C code saying something like 'if (a == 12)' then you can't decide whether to jump to the else block until you've computed the value of a, which will be 8 cycles in the future. Without a branch predictor, you just stall for 8 cycles and do nothing. Given that compiled code averages about one branch every 7 instructions, that means that you would be spending most of your time doing nothing.

The branch predictor makes a guess about which branch to follow, i.e. whether to continue to the body of the if statement or jump to the else block. It then starts executing whichever branch it guesses. If it guesses correctly, the pipeline stays full. If it guesses incorrectly, the pipeline is flushed and none of the results of the instructions after the mispredicted branch are committed. The processor resets itself to the branch and continues down the right track.

The branch predictor in the A5 gets about a 95% hit rate, so on average you have to flush the pipeline every 20 branches, which isn't too bad in terms of overhead. Superscalar makes no difference to the need for branch predictors. A superscalar chip is one that can issue more than one instruction per cycle. That means that independent instructions can be run side by side. This is quite nice on ARM chips, where a lot of instructions are predicated, as you can run both versions in parallel and only commit the one that was meant to be taken, but it's completely independent of the branch predictor.

It doesn't sound like it is necessarily slower, either, since you can get the same functions as the A8.

Nonsense. By that logic Atom is as fast as a Core 2 because you have the same instruction set on both. The A5 and A8/9, due to massive implementation differences, will execute different numbers of instructions per clock and not run at the same clock speed. The A5 will execute far fewer and runs at a lower frequency.

The A5 and A8/9, due to massive implementation differences, will execute different numbers of instructions per clock and not run at the same clock speed. The A5 will execute far fewer and runs at a lower frequency.

For now. But if they do implement it in 40nm they might get the clocks way up to compensate for the inability to retire as many instructions per cycle.

Not really. People are already shipping A8 cores on 45nm and ARM has deals with IBM and Global Foundries for 32nm and 28nm processes. But it's irrelevant, because you won't run an A5 at a high clock frequency if you need speed, you'll use an A8 or A9, because it will consume less power for the same throughput.

Are opcodes still hardwired in ARM, or are they using microcode now? I know a little ARM assembly from hacking my ARM7TDMI (iPod mini), and found that ARM was a really interesting and weird (coming from MASM on IA-32) architecture, and quite a bit easier to use. But I remember seeing product documentation claiming that hardwired instructions were one of the reasons they were able to keep their transistor counts (and thus price) down.

I honestly don't know. It wouldn't surprise me, given the attention to detail that goes into an ARM core design. It was certainly true of the ARM2, but I can't find anything definitive one way or the other. The StrongARM, I believe, had microcode, but that was designed by Digital, not by ARM (it was then acquired by Intel, who managed to turn it from the highest-performance ARM variant into the lowest in a couple of years).

Modern ARM cores have a series of pluggable instruction decoders, which helps keep the

So this is why ARM and Global Foundries recently made a deal [hothardware.com]. ARM's Cortex-A5 is going to be built on a 40nm process, and Global Foundries already has that equipment; with AMD working hard to advance to the next node, that frees up a lot of manufacturing capacity for ARM to use. Officially the deal was for Cortex-A9 at 28nm, but what's to stop other stuff from being done in the shadow of it?

Probably not. The A5 is designed to be cheap, and you don't produce your cheapest chips at the most expensive process technology you have. ARM's marketing stuff currently suggests producing it on a 40nm technology.

Remember, ARM doesn't make chips. The deal with Global Foundries was to allow ARM to sell designs and fab space in the same bundle (they do this with IBM and a few other chip manufacturers too), so when you want to make a custom SoC you go to ARM and say 'I want to make 10,000 custom chips b

It's the WiFi/WWAN chips and the LCD screen which suck up the power, not the CPU. ARM is cool and all (pun intended), but if you make an ARM-based Dell Mini 9, you're not going to end up with uber battery life when you're on WiFi and running the screen bright.

The main reason the CPU does not suck power is that most, if not all, mobile phones use ARM CPU cores. Imagine a mobile phone with an Atom, shudder... You would gain some speed, but your mobile phone would need fans :-(

I know what I'm about to say may not happen, but it may make people consider moving to those mobile platforms: while you may be right about power consumption, the fact that they could perform better, and even add more cores or better video hardware at the same power consumption as current devices, makes me hopeful. I'd go for something faster or more powerful than my current MSI Wind if it consumes a similar amount of battery and lets me run several programs at once, or faster.

ARM talked about the Cortex A9 (the one I'd actually like to have in a netbook) over two years ago [cnet.com]. There is still nothing you can get that actually has one in it.
Yay something to replace the ARM11. Hope it actually gets used.

Define 'you'. ARM began selling Cortex A9 licenses a while ago, but ARM does not produce chips. TI has been shipping OMAP4 SoCs based on the A9 to high-volume OEMs for a little while, as have a couple of other ARM licensees. They should be appearing in consumer products in 2010. As, in fact, it said in the article you linked to.

Before the A series, ARM hadn't really designed any new processors since Acorn Computers died in 2000/2001. The only development push ARM had was when RISC OS went to other manufacturers such as Castle. Now ARM needs to design new processors, as the time has come when more powerful CPUs are needed in mobile devices.

(And, Acorn as a personal computer manufacturer died in 1998. They were using the DEC StrongARM, which predates the ARM9 and ARM10 - the StrongARM was used in place of the ARM8 that was still under development, and the ARM9 borrowed ideas from the StrongARM.)

This is what happens when you link to articles written by idiots instead of people who know what they are talking about. The article on Ars Technica was a lot better. The A9 is out-of-order, the A5 is in-order. The A9 is superscalar, the A5 is single-issue. They both have the same pipeline length (which surprised me; the A8 had a 10-stage pipeline, but apparently both the A5 and A9 have 8-stage ones). It's therefore possible that the A5 is a massively cut-down A9, with a single pipeline and a simpler i

What they mean is that the instruction set is compatible. So you can run the same binaries on both (although they would probably run faster if you recompiled them).

ARM has several different instruction set versions and optional extensions. You cannot run binaries interchangeably in a simple fashion. This is arguably true as well for x86's SSE and its ilk, but to a much smaller degree. Why do you think cellphone vendors use Java ME even if, more often than not, they use ARM processors?

We really have to start looking more carefully at posts like this, which clearly contain entire paragraphs of unexamined assertions by company PR drones that may or may not be true. Bottom line: Kill this shit unless a trustworthy, honest reviewer with a decent track record says it. If that isn't happening, quit posting it here, where we have more important stuff to spend time on.

By the way, that "more important stuff" includes pulling our dicks and/or replaying World Championship Monopoly games move by move.

Ok,
1) You don't have the source to recompile. It's not like you just have one repository to rebuild; it's lots of different companies that must work together, and they are only going to do what they see as profitable. So only a selection, defined by what the owners see as profitable, will be ported.
2) The first port of a piece of software is the hardest, and most Windows software has never been ported. Much of Windows software is written against just one implementation of the API, so problems stay hidden. You'll find old W

Except netbooks didn't take off until Windows ran on them. Then, you got a real ultraportable that did everything your desktop did, for $300-400.

It's true that MS can't effectively subvert the Linux smartbook, but the average person would have to buy a Linux smartbook in spite of Linux. (We won't talk about WinCE smartbooks, other than to say again that MS can't effectively subvert the Linux smartbook.)

Basically, it's a really, really long battle to get smartbooks adopted, simply because Linux isn't Windows

Heh... They kind of dropped all but the x86 versions because the backwards-compatibility features of Windows kind of got in the way of selling the other architectures. There was a big push for Alpha, as it WAS vastly better than x86, back when NT 3.1 was "king". It didn't go well then, because you had to run pretty much most of the applications in emulation, negating most of the advantage the CPU had over x86 machines, as it would run that stuff slightly slower than the comparable x86 machines of its da

Plus, it could even take advantage of the enormous number of open source programs that could be compiled for ARM Windows before commercial titles get ported.

Most open source desktop apps that I've seen either are ported to GNU/Linux (e.g. Firefox and OpenOffice.org) or came from the GNU side of the fence in the first place (e.g. GIMP and Inkscape). So Windows NT for ARM wouldn't have a huge advantage over Ubuntu in this case. It would probably be more productive to consider a compatibility layer from Windows CE to Windows NT, much like the Win16 to Win32 and Win32 to Win64 layers that Microsoft has already implemented in Windows NT, so that at least a user's co

Microsoft could really change things around if they decided to port Win7 to ARM, instead of offering only Windows CE.

But considering monopolies, I wouldn't expect that any time soon.

People generally use Windows on PCs because they have x86 Windows software they need to run.

How many people have a stack of ARM software to run on ARM Windows? If you're going to need new software anyway, why would anyone in their right mind pick Windows to run it on?

Because 6 months before you can even buy "Windows 8 - ARM Edition", Microsoft will have released a Visual Studio patch that enables "ARM" as a target alongside the existing x86/x64/Itanium platforms. Both .NET and Java will have runtimes ported as well. Converting 32-bit code from one CPU to another is much easier than going from 32-bit to 64-bit, so it wouldn't take very long for vendors to update their software for it. Also, Microsoft strongarms ISVs into compatibility. For example, it's often hard (or h

I've said this before. Aside from games, very little legacy software is CPU-bound. A modern emulator can get somewhere between 50-80% of the host native speed on emulated software, and not all of the code that is running will be emulated. Take a look at a typical Windows application. Most spend at least 50% of their CPU time in system library code. A half-decent emulator will just pass these calls to the native versions of the libraries, so for half of the CPU time you are running native code. A lot of recent Windows applications use some .NET code. This will be JIT compiled to ARM, so it's also native. The remaining code will be emulated, but the number of programs for which this will be too slow is very small.

Oh, and most people do not have a stack of x86 Windows software. They have one or two Windows programs that they depend on (or, at least, would not abandon without a lot of persuasion). You can bet that an ARM version of Windows would be accompanied by an ARM version of Office, and if MS really wanted to push it then they'd give a free download of the ARM binaries to people who owned the x86 version.

In terms of C programming environment, x86 and ARM are very similar. C does a terrible job at abstracting the differences between SPARC64 and x86 (for example), but it does a lot better at abstracting the differences between ARM and x86. Most software, unless it uses inline assembly or SSE / MMX intrinsics, is a straight recompile. The SSE and MMX intrinsics can be implemented in terms of NEON or slower scalar operations, so the code will compile, even if it doesn't get the same performance.

If you make packed structures on x86, they will require unaligned loads and stores, which are slow (so don't do that). If you do it on a new ARM chip, you get the same. If you do it on a slightly older ARM chip, you get a trap to the OS which fixes up the load. If you do it in x86 code emulated on ARM, then the emulator will turn it into a load-shift-mask sequence (and since ARM instructions get a free shift, this is actually a very quick sequence).

Making a C program 64-bit safe, if it was not designed to be portable originally, is a lot of effort. Porting a C (or C-family) program from x86 to ARM is generally a straight recompile.

But, really, a port of Autocad is irrelevant. If you're running Autocad, you don't want the CPU with the best power consumption or the best performance per Watt, you want the CPU with the best performance. And, much as I like the ARM architecture, that's not the market it's (currently) in.

Making a C program 64-bit safe, if it was not designed to be portable originally, is a lot of effort. Porting a C (or C-family) program from x86 to ARM is generally a straight recompile.

Plus the price of a hostile takeover of the non-free program's copyright owner, which otherwise declines to do this recompile in the interest of maintaining the market segmentation between the smartphone editions (Windows Mobile, iPhone, etc.) and the desktop edition of a program.

But, really, a port of Autocad is irrelevant.

AutoCAD was used as an example. There are plenty of other non-free programs for Windows that won't be recompiled on ARM.

It would be best for Microsoft if ARM on the laptop/desktop was a complete flop. Sure, if what others say is true about the portability of Windows internals, Microsoft could release a version of Windows 7 for ARM. But really, what would be the point?

The biggest strength of Windows is running Win32 apps, and they are all compiled for Win/x86. Microsoft would have to provide development tools that encourage developers to make ARM binaries alongside x86 binaries to even have a chance of making it happen. Look at the average computer user's software catalogue: you will find many apps (and games) that were bought long ago and would cost money to upgrade to a potential ARM port, if the companies that made them are even still in business. Those programs are never going to be ported to Win/ARM. Then there are all the drivers for last year's peripheral hardware (assuming the laptop's hardware is supported at all) that won't work.

I don't believe they can do what Apple did, either. Apple was able to move from PPC to x86 because they control the hardware and moved their whole product line over (killing the PPC market). Any developers who wanted to stay in business had to port to x86. MS would be introducing a side product with a very small fraction of the bigger x86 customer base.

In the end all that Win/ARM has left is the few open source apps that choose to build an installer for it and the familiarity of the Windows desktop environment.

It would be in their interest to do everything in their power to make sure this doesn't ever get off the ground. We will have to wait and see what their next move will be.

Apple did not kill the PPC market. IBM did, at least for the desktop: one day they decided to give up on PPC desktop processors without telling Apple. Apple did not have a choice; there were no new desktop and notebook processors in the pipeline, while IBM was busily working on their high-end server processors and designing console processors for Sony and Microsoft based on their old cores.

If that was the case Apple could have simply bought a license to one of those console designs. The Xbox 360's Xenon triple core CPU, for example, is pretty decent. They could also have commissioned IBM to design processors for them. They had enough volume to do it. They did not because using Intel processors was more cost effective.

Apple killed the PPC market when they sank the PPC Mac clone market (e.g. Power Computing), forcing companies like Motorola and even Be (which used hardware based on a PPC CHRP

As if the Xbox processor would have made sense; the Xbox processor is basically three G4 cores with some SIMD units attached on top, nothing fancy and not even that fast compared to AMD's and Intel's offerings. Why should Apple pay for the next processor generation if they can get it mostly for free on Intel's side? The PowerPC market was also killed by IBM no longer really pushing the desktop. After the G4 and G5 they did not have any new designs in the pipeline, and even their own workstation offerings are

Why would IBM design new desktop processors when they had no desktop OS of their own, and the only desktop OS of interest for PPC was made by Apple, who were no longer interested in PPC? I guess it is easier to blame IBM and Motorola than to look at your own issues. I find it interesting that you give Apple a blank check for taking their own financial interests to heart, while expecting IBM to survive all these years by making products for third parties with no viable market in sight.

Chicken-and-egg problem. I personally think Apple alone simply was not enough to keep the PowerPC afloat; let's face it, Wintel killed it. IBM is slowly moving away from the entire hardware business; my personal guess is the next division to be axed and sold off will be the processor division. Outside of the server space, absolutely no one was interested in the PowerPC anyway. It would have taken more than just Apple to pull off the PowerPC as a new desktop processor standard, if there was a chanc

MS could invest in some kind of dynamic binary translation from x86 to ARM that could work for a good part of the software available out there.

And once you clock up the ARM CPU to the point where the dynamic binary translation from x86 to Thumb doesn't result in unacceptable slowdown, your CPU might already be consuming as much power as an Atom CPU. So what does that buy you?
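To make the dynamic-binary-translation idea the two comments above are debating concrete, here is a toy sketch in Python. The "x86-like" and "ARM-like" instruction sets are entirely made up for illustration; a real translator (QEMU-style) works on actual machine code, but the basic shape is the same: translate a basic block once, cache the result, and reuse the cached translation on every later execution — which is exactly why the per-block translation cost matters less than the quality of the translated code.

```python
def translate_block(block):
    """Translate one basic block of toy 'x86' text into toy 'ARM' text."""
    out = []
    for insn in block:
        op, *args = insn.replace(",", "").split()
        if op == "MOV":    # register move maps 1:1
            out.append(f"mov {args[0]}, {args[1]}")
        elif op == "ADD":  # two-operand x86 ADD -> three-operand ARM add
            out.append(f"add {args[0]}, {args[0]}, {args[1]}")
        elif op == "INC":  # x86 INC has no single-instruction ARM twin
            out.append(f"add {args[0]}, {args[0]}, #1")
        else:
            raise ValueError(f"unhandled opcode: {op}")
    return out

class Translator:
    """Caches translated blocks so each block is only translated once."""
    def __init__(self):
        self.cache = {}
        self.translations = 0  # how many blocks were actually translated

    def run_block(self, addr, block):
        if addr not in self.cache:
            self.cache[addr] = translate_block(block)
            self.translations += 1
        return self.cache[addr]
```

For example, `Translator().run_block(0x1000, ["MOV r0, r1", "ADD r0, r2", "INC r0"])` yields `["mov r0, r1", "add r0, r0, r2", "add r0, r0, #1"]`, and a second call with the same address hits the cache instead of retranslating.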

Applications? If they just wanted an OS, any OS, they might as well use Linux. Having Windows with no decent applications to speak of provides little help. DEC and SGI had to figure that out the hard way.

An overclocked ARM running an x86 emulator has applications. Atom also has applications. So what's the point in running an x86 emulator on an overclocked ARM instead of just going with Atom in the first place?

Apple solved it by discontinuing their PPC line, leaving developers little choice but to follow. MS doesn't have that option. No sane developer will build and test an ARM binary without a market, and there is lots of x86-specific code (optimisations, etc.) that can't be ported without significant investment.

An x86->ARM emulation layer could backfire on Windows, though. People would get the impression that ARM processors are slow and spread the word, eventually killing ARM on the laptop. Killing ARM with a hal

>> Microsoft can really change things around if they decided to port Win7 to ARM

Heard it through the grapevine that this is EXACTLY what they're doing, albeit not in the context you mentioned. A subset of the full-blown Windows kernel is being ported to ARM (à la iPhone's Mach) as the foundation for their "next" next-gen mobile OS.

Unfortunately for Intel (and happily for everyone else) the x86 architecture is going to start haunting them. The bit that just figures out how big the next instruction is on an x86 CPU is as large as an entire ARM core. As designs go increasingly multicore and low-power, this will be a ball and chain for them; already they are having to use considerably more expensive processes to make the Atom compete with the Cortex-A9.

Now that is silly. The 8086 had around 29 thousand transistors; the 80386 had some 275 thousand. The StrongARM SA-110 had 2.5 million transistors, more than an 80486, and that was years ago. x86 decoding is hardly the issue people think it is, not at today's transistor budgets. Intel has surprised a lot of people with Atom, and they should be able to shrink it further.
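The transistor counts quoted above make for a quick back-of-the-envelope check of the "decode overhead" argument. As a sketch, assume (generously, and purely as a hypothetical round figure) that x86 decode logic costs about as much as a whole 8086, roughly 29k transistors; its share of the total budget then shrinks fast as budgets grow:

```python
# How much of each chip's transistor budget would x86 decode logic eat
# if it cost as much as a whole 8086 (~29k transistors)? The 29k decode
# cost is a deliberately generous assumption for illustration; the
# per-chip counts are the ones quoted in the comment above.
DECODE_COST = 29_000

budgets = {
    "8086": 29_000,
    "80386": 275_000,
    "StrongARM SA-110": 2_500_000,
}

shares = {chip: DECODE_COST / n for chip, n in budgets.items()}
for chip, share in shares.items():
    print(f"{chip}: decode would be {share:.1%} of the budget")
```

Under this assumption the decode share drops from 100% of an 8086-sized budget to about 10% of a 386-sized one and roughly 1% of an SA-110-sized one, which is the commenter's point: at modern transistor counts, decode is noise.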

1) The A5 is not meant to take on Atom. The A9 is.
2) The A5 is not architecturally identical to the A9. The A9 is an out-of-order, multi-issue core; the A5 is an in-order, single-issue core. The only thing they share is the Cortex-A series ISA.

What the A5 is, is a CPU that completely obliterates the ARM11-derived cores used in everything from NVIDIA Tegra to the Nintendo DS. It's an update of the ISA and a more capable core with better thermals. That's it. Whereas every low-end smartphone now has the same damn Qualcomm ARM11-based core, in a year they'll all have the A5.

If you look at the ARM press wibble, you'll see that they make a distinction between "smartphones", which are things like the iPhone, Palm Pre, N900, and so on, and "feature phones", which are what everyone else thinks of as smartphones (smaller screens, but capable of running user-installed apps, come with a web browser, may support WiFi + SIP, and so on). The A5 is aimed at the feature-phone and "dumb phone" markets; the A8 and A9 are aimed higher.