Ok guys, you probably know where this is going. Although a lot of us gerbils are probably half-expecting an announcement from AMD about a new big x86 core in the works to succeed the ill-fated Bulldozer architecture, let's face it: the chances of that happening are about as good as you or me winning the lottery. And with Intel left unchallenged in the world's most prevalent ISA for serious productivity computing, a dark age of computing could be on the horizon. Pricier chips? Highly possible. Absence of choice? No doubt. What now?

ARM may have a bigger installed base out there but honestly, what can you do with an ARM-powered device? Play Candy Crush? Make a call using Viber? Email, perhaps? Good. Those are cool. But what if you wanna do some serious, hard-core gaming on an ARM device? Or transcode video? Or perhaps do some serious photo-editing? Can existing ARM computers do those without giving you a slideshow or taking forever? Is there an ARM computer out there that can run games that are as compute-intensive as, say, Skyrim, Titanfall, or the like? And no, this isn't about microservers (which haven't even begun to appear in any considerable volume). Tying a thousand cores together is one thing, but you know you can only do so much with that.

So when it comes to the desktop, if the industry wants to stop Intel from hogging the entire market all to itself, it needs to band together and promote an alternative hardware+software ecosystem. It will be painful, oh yes, but going forward I think it's necessary.

It's kinda like consoles. Before the PS2, backwards compatibility between console generations was practically unheard of (no, don't cite the Sega Master System and Sega Genesis as examples), or at least wasn't common. If you wanna play your old NES games, you try to keep your NES console alive for as long as you can, because your SNES as sure as heck won't play them. So with the PC, perhaps if some industry bigwigs such as Samsung and Nvidia could come up with big ARM cores, and someone like Microsoft renewed its efforts on an ARM-supporting OS, the industry could slowly migrate to those while we keep our x86 machines alive for as long as possible to run our existing apps.

I say Microsoft because it's hard enough to pull the industry in one direction; it's almost impossible when a bunch of groups try their hand and fragment the ecosystem even more, the way Linux distros are these days. And of course, having a coherent design and development team under one leadership would probably make for a better OS than programmers scattered all over the planet talking to each other via YM and meeting once or thrice a year. I'm digressing a little bit, but sorry guys, I know a lot of you swear by Linux; in my view, if Linux can't even get its act together on an already established hardware ecosystem (I'm referring to desktops), imagine the mess it would create on an all-new hardware platform that is itself still trying to get its pieces together and mature.

As time goes on, more and more apps will come out for these ARM desktops, and the CPUs themselves will become more and more powerful as design teams step up their game. ARM itself can put out bigger cores that put energy efficiency in the back seat for once and focus on performance. Eventually, assuming ARM doesn't stab the whole industry in the back, you could buy motherboards that support a common CPU socket. Restored competition, restored freedom of choice. Of course, we're not limited to ARM. Other ISAs would do just fine as well, and I'd be happy to go with MIPS, PowerPC, even SPARC, but ARM seems to have the best chances since many companies already have experience building ARM devices and many devs have already written ARM apps. As for cost, I've read somewhere that it takes roughly $300 million and 5-7 years to develop a big x86 core, and about $30 million and a few months to do a small ARM core. $300 million. That's peanuts for Samsung and Nvidia. Heck, even AMD can probably borrow some more oil money to do it.

Can you guys see this happening? Would you like to see this happen? I don't mind plugging an ARM or SPARC CPU on an Asus motherboard as long as it means more freedom of choice and more market competition.

As you get older, you don't lose your friends.. you just find out who the real ones are.

I don't think ARM *wants* to compete with x86 at the high end. Beefing up the ARM design to compete in that space would be a whole lot of expense to compete for what would likely be a small slice of a (relatively) stagnant market. The x86-centric nature of the desktop market is also a formidable obstacle; unless Linux gains significant traction on the desktop soon (yeah, right...) people aren't going to be able to run the same apps on their ARM systems that they run on their x86 systems.

Desktop ARM would have the same problem that desktop Linux does, lack of compatibility with legacy stuff. I do anticipate that someone will eventually do ARM+Android in a desktop form factor, but neither is well suited to the "big work" use cases where desktops will hold out against tablets, consoles and other assorted smart devices.

Yes, and that's why I said "unless Linux gains significant traction on the desktop". A large swath of the Linux ecosystem has already been ported to ARM. You can actually run OpenOffice on a Raspberry Pi; it doesn't run particularly *well*, but I find the fact that it runs at all pretty amazing!

Oh, don't get me wrong; I think we'll see them on the desktop soon enough. The trend of moving more stuff into "The Cloud" makes x86 compatibility less of an issue for some use cases, and those are the ones we'll see moving to ARM.

Anand wrote:With six decoders and nine ports to execution units, Cyclone is big. As I mentioned before, it's bigger than anything else that goes in a phone. Apple didn't build a Krait/Silvermont competitor, it built something much closer to Intel's big cores. At the launch of the iPhone 5s, Apple referred to the A7 as being "desktop class" - it turns out that wasn't an exaggeration.

Yeah, I heard about the Apple A7. Impressive. Who better to develop this core than Apple? See, I really think it's just a matter of time before we see big ARM cores, and the A7 is just the beginning. Expect more to come.


Anand wrote:With six decoders and nine ports to execution units, Cyclone is big. As I mentioned before, it's bigger than anything else that goes in a phone. Apple didn't build a Krait/Silvermont competitor, it built something much closer to Intel's big cores. At the launch of the iPhone 5s, Apple referred to the A7 as being "desktop class" - it turns out that wasn't an exaggeration.

That's all true, but the overall performance of the A7, while impressive for a mobile core, isn't anywhere near where Haswell is, even if you lower Haswell's clockspeeds to comparable levels. Additionally, those Apple cores are approaching the size of Haswell cores....

Considering it's like Apple's 3rd or 4th chip it's pretty impressive they have closed the gap as much as they have so quickly. I'm going to be really interested in whatever A8 ends up being. From what I can tell, Apple is the only company that seems to understand the possibility of ARM taking over the low-mid range CPU environment that desktops typically use.

There are a couple of issues with this with respect to the desktop market though: the high end of the desktop intersects with the low-end server market (Ivy Bridge-E etc.), and the ARM cores are part of a greater SoC. It is no secret that ARM wants to move into the lucrative server market, but performance is only one aspect here: RAS is another large part of the server equation. ARM needs to define some system-level mechanisms to incorporate high-end RAS features like chipkill memory, memory add/remove, and hot-swapping of processors. These features have no bearing on the desktop, but a big, fast desktop core will have them due to its server heritage.

The other problem is that there is no desktop standard for ARM. So far, ARM cores are used as a component in larger SoCs that include things like a GPU and dedicated IO logic. Much of this level of integration is happening on the desktop already, but x86 chips offer expandability to complement what is not included on-die. While ARM offers PCIe controllers that can be included on-die, I have not seen any SoC go beyond several single-lane links for embedded applications. To further compound the lack of expansion, ARM SoCs are not offered in socketable form, though that could easily be resolved with new packaging. The related issue is that there is no standard ARM socket: ARM SoCs are offered by numerous vendors, and end users could be looking at a different socket from each vendor. It'd be nice to have a standard here before it becomes an issue.

As far as performance goes, we do have an example of what ARM is capable of: Apple's Cyclone core is competitive with Westmere on a per-clock basis. For a mobile package, that is quite impressive. On the desktop, though, the Cyclone core would have to clock in the typical 3 GHz range found on x86 chips to be competitive, and there is no indication it can scale to such clock speeds. Still, if clock speed is the only thing holding ARM back, the potential to be competitive is there.
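
To put rough numbers on that clock-speed gap: if performance is modeled crudely as IPC times clock, and Cyclone really matches Westmere per clock, then the clock deficit is the whole gap. A back-of-the-envelope sketch (the clock figures below are illustrative assumptions, not benchmarks):

```python
# Crude model: performance ~ IPC x clock.
# If per-clock (IPC) performance is roughly equal, the remaining gap
# is purely the clock deficit. Both figures are assumptions.
cyclone_clock_ghz = 1.3   # roughly where the A7 ships in a phone
desktop_clock_ghz = 3.4   # a typical desktop x86 clock

scaling_needed = desktop_clock_ghz / cyclone_clock_ghz
print(f"Cyclone would need ~{scaling_needed:.1f}x its mobile clock "
      f"to match a {desktop_clock_ghz} GHz desktop part")
```

Whether the design can actually hit those clocks without the power budget exploding is exactly the open question.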

One thing that isn't an issue is operating systems. ARM already has a Windows port, Android of course runs on it and desktop Linux already exists. The rest of the software ecosystem needs to migrate but enough is there already to get generic work done. The more niche your work, the more likely you'd run into an application availability issue.

SuperSpy wrote:Considering it's like Apple's 3rd or 4th chip it's pretty impressive they have closed the gap as much as they have so quickly. I'm going to be really interested in whatever A8 ends up being. From what I can tell, Apple is the only company that seems to understand the possibility of ARM taking over the low-mid range CPU environment that desktops typically use.

The thing is, the reason Apple caught up to that level so quickly is because it's simply implementing all the low-hanging fruit. It simply gets harder and harder to extract more out as you get faster.

As for being used in more serious computers, ARM has already been moving that way for a while. The A15, for instance, is aimed at (relatively) low-power server use.

ronch wrote:But what if you wanna do some serious, hard-core gaming on an ARM device? Or transcode video? Or perhaps do some serious photo-editing? Can existing ARM computers do those without giving you a slideshow or taking forever?

As far as hard-core gaming goes, it's not that far out of the realm of possibility; nothing is preventing a non-integrated graphics solution. In fact, Nvidia's ARM developer kits are capable of utilizing their graphics cards. Even the integrated graphics are on par with, or perhaps even a bit better than, some x86 integrated solutions, and you have to admit the current consoles don't exactly have the most powerful CPUs either. Transcoding video is not really a concern either: many ARM chips use a hardware solution for that, and even the anemic Raspberry Pi has an encoder capable of encoding 1080p in realtime. Photo editing may be a bit tricky, but with ARM chips that have the kind of graphics capability Nvidia is coming out with, again, it's within reach.

Well, we already have gaming devices like the Ouya and Shield. Anything Tegra-based is good enough to play games, really. The only issue is form factor; afaik nobody's using these chips to make a PC-like device or laptop. Not to say you couldn't hook up a Tegra device to a monitor and kb/m.

ChronoReverse wrote:The thing is, the reason Apple caught up to that level so quickly is because it's simply implementing all the low-hanging fruit. It simply gets harder and harder to extract more out as you get faster.

They've been getting 2x performance every 18 months pretty consistently, and doing some novel things too, not just implementing others' ideas. They're also at the head of the game in mobile computing, not catching up.
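
Just to spell out what that doubling rate implies, here's the simple compounding math (nothing more than arithmetic on the "2x every 18 months" claim itself):

```python
# "2x every 18 months" expressed as a compound annual rate.
doubling_months = 18
annual_factor = 2 ** (12 / doubling_months)
print(f"annual speedup: ~{annual_factor:.2f}x")   # ~1.59x per year

# Kept up for 6 years (four doubling periods), that compounds to:
total = annual_factor ** 6
print(f"over 6 years: ~{total:.0f}x")             # ~16x
```

That rate is well above what desktop x86 has delivered lately, which is exactly why the question is how long it can be sustained.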

There are very few novel things being implemented in consumer-class CPUs that weren't already invented or detailed in papers back in the '70s, or that aren't just a natural consequence of improving processes and frequencies.

Even Cyclone hasn't quite reached C2D capability. A chip's performance is both IPC and the frequency you can reach, after all. You'll find that when ARM gets to C2D-class performance, they'll slow right down to the relative trickle everyone complains Intel has slowed to.