AMD’s 2014 roadmap: A hazy future for desktop x86 chips as AMD doubles down on ARM

AMD’s updated roadmaps for 2014 show plenty of transitions, but not necessarily in the product lines that interest enthusiasts the most. Piledriver — the eight-core, AM3+ processor based on Bulldozer — will remain AMD’s top-end desktop chip throughout next year, even though Steamroller-based Kaveri is preparing for a January unveil. In the past, AMD has rolled out updates to server, desktop, and APU families in fairly short order, so this decision not to build server or desktop variants on the new CPU is a bit surprising.

All of this would be a moot point if Kaveri had hit its frequency targets. Increasingly, however, that seems not to be the case. At APU13 last week, several Kaveri systems were on display with a base clock of 3.5GHz and an unknown Turbo mode. That’s sharply below Richland, with its 4.1GHz base, 4.4GHz Turbo, and 5GHz overclock — and it raises the question of whether the chip will deliver the performance benefits AMD initially promised. A 3.5 to 3.8GHz clock speed would be a 15-20% reduction from Richland’s Piledriver-based clocks — easily enough to offset the increased single-thread performance.
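
A quick back-of-the-envelope check of that range (a sketch in Python; the worst-case comparison assumes Kaveri’s base clock against Richland’s Turbo, since Kaveri’s Turbo was unannounced):

```python
# Richland's shipping clocks vs. the Kaveri base clock shown at APU13.
richland_base, richland_turbo = 4.1, 4.4   # GHz
kaveri_base = 3.5                          # GHz; Turbo mode was unknown

# Best case: base clock vs. base clock.
best_case = 1 - kaveri_base / richland_base    # ~14.6% reduction
# Worst case: Kaveri's base vs. Richland's Turbo.
worst_case = 1 - kaveri_base / richland_turbo  # ~20.5% reduction

print(f"Clock reduction: {best_case:.1%} to {worst_case:.1%}")
```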

AMD’s server roadmap shows Piledriver continuing as a high-end part through 2014 as well, but the new ARM Cortex-A57 core from AMD takes over the budget/low-power segment by the middle of next year. Interestingly, AMD’s Kabini servers are apparently nothing more than a stopgap measure, and shuffle off the mortal coil to be replaced by ARM variants. This is rather odd, as it implies AMD is going to go after the low-power market strictly with ARM, even though it has an x86 core that fits the segment, as well as its recently acquired SeaMicro subsidiary, which specializes in x86 server hardware.

Kabini and Temash transition to Beema and Mullins next year, with minimal changes. The thing is, the current slide is wrong. AMD already offers Kabini parts in a 10W envelope, as we covered at the chip’s debut. The Temash to Mullins debut, with a 2W envelope, does imply that the follow-up chip will slip into a better space. Overall, however, the best way to think of this update is in line with what we saw between Trinity and Richland, or Brazos and Brazos 2.0 — a modest improvement on some key metrics, but not a fundamental game-changer.

Finally, in embedded, we see the ARM Cortex-A57 cores debut as higher performance options than Kabini, but below Steamroller. This is interesting, for several reasons. First, we know Kabini and Temash are quite competitive with Bay Trail, which is generally as fast as Apple’s A7 in thermally constrained environments. This implies that the A57 hits an even higher performance target, or at least that AMD thinks it has the potential to do so. We also see Hierofalcon hitting 4-8 core deployments with up to 30W of dissipation, which stretches the traditional definition of embedded. Steppe Eagle, the Kabini variant, is aiming at a 5W-25W envelope, but still contains a GPU, while AMD’s first A57 core is CPU-only.

Tahiti retrenches, holds its ground in 2014

In its earnings call last month, AMD announced that it would tape out 20nm GPU designs in the first part of 2014. That surprised us — with 20nm production supposedly starting in the early part of the year, we expected AMD to have designs ready to ship fairly early in the ramp. GPUs are typically early production candidates; AMD and Nvidia have quickly moved to new nodes as they came online. If AMD is taping out designs in Q1 2014, it suggests we won’t see new cards until Q1 2015. While we’ve heard rumors of Nvidia’s Maxwell being pushed back into later in 2014, there’s nothing to suggest it’s actually running a full year behind.

What’s interesting about this slide is that it does suggest that AMD is adjusting its product placement. The R9 290X and R9 290 don’t change, but the Radeon 8800 family, for example, corresponds to the Radeon 7800 series — meaning those GPUs have 1,280 cores. The R9 280X that AMD shows as replacing the Radeon 8800, in contrast, is the full implementation of Tahiti, with a full 2048 cores. These changes ripple down the entire product family.

Keep in mind that we have no idea when the repositioning will happen, so I’m not suggesting you skip a GPU purchase if you’re in the market today. But this slide suggests that AMD is already anticipating changes it will make to remain competitive against Nvidia, should Team Green have new hardware in the pipeline.

An uncertain future for AMD’s “Big Core” business

What we’re seeing here, I think, is the inevitable result of a resource-constrained company attempting to fight a multi-front war. In the past 18 months, AMD ramped two new APUs (Kabini, Kaveri), built SoCs for both the Xbox One and PS4, built a new high-end variant of its 28nm GPU architecture, overhauled the Radeon HD 7000 driver family, and built a new ARM processor core. That’s an enormous amount of work, particularly for a company struggling back towards profitability.

The good news on AMD’s roadmap comes courtesy of Kaveri’s new GCN architecture and HSA implementation, the first ARM core, and a refreshed Temash APU that fits into a 2W envelope. The more troublesome news is the lack of a CPU refresh for server and enthusiast products where it would be useful, combined with troubling rumors of Kaveri’s CPU performance. We’ve heard rumors that AMD temporarily suspended work on all of its Steamroller follow-ups to bring the Xbox One and PS4 to market in time — and that would explain why we don’t see larger Steamrollers, a platform refresh for Socket AM3, or much news on the big core server front.


Steve Smith

” The Temash to Mullins debut, with a 2W envelope, does imply that the follow-up chip will slip into a better space. Overall, however, the best way to think of this update is in line with what we saw between Trinity and Richland, or Brazos and Brazos 2.0 — a modest improvement on some key metrics, but not a fundamental game-changer.”

I’m sure AMD claimed 100% more performance per watt; you can hardly liken it to the ones mentioned above, can you?

Joel Hruska

They haven’t claimed a 100% improvement in performance per watt. They claimed a reduction in power envelope. They might hit that target in any number of ways, including a weaker GPU, leakage improvements, or fewer cores.

The straight-line performance increases are on the order of 20-35% and due to frequency improvements. It’s not a new architecture.

But you’re right about the claim. I hadn’t seen that deck; I was at APU13 when the announcement went out.

Steve Smith

Temash 8W vs Mullins 4.5W: 35% faster on PCMark 8 and 22% on 3DMark 11, while consuming nearly half the power. So more than 100% performance per watt. Whichever way you look at it, that’s a fantastic gain in one generation.

Joel Hruska

I’ll believe it when I see it, and hardware testing is what I do for a living.

Those kinds of gains are difficult to pull off on the same process. It’s much easier to, say, change how you measure TDP. It implies that original Kabini actually came in way over original power budget, if they’re able to clean it up this much in that short a time period.

Jesse

No, it’s pathetic because Haswell is still much faster and uses less power as well. Haswell Crushes both Temash and Mullins RIGHT INTO THE GROUND!!!! Intel Broadwell WILL seal AMD’s fate for GOOD!!!

Steve Smith

JML = Moron!

Jesse

Steve Smith = Super Moron!!!

Joel Hruska

If you’re going to troll you should get your code names right. A 2-5W AMD part would compete against Bay Trail, not Haswell.

Principle

Excuse me, but do not try to play down the fact that you missed this AMAZING point: on the same process node, they have increased the performance per watt by more than 100%. MORE than 100%! So this article is in error; it’s nothing like the change from Trinity to Richland, which was only like 15%.

And Haswell? Haswell will not have a leg to stand on against a Mullins or Beema processor. Haswell will be a has-been by this time. Bay Trail will compete, but actually at lower performance overall than what AMD will be offering at 2-3W.

Aaand you base your assessment on what exactly?? All the benchmarks I have seen show that 1) Intel’s SDP can be exceeded very easily, and 2) AMD’s TDP has always been on the conservative side of things (like Intel’s TDP of old)… So please show me some hard evidence that you did not just pull that out of your hind…

Jesse

Please show me hard evidence that you are not a die hard AMD FANBOI!!!! I wasn’t talking about Intel’s SDP which is pretty good by the way. AMD’s SDP is pure crap and AMD exceeds it without even breaking a sweat.

john

give…me…hard…facts! Thanks!

john

as to the fanboy thing… well I think I just did.

Jesse

No you didn’t you are still A die hard AMD Fanboi which is NOT GOOD!!!!

Principle

Jesse youre the stupidest POS

Jesse

No that’s you Principle the DIE HARD AMD LOVER!!!!

Steve Smith

zzzzzzzzzzzzzz

Andrew

He is just a troll. He trolls the AMD PS4 reviews because they have AMD parts. He’s anti-AMD. Has an Intel rig with a garbage low-end AMD GPU. Go figure.

Andrew

If Intel has such great affordable processors, why do people continue to purchase AMD? Probably because PC chips have become powerful enough that the average user doesn’t care about extreme performance. They care about bang for buck. Compare a 4770K to an 8350 while gaming with the exact same GPU. Do you notice a difference that the human eye can see? Nope. AMD is doing the smart thing and not focusing on enthusiast-level PC parts. They’re focusing on mobile tech and APUs. Which Intel has just recently started to do. Another question for a fanboy: why would Intel include such a crappy GPU built into an enthusiast-level CPU? I actually bought a 4770 and I’m honestly not all that impressed. I was expecting more from it.

Jesse

Actually the difference is HUGE!!!! AMD is stupid for not seeing the writing on the wall!!! They are f**kED six ways to sunday!!!! Um, those SUCK ASS!!!! Intel is far more powerful!!!! They will fix that next year and most people buying those CPU’s use dGPU’s which are fading away!!!!! Only you and the dumb fanbois!!!!

ElMo

Wow…

I don’t know who this Jesse is, but he’s quite the douchebag. So much so that I created an account here just to voice my opinion. Jesse… STFU :)

Jesse

Now if I didn’t listen to one AMD fanboi why would I listen to another douche-bag like you hm?

AA

What the writer wrote about Beema is just bullshit. AMD actually redesigned the CPU core, and the performance will improve by so much that it will match Intel’s Haswell ULV Pentium CPU performance while of course having a much better GPU. On the other hand, I am disappointed that there will be no Steamroller FX to rival the Intel Core i5. The current FX range is starting to look very dated, be it on performance or power consumption.

Joel Hruska

AMD has not redesigned the Kabini microarchitecture for a minor update. At APU13, Lisa Su characterized Beema as a minor update. Furthermore, AMD hasn’t had time to do a full core redesign, and there’s no reason to do one on 28nm.

The next major update to the small-core x86 parts, assuming AMD doesn’t kill them in favor of the ARM part, will presumably be in 2015 or 2016 when AMD transitions to either 20nm or 14XM (the hybrid 20-14 process).

Jesse

Don’t worry, AMD is DOOMED, and any hope of them putting out a decent part along with it. Yep, too bad ARM is the future and AMD is nothing more than a casualty of the new era of computing. M$ will be the next casualty. AMD won’t be around long enough to use the 14XM process.

Principle

I am surprised Jesse even knows how to use a computer.

Jesse

Really why is that?

john

Jeez, what are you, like 12? You certainly sound like one. AMD is nowhere near *doomed*; in fact, all signs point to another of this industry’s big ones’ “doom”. I’ll let you reflect on that (most probably in vain, but anyhow…)

Jesse

OH really from where I am standing AMD is DOOMED!!! Intel is not because they have lots of $$$$. M$ is doomed because no one likes their products anymore.

john

give… me…hard… facts…! Or do you have a reading-understanding deficiency?

Steve Smith

So Intel isn’t doomed because they have lots of $, but MS is because no one likes them? Only Apple, that I can think of, has more cash than Microsoft (MS has more than 3x the cash reserves of Intel, nearly 2x the net income), so you have just stupidly contradicted yourself… Moron.

Jesse

That’s right. M$ is losing cash fast unlike Intel. M$ hasn’t had a successful product That made them any real money. They have been losing $$ with every new product they release because it either costs too much to make or is PURE GARBAGE!!!! Nope Moron you just don’t get it.

NL

Xbox one, Surface tablets, Office, buying Nokia starting to get actual market share in the very competitive smartphone market how is that a failure? Sure they may have a flop here and there but so do all companies!

Jesse

Because NONE OF THEM SELL!!!! Don’t believe the M$ Hype!!!

Principle

They redesigned the CORE; it’s now Puma, not just Jaguar Plus. The architecture of the SoC may be similar, but they redesigned the cores, so I am still scratching my head as to why you cannot admit that 100% increase in performance is a major change. Beema will have the same TDP ratings as Kabini with twice as much performance. Mullins will have something like a 130% increase in performance per watt, but at half the watts, so at 1W it will still have more performance than a Temash running at 2W. The Jaguar cores in Temash were never too power hungry to begin with, maybe 1W max per core, but somehow they managed to cut it in half. The question will be how the GPU performs; some of that improvement must have been from better design layout or metal optimization at the foundry, so we should likely expect leakage to be down across all parts of the SoC. So even though you may be right about the architecture not changing, things have changed.

Joel Hruska

“I am still scratching my head as to why you cannot admit that 100% increase in performance is a major change.”

Because that kind of improvement is effectively unprecedented on a mature process. We know for a fact that the Jaguar CPU is still the same architecture — they aren’t stripping out or adding any deep functionality.

Absent the advantages you gain from a new process, there are only a handful of ways to realize a gain that large.

1). Change how you measure TDP. This is perfectly plausible. If you manage to build a better video decoding block (as Richland did), shift TDP reports to focus more on video decode.

2). Improve a broken part. Nothing I’ve ever heard suggests that Kabini came in over budget on power, but it’s possible. Maybe AMD expected the 2GHz chips to actually be 15W instead of 25W. So you fix whatever was broken, and boom — better chip.

3). Lower frequency and TDP targets. This is also possible, in conjunction with 1&2. If I can build a 15W chip at 100 MIPS or a 5W chip at 60 MIPS, guess what? I’ve doubled performance per watt. But that doesn’t mean my actual performance is *better.* It means my new core is more efficient.
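
The arithmetic in that third scenario is worth spelling out. A minimal sketch using the hypothetical MIPS and wattage figures above (illustrative numbers, not measurements of any real part):

```python
# Hypothetical parts from the example above: a faster, hotter chip vs. a
# slower, cooler one. MIPS and watt figures are illustrative, not measured.
old_perf, old_power = 100, 15   # 100 MIPS at 15W
new_perf, new_power = 60, 5     # 60 MIPS at 5W

old_ppw = old_perf / old_power   # ~6.7 MIPS/W
new_ppw = new_perf / new_power   # 12.0 MIPS/W

ppw_gain = new_ppw / old_ppw     # ~1.8x better performance per watt...
raw_ratio = new_perf / old_perf  # ...but only 60% of the raw performance

print(f"Perf/W improves {ppw_gain:.1f}x while raw performance falls to {raw_ratio:.0%}")
```

The efficiency metric nearly doubles even though the new chip is outright slower, which is exactly why a perf/W headline alone says nothing about absolute performance.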

And finally? Because at APU13, those parts were a footnote. They were a slide flashed on screen at the last minute, literally, the *last* minute of Papermaster’s keynote on the last day of the show. Because if AMD had been planning a major unveil, they would’ve discussed it, either privately or publicly. And they didn’t.

So, at the end of the day, I expect Beema and Mullins to be modest improvements to Kabini and Temash. I expect they will be better than their predecessors. I do not expect they will deliver unparalleled gains on the same 28nm node, unless AMD has ported them to TSMC’s low-power high-k metal gate process — in which case, they’d hit lower performance targets.

Principle

Seeing as the info on Mullins and Beema, with PCMark 8 and 3DMark 11 benchmarks and respective TDP values, has been out for a week, this should not be hard to understand. They show something like a 25% performance increase in a 44% smaller power envelope, which is more than double the performance per watt. PCMark and 3DMark are not some subjective benchmarks.
AMD had to lay the design out again and add in the ARM processor, along with the new tweaks to the cores, to roll them to “Puma”. Whatever they did, they hit higher performance at lower power. Heck, maybe they transitioned to GloFo, but it doesn’t matter to me, just that the performance is up and power consumption down.

Joel Hruska

Definitely no transition to GloFo. They would’ve had to do a ground-up redesign for that.

Let me try to explain the reason I’m dubious of this. It’s not because of the performance figures. Performance figures are fine.

Let’s say I tell you: “In a 2012 representative workload, performance was X.” “In the 2013 workload, performance was 1.25x. But we only drew 65% the power for that workload.”

Huge gains. Very nice. Then I hand you the laptop, and your battery life is only 20% better, while average performance is just 10% better.

You come to me to ask what’s up, and I say: “Aha! Look at our workload mix!”

And it turns out that the workload mix is 50% video, 25% idle, 25% office apps. If that workload mixture reflects your laptop use, hurrah! You see huge gains. But if your workload mix is different, your usage profile is different. If AMD adjusts Jaguar to have faster Turbo modes, but manufacturers don’t use adequate cooling solutions, then you don’t get that performance. If they tweak the core to really optimize video decode, but you measure WiFi browsing, then those battery improvements don’t help you at all.
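
The workload-mix effect is easy to show with numbers. A sketch with entirely made-up power figures (none of these are AMD’s; they only illustrate the mechanism):

```python
# Hypothetical per-activity power draw (watts) for an old and a new chip.
# The new chip improves video decode dramatically but browsing barely at all.
power = {
    "video":    {"old": 4.0, "new": 2.0},
    "idle":     {"old": 1.0, "new": 0.9},
    "office":   {"old": 5.0, "new": 4.8},
    "browsing": {"old": 4.5, "new": 4.4},
}

def avg_power(mix):
    """Average draw for a workload mix {activity: fraction of time}."""
    return {gen: sum(frac * power[act][gen] for act, frac in mix.items())
            for gen in ("old", "new")}

vendor_mix = {"video": 0.50, "idle": 0.25, "office": 0.25}
user_mix = {"browsing": 0.75, "office": 0.25}

v = avg_power(vendor_mix)  # old: 3.5W, new: 2.425W -> ~31% lower draw
u = avg_power(user_mix)    # old: 4.625W, new: 4.5W -> ~3% lower draw

print(f"Vendor mix saves {1 - v['new'] / v['old']:.0%}, "
      f"browsing-heavy mix saves {1 - u['new'] / u['old']:.0%}")
```

Same silicon, same improvements; only the weighting of the test workload changes, and the headline "efficiency gain" swings from roughly 31% to roughly 3%.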

My point here is not “AMD is lying.” My point — and *believe* me, I’ve had this discussion with AMD and Intel both — is that battery and performance per watt metrics are VERY complicated. If Acer doesn’t use a robust cooling solution, for example, your APU performance craters because it never hits the appropriate frequencies.

This sort of thing is definitely an issue for shipping hardware. It’s why I take a wait-and-see attitude.

Principle

Either way, battery life be damned, the total thermal envelope has still lowered, so it can actually be used in thin designs with little cooling. I would not describe 3DMark as anything but very demanding on both the GPU and CPU parts, so if they can really maintain the referenced lower TDP, then they can at least cram that 25% more performance into a smaller space. Something they were incapable of before, so it’s a win for AMD, and the existing Temash APUs were already very competitive with other tablet SoCs, or even the most powerful, at the expense of a little too much heat. I never cared if the WiFi battery life is the same as before.

Joel Hruska

One of the bothersome things about Kabini / Temash is that the number of currently shipping designs is so small. I wish the part had better hardware.

I had a friend get a Kabini-based system from Gateway, supposedly using the same hardware I have in the whitebook AMD sent me. Performance was so bad, she wound up returning the notebook. And I could not lock down the reason why. Even after driver updates, patches, and thermal profiling to ensure it wasn’t overheating, the system experience was *terrible.*

Of course people say “Don’t buy Gateway,” but I went over the system with a fine-toothed comb. As far as I can tell, the A4-5000 in her system was configured identically to the A4-5000 in the whitebook. Even with every bit of OEM stuff disabled, it still just ran terribly.

Principle

And I trust very few of these type of stories……I have used several cheap Brazos and Brazos2.0 systems, from HP, and all ran just fine. I have owned two myself, and my GF uses one daily with no issues for years, with no modifications. Issues like this likely have nothing to do with the CPU, other than maybe a more powerful one could potentially mask another issue. And seeing as AMD has increased the performance at a lower TDP, all signs point to that improving these types of experiences, which is the whole point.

Joel Hruska

*shrug* I’m telling you the factual truth about a system I worked on myself. The white book was excellent. The shipping Gateway couldn’t duplicate that. I was unable to find a cause or fix the issue after installing all updates, hunting for performance-sapping software, comparing the version of Windows on my own whitebook to the Gateway, checking low-level for error codes or other hardware failures, or installing updated codec packages, Chrome instead of IE, Firefox instead of Chrome, etc, etc, etc.

I offered to wipe the OS and start from scratch, she elected to return the system instead.

Joel Hruska

Huh. My comment seems to have been eaten.

I told you the factual truth about a system I worked on personally and recommended deliberately to the buyer. Come to find out the experience on the shipping hardware was much worse than the whitebook I had.

I was unable to find a cause after a great deal of troubleshooting. She elected to return the system rather than have me blow the OS and install from scratch.

I suspect the problem was Gateway, not the APU.

john

I know I’m reading this 9 months after the fact… and yet I can’t help it: you were wrong :) (well, partly at least).

I suspect what happened is 2): they basically screwed something up with Kabini and fixed it with Beema/Mullins. That’s my take. But as the performance numbers show, they are quite in line with AMD’s initial claims. It remains to be seen how well they perform on battery life, but other than that they delivered, especially Mullins.

Jason

7th paragraph.

“The current Radeon 8600 has 384 cores while the R7 260X that replaces it packs twice that number. The Radeon 8600 series has just 384 cores, while the Radeon R7 260X packs twice that number.”

Phobos

Seems they almost decided to ditch the standalone CPU in favor of the APU. I guess it makes sense in a way; they haven’t come out with any new CPU for AM3+ for some time. Now, the question would be when we will see APUs with at least 6 cores and LV3 cache?

Joel Hruska

Was LV3 a typo? Not used to that nomenclature.

But to answer your larger question, I’m not sure we will. It would make more sense to consider adding a graphics cache to the current design, or otherwise improving access to memory bandwidth. Both of those things would require additional die area.

The fundamental challenge APUs face is that they improve graphics performance for a segment that traditionally hasn’t cared much about graphics. This means AMD is pressured to deliver as much HSA acceleration as possible, to allow the GPU core to shine in non-graphics workloads.

The problem there is that building HSA applications takes years. So improvements on the CPU side for single-threaded workloads are extremely important. Even in a situation where HSA was quite popular, the CPU must be powerful enough to keep the accelerator fed and perform its tasks quickly.

If Steamroller improves IPC but can’t clock as high as Piledriver, the focus has to be on getting clock speeds back up to previous levels. AMD needs better single-threaded performance more than it needs more cores.

Ken Luskin

Joel, AMD is DELIBERATE about what they are doing in driving more GPU integration.

You are completely CLUELESS in thinking that people do NOT want more graphic capabilities.

ALL the future NATURAL USER INTERFACES will require significantly more graphic capabilitiies.

You are living in the past!

You live in this Intel CPU world, and have NO VISION of what is going on.

WHY is Intel desperately trying to increase their graphic capabilities?

While telling everyone that the CPU should do everything?

If you could get your head out of Intel’s crotch for awhile, you might be able to understand what AMD is doing.

AMD is sticking with X86 for plug in consumer devices, because developers are comfortable with that architecture.

AMD will be employing ARM for the Cloud and mobile devices, because it is inherently more energy efficient, and it is much easier to iterate.

Joel Hruska

Ken,

I had this conversation with multiple AMD employees last week. One of the difficulties of selling high-end integrated graphics is that average consumers don’t prioritize high-end integrated graphics very much.

Intel is not “desperately” trying to increase integrated graphics capability. Intel is improving their integrated graphics capability by a sizeable margin, each and every launch. The gap between AMD and Intel on this front has narrowed considerably since 2008, when AMD had a decent integrated GPU, and Intel graphics were best described as terrible.

Kaveri is expected to improve AMD’s position by 20-30% over Richland — a very nice boost. But it’s not automatically a boost that will be particularly visible to users who don’t game. The hump AMD is attempting to get over is the fact that integrated GPUs still aren’t particularly great for gaming.

Jesse

AMD Kaveri IS PATHETIC!!!! IT’s a waste of space and Kaveri is more proof of why AMD is DOOMED!!! Kaveri IS A JOKE!!! Integrated graphics is the Future but AMD will not be part of that future. AMD doesn’t sell high-end IGP’s they sell mediocre ones that are PURE CRAP!!!!

Principle

Jesse’s pure crap again, you say nothing but pure crap all the time. Jesse is a mediocre hack.

Jesse

Nope it isn’t crap AMD is crap. The PS4 is PURE Grade A Crap. Principle is a donkey turd!!!!

Principle

You have always been a pathetic person Jesse, you have been trolling AMD articles for what must be years now saying AMD is bankrupt and their stuff is crap. And yet you continue to be proven wrong, again and again. You are a moron Jesse.

Jesse

Wrong. I have not been proven Wrong at any point Sorry Turd Brains. AMD IS FINISHED that’s fact!!!

Principle

What have you ever been right about? You speak nonsense like a third grader. AMD is still here, not bankrupt and will still be here 5 years from now, which completely refutes your nonsense you have been posting on every site for a long time.

Jesse

EVERYTHING!!! AMD IS IN FACT SCREWED!!! Nope that won’t happen Sorry. AMD is destroying itself by the hand of RoRY READ and releasing SHITTY products with even WORSE drivers!!!!

Arafat Zahan Kuasha

Don’t feed the trolls. Ignore them and they’ll have no food. Simple.

Ken Luskin

Joel, HARD core gamers want discrete graphics. There are hundreds of millions of people playing lower games on mobile devices.

People LOVE to play video games.

The more graphic power the better.

If AMD can give gamers a good experience without a discrete card, which is the case with BOTH the NEW CONSOLES, then that is a superior solution.

AMD’s HSA empowered Kaveri continues to increase the graphic power of their APUs at a REASONABLE price.

Why did Intel create an Iris pro at $650…

When AMD’s Kaveri will outperform it, and only cost $150?

AMD is still WAY, WAY ahead of Intel in both Graphics, and more importantly INTEGRATED GRAPHICS.

The HSA innovations will eventually be on ALL of MediaTek’s chips, PER the CEO.

PC OEMs will increasingly be trying to DIFFERENTIATE themselves from mobile devices.

SUPERIOR GRAPHICS is the DIFFERENTIATING FACTOR.

Nobody needs more CPU power from today’s PCs.

But, there are plenty of people who would like to buy a PC, that plays AAA games right out of the box, without additional EXPENSIVE modifications.

In mobile devices, POWER consumption is paramount.

HSA innovations embodied in APUs will INCREASE EFFICIENCY.

john

What you call natural interfaces doesn’t convince me, nor do smartwatches, Google Glass, and Oculus Rift… They are nice and all, but they are gadgets… When all is said and done, you will be best served by a mouse, a keyboard, and some sort of screen.

I don’t think people want more graphics; a Richland APU already offers enough for a casual gamer (hell, it runs Crysis for us “poor” 780/1042P gamers). What is important, however (and even the author misses it), is Java, and more specifically Oracle’s involvement. B-trees (and derivatives) are the structure most commonly used for indexing in databases, and the algorithms for searching them can be parallelized and shifted to a GPU (this is just one of the many advantages an HSA-compiled DB could get from the GPU). Combined with Oracle’s drive to seamlessly integrate HSA features into the next versions of Java, which would dramatically improve Java execution by shifting most if not all array walks and computes to the GPU, we might see a huge shift towards HSA. (If Oracle comes out with the first HSA-enabled DB, they would absolutely ridicule other DB engines, running on inexpensive hardware no less.) The HSA push is crucial for APUs and AMD, and Java is just the right language to spearhead it, as most codemonkeys are not able to code OpenCL, let alone rethink algos in a parallel fashion. Java and Oracle DB are industry engines; if they make the jump to HSA, lots will follow! I’m pretty sure MS will jump aboard not long after Java’s first HSA JVM, followed by PHP libraries (and then core)… whatever follows is not relevant any longer after that…

In this HSA thing, no one is more of a linchpin than Oracle!

Joel Hruska

Java integration is actually really important, potentially. Boring, but important. And Oracle sent folks to APU13, though I confess, the deep programming details were over my head.

Java support is probably critical when it comes to making a use case argument for HSA across AMD and non-AMD devices, mobile, tablet, and desktop. When it comes to games, specifically, I’m not sure how much an advantage that offers. I didn’t think most games used Java.

john

Hrm, I don’t think ANY gamer uses Java :D Well, there are a few Java games, but nothing world-shattering (except Minecraft :P). But Java is important, if not instrumental, in large enterprise software, and lately in more and more DBs too, so I think AMD wants to make a big server move soon…

Ken Luskin

If people do NOT want more graphics WHY does Nvidia exist?

WHY do people pay $500 for a discrete graphic card?

>>>” you will be best served by a mouse a keyboard and some sort of screen.”<<<

Do you live under a rock?

The future is about INCREASINGLY more natural user interfaces.

I guess everyone using a touch screen phone or tablet is wrong, and you are right?

john

Well… I mean doing work, not gadgets. Ultimately the interface that is the most productive wins the office space, and whoever wins the office space wins the world, as nearly everybody works nowadays :D.

I meant the regular joe gamer not enthusiasts and I-need-120fps-so-my-3-3d-4k-screens-work guys… I mean the guys that enjoy a game once in a while on a screen that is not a gaming screen but a simple office screen.

Enthusiast markets will always exist, but they are niche. I know lots of guys that don’t ever need more than what a Richland APU or a Haswell i5 can deliver in graphics performance.

Ken Luskin

>>>” whoever wins the office space wins the world”<<<

Wrong again! MSFT won the office a long time ago…

So, why is Apple's market cap TWICE the size of MSFT?

WHY is GOOGLE's market cap bigger than MSFT?

You really have NO idea what you are talking about!

QUALCOMM is increasing GRAPHICS on their mobile chips…WHY?

There were 275 MILLION game boxes sold over the last 8 years.

There are an equal amount of PCs that have been upgraded with graphic cards.

GAMING is NOT a NICHE market!

The video game industry produces more total REVENUES than Hollywood with all its movies.

Are movies a niche market?

You are CLUELESS!

You know a lot of OLD nerds!

john

Apple’s market cap, really?? Because they squeeze 80% of the market’s profit out of 5% market share (there are no bigger fans than Apple has!). That’s why they have such a market cap. And I was talking about the user interface (read: keyboard/mouse) and the productivity you get from it.

Gaming at large is very well accommodated by today’s APUs. Most gamers are casual gamers; why do you think FarmVille had more people playing than WoW? Did they play on a high-end machine? Nope… did they need a GPU? Nope. Another case, Angry Birds… or… Minecraft… any GPU? Well, maybe, but not a high-end one.

So your arguments are kind of stupid. In the mobile arena more power is just bragging rights, as with any gadget. Hardly anybody uses the entire power of an S4 or 5s.

As to touchscreen vs. keyboard… I don’t look at the keyboard when typing and type probably around 100-150 words a minute when working… beat that with today’s touchscreens or by flapping like a seal in front of a screen… yeah, good luck with that.

Wow, really, 200M game boxes? Wow… how about the billions of keyboards and mice sold along with the entry-level to mid-level rigs sold annually?

Jpoint

Well, looks like no one is going to force Intel to actually boost the performance or drop the prices of their desktop chips anytime soon. What a shame that AMD wasn’t managed more effectively years ago to maintain R&D parity with Intel. If Intel’s plan to crush ARM on the low-power front actually succeeds in the next few years, we’ll essentially be looking at a monopoly. Here’s to hoping some competition comes along.

Joel Hruska

Let’s not forget the predatory rebate practices that left AMD in a situation where it couldn’t give OEMs chips for free — because free chips from AMD wasn’t enough to offset the value of Intel rebates.

But yes. Mismanagement also played a part.

Jesse

Cry me a river. AMD lost because its chips sucked ASS!!!! Intel did pull some funny business but AMD’s demise is solely on its own shoulders. Maybe someone else will take AMD’s place once they are gone and actually bring GOOD products out into the market just like Nvidia and Intel DO.

Joel Hruska

Just as a one-time notification: You bring nothing useful to the conversation and while I cannot ban you from commenting, I will not respond to anything you write.

Jesse

AMD FANBOI!!!! And by the way you just did!!! I WIN!!!!

Matt Menezes

I think there’s been more pressure placed on Intel by ARM (Qualcomm, etc.) in a relatively short time span than AMD has been able to apply in years. The thing is, if/when AMD surpasses Intel, it’s still using x86, so Intel can regain any lost market share quickly by releasing a better x86 chip. If ARM surpasses Intel, it’s a completely different instruction set and code base, so it’ll be much harder to move things back in Intel’s favor.

This is one reason I think Intel waited too long to get serious with mobile designs (e.g. using their latest fabs for the relatively low margin mobile parts). By waiting, they let an entire ecosystem get built up using a competing instruction set and architecture. Granted, Intel’s process advantage has historically kept them firmly ahead of the competition. If they apply that process advantage to mobile – presumably taking some capacity away from their higher margin server chips – they can maybe move Android and/or iOS away from ARM and make it worth their while in the long term.

Steve Smith

Can’t see that happening, to be honest. I think the big players (Qualcomm/Samsung/Apple etc.) like the fact that they can design their own CPUs on a well-founded instruction set, the way they want them. Also, Intel will never be able to compete on cost: compared to the peanuts these companies pay for a licence from ARM, a finished product from Intel is going to cost a hell of a lot more. Maybe some companies without in-house design teams may take the bait, but most certainly not the big players. This of course is just my opinion.

Jpoint

I agree completely. ARM definitely has had a noticeable effect on Intel – we see it in the almost complete focus on power efficiency since Sandy Bridge.

My concern is that if Intel is successful in outpacing the ARM-licensees in power/performance in 2-3 years, there may be a mass change to Intel/x86 for mobile devices. Android is more prone to this than iOS since Android can more easily run on different architectures – but we all know Apple isn’t shy about switching platforms when technology changes either. Without another x86 competitor, this would essentially give Intel the whole market on a platter. Ironically, if this scenario comes about, AMD’s work on ARM will be seen as a fatal distraction from their x86 R&D.

john

There is more to the story than just power…

x86 is CISC, meaning lots of dedicated (per-instruction) hardware on the cores.
ARM is RISC, meaning very little dedicated hardware. It’s easy to scale: you just add more of everything, where “everything” is very little, unlike CISC, which requires duplicating a huge part of the hardware to scale.

Now… AMD with its APUs has: x86 (CISC) and GCN (RISC)… Many of the x86 instructions can now be performed by either one or multiple GCN cores at once on the GPU part. Meaning AMD can now, thanks to the unified access to memory and heterogeneous queuing, ship workloads from one x86 core to multiple GCN cores and back without the old memory-copy overhead.

What this means is that they can basically strip the x86 cores down to bare metal and ship any and every instruction that CAN be efficiently executed on GCN to the GPU. This means much finer-grained power management of the entire APU (instead of lighting up a huge core for nothing, you just scale as needed). Meaning that we will probably soon see x86 chips doing the ARM limbo: running cores at a tenth of the frequency, shutting them down, etc. The second interesting thing that could happen is to have equal numbers of ARM and x86 cores on the same die, making the chip able to execute both instruction sets in a very efficient way, with both leveraging the GPU power on equal terms. So no matter what code you input, you will use the same basic powerhouse… the GPU. (I think we’re about 4 years out as we speak, but if HSA takes off, that’s where it might go.)
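A toy break-even model makes john’s zero-copy point concrete: offloading to the GPU only pays when the compute saved outweighs the bus transfer, and a unified address space drives that transfer cost toward zero. All numbers below are made up for illustration, not AMD figures.

```python
# Toy break-even model for CPU -> GPU offload (illustrative numbers only).
# Offloading pays off when GPU compute time plus transfer time beats the
# CPU time. With a unified address space (hUMA), copy_bytes is effectively
# zero, so even small kernels become worth shipping to the GPU.

def offload_wins(work_ops, cpu_ops_per_s, gpu_ops_per_s,
                 copy_bytes, bus_bytes_per_s):
    cpu_time = work_ops / cpu_ops_per_s
    gpu_time = work_ops / gpu_ops_per_s + copy_bytes / bus_bytes_per_s
    return gpu_time < cpu_time

# A small 1M-op kernel over a 16 GB/s bus: the 64 MB copy dominates.
print(offload_wins(1e6, 1e9, 1e10, copy_bytes=64e6, bus_bytes_per_s=16e9))  # False
# Same kernel with zero-copy shared memory: the GPU wins.
print(offload_wins(1e6, 1e9, 1e10, copy_bytes=0, bus_bytes_per_s=16e9))     # True
```

The crossover point shifts with kernel size: large enough workloads amortize the copy even on a discrete card, which is why GPGPU historically only paid off for big batches.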

But as to your other point? Remember, Kabini / Kaveri both translate x86 code to internal micro-ops, and that micro-op translation isn’t directly schedulable to the GPU. Maybe one day the GPU will be able to read the code directly, but that’s why HSAIL (HSA Intermediate Language) exists. Its purpose is to provide a vital translation layer between CPU and GPU.

john

Yes, but there’s more to the x86 arch than the decoder… some instructions are not that well suited for GCN.

As to the microcode you’re right: I was talking 4 years in the future, not right now. Right now the very basic building blocks are laid out. But still, hUMA is very interesting to me, just because I’m tired of shipping things back and forth between GPU and CPU via OpenCL.
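To put a number on the back-and-forth john describes: in a classic discrete-GPU OpenCL flow, every CPU/GPU handoff copies the working buffer across the bus, while a hUMA-style shared address space just passes a pointer. The helper below is a hypothetical illustration of that accounting, not an OpenCL API.

```python
# Count the bus traffic a CPU<->GPU pipeline generates. In the classic
# OpenCL model each handoff copies the whole buffer; with shared virtual
# memory (hUMA) only a pointer changes hands, so no bulk copy occurs.

def bytes_moved(buffer_bytes, handoffs, shared_memory=False):
    """Total bytes copied across the bus for a staged pipeline."""
    return 0 if shared_memory else buffer_bytes * handoffs

MB = 1024 * 1024
# A pipeline bouncing a 64 MB buffer across 6 handoffs moves 384 MB.
print(bytes_moved(64 * MB, handoffs=6) // MB)                 # 384
print(bytes_moved(64 * MB, handoffs=6, shared_memory=True))   # 0
```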

Joel Hruska

I would like to see this level of interoperability, certainly. I don’t know when / if it’ll make sense to build things this way, but if it works, I think it’s the holy grail of the HSA concept.

Principle

You cannot say that AMD has not affected Intel’s processors, considering Intel has now devoted more and more die space to the iGPU, and the fact that they even made a single-package CPU-GPU was to compete with AMD. AMD leads the way, and Intel spends more money to do something quicker and with better manufacturing, and uses its existing grip on the industry to put itself in a better position.

Ken Luskin

Joel Hruska seems to completely MISS AMD’s goal of integrated graphics, which is embodied in the HSA foundation.

>>>”“Today,” says AMD’s Hegde, “the CPUs in our APUs may be lower in performance in your tests against Intel. But that’s not an indication of where we are going. We’re now embracing the low-power space in a very strong way, so we are building CPUs that are going to have very good performance with a very low power envelope. So when we say ‘balanced platform,’ we’re not saying that you have to take one or the other. We’re saying that, in a balanced platform, for every workload, do it in the place that makes the most sense. Intel’s approach of doing everything on the CPU is just wrong, because when you put a balanced workload on just the CPU, your power draw is dramatic and unnecessary. That’s why we don’t think that’s a good model. We’re going to make the cost of transition from CPU to GPU next to nothing. Then it’s up to each application to choose the right execution engine in terms of performance and power.”<<<

>>>”HSA targets two things: ease of programming and performance per watt. Once GPGPU compute comes into play, analysis shows that the GPU is 4x to 6x more performance per watt efficient than a CPU.”<<<

__________________________________________________________________
Once you begin to understand AMD’s long-term strategy, combined with the demand for more graphics capability from most consumers, the above article becomes mostly irrelevant.

Joel Hruska

*dryly* I’ve been writing about HSA since AMD dreamed up the idea and called it Fusion back in 2005 – 2006. I attended APU13 last week. Please, tell me more.

Ken Luskin

You attended, but you are too DENSE to understand.

Why do you think AMD showed the KAVERI chip running against Intel’s top chip with a discrete GPU?

Kabini is a cheapo chip for mobile devices, and does NOT have HSA.

Are you suffering from early dementia?

Joel Hruska

I’m not the one with the understanding problems, Ken. Jaguar is the CPU inside Kabini. Xbox One and PS4 are powered by *Jaguar/Kabini*, not Steamroller/Kaveri.

Piledriver / Bulldozer have 16K L1 data caches and share a 64K instruction cache between two cores. So does the upcoming Steamroller. But Jaguar? Jaguar has 32K L1d and 32K L1i. Two blocks, four cores per block, just like Kabini. 1.6 – 1.8GHz clock speeds — like Kabini.

And so on, and so on.

It’s a Jaguar, my friend.

Jesse

The Jaguar failed miserably and so will AMD. Crapdriver and S**Tdozer are pathetic as well. PS4 is a steaming pile of PURE CRAP!!! It uses too much power and can’t do very much at all.

Joel Hruska

Furthermore, discussions with AMD have indicated that while the Xbox One and PS4 support some HSA features, they do not claim HSA compatibility. Exactly which features and to what degree is genuinely unclear. Sony’s remarks point to fairly full support, Microsoft hasn’t entirely clarified the issue.

So it’s fair to say they support HSA-like features, yes. But they may or may not be fully HSA-compatible.

Ken Luskin

>>>”Not surprisingly, the PS4 (Fig. 1) is foremost a gaming console but it is a lot like a PC (see Where Has My PC Gone? It’s Gone Gaming). It is a rather compact and stylish PC that is powered by a custom SoC (system-on-chip) from AMD that supports AMD’s Heterogeneous System Architecture (HSA). HSA is also supported by the new AMD Kaveri Accelerated Processing Unit (APU) designed for PCs (see Unified Heterogeneous Computing Arrives).”<<<

>>>”HSA incorporates the heterogeneous uniform memory access (hUMA) support as well as hQ (heterogeneous queueing). CPU and GPU share the same virtual memory address space courtesy of hUMA. The hQ support allows threads to initiate GPU tasks directly.”<<<

>>>”As with existing and future APUs, the CPU and GPU share main memory. In this case it is GDDR5. For the standard APUs, it is DDR3 or DDR4 depending on the chip. The Kaveri is the first non-gaming version that has the HSA support that allows the CPU and GPU to share the virtual memory as well. This simplifies CPU/GPU application integration.”<<<

____________________________________________________________

1) In another publication, EE Times, the reporter Rick Merritt tried to suggest that HSA innovations would not be supported by “developers” for a long time. I guess this guy missed the fact that AMD APUs are powering the new Sony PS4 and MSFT Xbox One.

2) The $25 billion video game industry is DEVELOPING ALL their new games for the new Sony and MSFT game consoles.

3) Which means that all the developers writing software for those new video games will be optimizing them for AMD’s APU architecture, which is employing HSA innovations.

4) Which means that ALL the new video games will run better on a KAVERI chip, that employs a similar architecture as the chips that are in the new consoles.

5) Many of the new game developers are also using AMD's new application programming interface (API) called Mantle, which will improve performance by approximately 20%, when run on the Kaveri chip.

6) The powerful Kaveri is compared to a top $300 Intel chip paired with an $80 Nvidia discrete GPU, and produces frame rates that are almost twice as good while running the game Battlefield 4.

7) The massive use of mobile devices for playing low level video games indicates that people LIKE playing video games.

8) Therefore, if the average person needs a new PC/laptop for whatever reason, it makes sense that they would want one that has better graphics capabilities, everything else being equal.

9) Since most video game enthusiasts play on either a game console or a PC, the first Kaveri chips will be the desktop version. But AMD does plan on releasing a laptop version a few months later.

10) AMD's Kaveri APU will probably be priced ($150 or so) at a fraction of the cost of Intel's top integrated graphic processor, the Iris Pro, which is priced at $650.

11) As the cloud increasingly provides computing power to consumers, cloud providers will look to AMD to provide more chips that are GPU oriented.

Bottom line, as Graphics become increasingly more important to consumers, AMD will benefit.

Joel Hruska

I was at APU13. Games are being optimized for Kabini, not Kaveri. Mantle will offer significant benefits, in games that support it, which right now is not most, or even many.

And yes, there will be Steamroller laptop chips. That’s shown above.

Ken Luskin

Kaveri has HSA, same as the Sony and MSFT consoles.

You are CLUELESS!

Jesse

HSA is WORTHLESS!!! AMD IS WORTHLESS!!!! AMD is DOOMED!!!!

Ken Luskin

Joel, the more you write the more EVERYONE can see what a total imbecile you are.

Kabini is a mobile chip you moron.. Why would AAA video games be optimized for a mobile chip?

You know NOTHING about AMD… how embarrassing for you!

Joel Hruska

*laughing out loud*

Jaguar is the CPU half of the Kabini APU. A mobile chip, two-issue processor. Both the Xbox One and PS4 have two clusters of four cores each. And when publications refer to the PS4 and Xbox One as being powered by Kabini, this is precisely what they’re referring to.

The GPU inside Kabini is GCN-based. The GPU for both the Xbox One and PS4 is GCN-based.

Ken Luskin

Answer: They are NOT. It’s just some GOOFY dude who somehow has a job writing for ExtremeTech…
A little site that likes to bash AMD.

Please continue to embarrass yourself..

The people who own ExtremeTech and everyone who reads this comment section are thinking that maybe you should NOT be writing for them…..

You know NOTHING about HSA.

You know NOTHING about AMD.

You know NOTHING about the large trend toward GPGPU.

But, please continue to make a fool out of yourself, if that is what makes you happy…..

Joel Hruska

The Xbox One and PS4 are semi-custom designs that combine the Jaguar CPU with a large, on-die GPU based on GCN. Period. End of story. HSA support suspected, not confirmed. Exact degree of support? Unconfirmed. HSAIL support? Unlikely. OpenCL? Unsupported, though that’s a Sony software decision, not a fundamental issue of compatibility.

Believe as you like. If you can’t be bothered to read and learn, I’m not going to waste my time educating you. But thank you for the amusing conversation.

Ken Luskin

Not surprisingly, the PS4 (Fig. 1) is foremost a gaming console but it is a lot like a PC (see Where Has My PC Gone? It’s Gone Gaming). It is a rather compact and stylish PC that is powered by a custom SoC (system-on-chip) from AMD that supports AMD’s Heterogeneous System Architecture (HSA). HSA is also supported by the new AMD Kaveri Accelerated Processing Unit (APU) designed for PCs (see Unified Heterogeneous Computing Arrives).

HSA incorporates the heterogeneous uniform memory access (hUMA) support as well as hQ (heterogeneous queueing). CPU and GPU share the same virtual memory address space courtesy of hUMA. The hQ support allows threads to initiate GPU tasks directly.

As with existing and future APUs, the CPU and GPU share main memory. In this case it is GDDR5. For the standard APUs, it is DDR3 or DDR4 depending on the chip. The Kaveri is the first non-gaming version that has the HSA support that allows the CPU and GPU to share the virtual memory as well. This simplifies CPU/GPU application integration.

________________________________________________

HSA is a CHIP architecture!

You know NOTHING about AMD.

You know NOTHING about their APU roadmap.

You know NOTHING about HSA.

The only thing amusing is what an utter FOOL you are making of yourself.

I ask you again:
WHY would AAA game developers be optimizing for a mobile chip like Kabini, which will be replaced in 6 months?

That is what you said.. Or has your early onset dementia eliminated what you wrote from your memory already?

Ken Luskin

WHY was Sony PS4 developer invited to make a presentation at APU13, which is ALL ABOUT HSA?

Or were you too high to understand that HSA is integral to the future of AMD?

If Sony is NOT interested in the functionality of the HSA architecture, WHY did they help design and pay for a semi-custom APU that uses HSA?

You are so incredibly unknowledgeable about AMD/HSA and the future of computing, it is sad that you do this for a living….

Joel Hruska

It’s funny you mention that. The Sony keynote was terrible. I expected a discussion of why they chose AMD, or what specific benefits the APU offered, or what performance scenarios it made simpler.

Instead we got a dull, boilerplate, very high level overview of the PS4 in general terms. Utterly unhelpful. And there were no tech sessions or PS4-centric developer sessions. Nothing about using HSA capabilities on the PS4, nothing about the Xbox One. Some of the high-level sessions were just do-overs from GDC 13.

As I’ve covered for ET, I expect the PS4 does implement some of HSA. Whether it includes the new IOMMU hardware or support for HSAIL? It may not. HSAIL is a framework for implementing HSA on many types of hardware. Sony may have felt it wasn’t important, since the only support they care about is their own.

Principle

Hello, the PS4 uses hUMA, where both the CPU and GPU run off the GDDR5, independently coherent. This is the first step towards HSA!!! Which is just like KAVERI, not like Kabini. The internals are like Kabini, as far as Jaguar cores, but the ARCHITECTURE is more like Kaveri. You have gone through all of these comments from Ken and still not figured this out. Kabini has x86 cores, and so games are being optimized for highly parallel x86 cores; it doesn’t matter what that core from AMD is, Jaguar or Steamroller. The games are optimized for x86 cores.

Joel Hruska

1). The PS4 probably uses hUMA. It sounds as if it does. Sony won’t confirm or deny it. Neither will AMD. So when I say “Unconfirmed,” I mean “Unconfirmed.” There is no guarantee that the console implements HSA in the exact manner specified by the HSA Foundation, even if the capabilities are essentially identical.

2). The Kabini / Kaveri distinction is enormously important when talking about game optimization. A game optimized for Steamroller will be designed for high latency, long pipelines, and a relatively slow L1/L2 cache subsystem.

A game optimized for Jaguar is optimized for low latency and shorter pipelines. The two chips have different branch predictors, which makes a significant difference in overall performance. There are certain instructions that actually execute faster on Jaguar than Piledriver, even though PD is the “big core,” even after accounting for the difference in clock speed.

In other words, when you talk about optimization, CPU architecture matters and you can’t just say “x86 cores.”
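A small sketch of the kind of uarch-sensitive choice this implies (illustrative, not from any console SDK): the same operation can be written to lean on the branch predictor or to avoid branches entirely, and which version wins depends on predictor quality and pipeline depth, exactly the per-core traits that differ between Jaguar and Piledriver.

```python
# Two ways to clamp values: a branchy version that leans on the branch
# predictor, and a branch-free version using min/max. On most ISAs the
# second compiles to conditional moves, so a deep pipeline never pays a
# misprediction flush; on a short pipeline with a good predictor the
# branchy form can win instead. Both produce identical results.

def clamp_branchy(xs, lo, hi):
    out = []
    for x in xs:
        if x < lo:
            out.append(lo)
        elif x > hi:
            out.append(hi)
        else:
            out.append(x)
    return out

def clamp_branchless(xs, lo, hi):
    return [min(max(x, lo), hi) for x in xs]

data = [-5, 3, 12, 7, 100]
print(clamp_branchy(data, 0, 10))     # [0, 3, 10, 7, 10]
print(clamp_branchless(data, 0, 10))  # [0, 3, 10, 7, 10]
```

Tuning choices like this per-core are what "optimizing for a microarchitecture" means in practice.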

Principle

Do you really believe a Kabini processor will do better than a Kaveri one? Do you really believe the game developers are keying in on the L1 or L2 cache speed? Or are they coding their games just like they always do for a PC. Do the game developers also “optimize” for Intel CPU cores versus AMD cores? Or do they just make them run on the X86 instruction set and expect the CPU to keep up with the GPU? Do you expect that if I paired a Kabini APU with an HD7870, it would work better than my FX-8320 and the HD7870?

Joel Hruska

Historically, yes, those are precisely the sort of low-level characteristics that developers for consoles key on. Exposing them in a way that’s simple to use is an ongoing challenge for developers. If you go back and look at the Xbox 360 / PS3 discussion, you’ll find that programmers roundly preferred the Xbox 360 initially *precisely* because it was easier to take advantage of the hardware at a low level. Cell was such a unique processor, everyone had to figure out how it worked and write compilers to target it effectively.

The Xenos GPU in the Xbox 360 could look up texture data directly in the CPU L2 at very low latencies. This was a very low-level way of boosting performance that developers exploited. Nothing comparable existed on the PC side until HSA, and even there, the function isn’t identical.

So yes, developers care about things like latency in the L1/L2 cache subsystem. They care about instruction execution speed and the ability to extract parallelism effectively. Doing this is why consoles have performance advantages — they run “close to metal.”

PC developers don’t do as much of this — and that’s why PCs are slower on the same hardware. These are the kinds of discrepancies Mantle is targeting.

So, to answer your final question: No, a Kabini APU + HD7870 running Windows will not be as fast as your FX-8320 with an HD 7870.

But an eight-core Kabini with an optimized link to your HD 7870, running a version of a game explicitly built to take advantage of the six available CPU cores and sharing data through some form of HSA implementation (because it’s virtually certain that *some* degree of HSA capability exists) might very well be as fast as the FX-8320 + HD 7870.

Bear in mind, one of the changes AMD made to the PS4’s integrated GPU is a change we know exists in the R9 290/R9 290X. It has eight ACEs, not two. It’s designed to juggle more compute workloads simultaneously — to lift work off the Jaguar CPU.

Principle

And Thank You for posting that last bit, which pretty much sums up why the PS4 is HSA, to better use that iGPU for GPGPU.

Joel Hruska

To hit this from a different angle, it’s not a question of which chip does “better.” Clearly if Sony and Microsoft had prioritized performance above everything, we’d have larger SoCs, higher power consumption, and bigger CPU/GPU blocks.

Sony and MS picked Kabini because Kabini met their performance and power targets. Would Kaveri be better? Maybe. But the Bulldozer architecture is an odd duck. It can only dispatch two instructions per clock cycle, and it feeds cores round-robin. This introduces latency and pipeline delays, and the L1 cache miss rate is much higher (about 20% higher) if you use both cores per module.

Now, Kaveri should fix this by allowing for two instructions per core, per cycle. A quad-core Kaveri should decode eight instructions per cycle, where a quad-core Piledriver could only do four. But that only brings it back to Kabini’s eight-core level, and eight cores of Kabini are much smaller than a quad Kaveri.

(Also, Piledriver’s 256-bit AVX implementation is broken. Hopefully Kaveri fixes it. But there’s no advantage in using PD’s AVX implementation over Jaguar’s, and I don’t know if AMD has fixed the problem with KV yet.)
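The decode arithmetic above, spelled out with the figures Joel states in-thread (taken as the comment’s assumptions, not checked against vendor documentation):

```python
# Per-chip decode throughput = cores x sustained instructions decoded
# per core per cycle, using the figures from the comment above.

def decode_per_cycle(cores, inst_per_core):
    return cores * inst_per_core

# Piledriver: two cores share one decoder, fed round-robin, so a
# quad-core chip effectively sustains one instruction per core per cycle.
piledriver_quad = decode_per_cycle(4, 1)
# Kaveri/Steamroller: dedicated decode hardware, two per core per cycle.
kaveri_quad = decode_per_cycle(4, 2)

print(piledriver_quad, kaveri_quad)  # 4 8
```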

Mariu

Kabini is a low-power chip. It’s stupid to optimize games for it. Games will work way better on Kaveri, and DICE mentioned optimizations for AMD’s future APUs.

Joel Hruska

Console games are, by definition, highly optimized. And the Xbox One and PS4 use Kabini. Therefore, games are being optimized for that microarchitecture.

That does not mean PC versions won’t be optimized for other uarchs.

Lee Saenz

No PCIe 3.0 support yet for AMD…. That will hurt them majorly! Gamers demand full access to their hardware, so all compatible systems will suffer without it. A PCIe 3.0-compatible motherboard, VGA, and CPU are all required to make it work, plus enabling it on your OS…. If one component lacks it (for example, the AMD CPU), true PCIe 3.0 technology will NOT run. So, just as DX11.2 only runs on Win8.1 with a DX11.2-capable video card, you will not take advantage of the latest gaming technologies unless they are supported across your entire system’s hardware components! Some will argue the difference is not that great, and will justify doing without the latest tessellation, texture shaders, and other technologies like “Mantle”…. But I for one will try to make my games look as good as possible for enjoyment and overall visual experience. Not to mention all the eye candy! So be aware, do your homework…. I’ve always loved AMD, and I still have many AMD components….. But PCIe 3.0 is something that AMD is lacking, and all of your PCIe 3.0-capable hardware will not run in 3.0 mode on any AMD CPU-based setup.

Lee Saenz

Only one MB has 3.0…. Sabertooth, is that right?

Joel Hruska

There’s a Gigabyte board that uses PCIe 3.0. Enabling it is a matter of ensuring proper timing and design.

True. Should have said A7 — I was actually reading that article while writing this story. Fixing that.

john

Apple’s 5s chip is the first ARM 64-bit chip out there, isn’t it?

Why compare older 32-bit ARM chips with Intel’s Atoms, which are 64-bit chips? The comparison is quite biased. 64-bit instruction sets have a huge advantage over 32-bit ones. Basically it can amount to almost double the horsepower (in certain scenarios even more) in specific workloads.

If we compare 64-bit chips: two Cyclones beat two Intel x86 cores by 50%… that’s the important part (if the ashraf aesra fanboy from your link is to be believed, that is…)

Remember, this is the first ARM 64-bit CPU… Intel is at gen 6 or so…

And there is also the other aspect… Bay Trail is 22nm while Apple’s chip is 28nm :) and still loses out core for core.

Joel Hruska

Unless someone does a custom load of Windows 8, all of the BT systems shipping that I’ve seen are using 32-bit Windows because of Connected Standby support.

Cyber Revengeance

Kaveri will give AMD more market share.

Sharikou

Google “Setting the record straight on the AMD-Intel-SeaMicro relationship”. AMD’s Feldman exposed your lies. The question remains, why did you lie? Were you paid to lie?

Joel Hruska

Not my story. Not my reporting. Not something I even mentioned in this article, save to say that the SeaMicro acquisition was factual.

Mindbreaker

I have never seen a company work harder to stay alive. That has just got to pay off at some point. Though it looks like they have some simple things they could try at the high end. They could make some of those 6300-series 16-core chips for the regular high-end consumer, with overclocking limited to single socket (so it will not undermine the server market). Serious FX beast. It will use power… so what? Gamers don’t care. With some heavy watercooling the thing will be seriously envied. They would also pick up several small niche markets too, like budget video editing, chess engine people, photo editing, university science stuff, simulations, budget genetic algorithm stuff, CGI… well, the list would be extensive.

And if I were them, I would make 64-core chips for the server market. Forget the die shrink if that is not economical or will slow development; just make enormous sockets. The racks will be smaller, which really matters for some companies. I also think they could take the supercomputer market, especially if they had small boards with 2 sockets. The idea of a rack with 1,280 x86 cores would be very attractive. They would probably have to move the clock down a tad, but I think they would sell. It would take some investment for a new socket and such, but I think it is good bang for the buck. The more serious folks from the above paragraph would love the 2-socket or 1-socket boards.

TVippy

So 990FX is how many decades old now? When are we going to see a new chipset??

Use of this site is governed by our Terms of Use and Privacy Policy. Copyright 1996-2015 Ziff Davis, LLC.PCMag Digital Group All Rights Reserved. ExtremeTech is a registered trademark of Ziff Davis, LLC. Reproduction in whole or in part in any form or medium without express written permission of Ziff Davis, LLC. is prohibited.