Exactly the problem. The current Atom is a horrible CPU in whatever device and at whatever frequency you put it; I've used them in notebooks and even now in a tablet. Bobcat, on the other hand, was awesome in the netbook range. Temash would be way better suited for all these devices, but as usual OEMs focus on the blue brand with its market jingles and dominance, and in the end it's the end consumer (WE) that suffers from it. If it continues like this we will suffer even more: less innovation, higher prices, a dominant predefined design (something already horrible today). But many people fail to see that... as if they think the Intel system they just bought is a better suited device for everything.

I'd read an interesting perspective on why Intel refrained from "kicking AMD to the curb & on down into the storm sewer" (sorry; can't recall the source). In essence, given the scope of Intel's unquestioned dominance in their chosen markets (& mobile's on the radar), were they to act with any obvious & direct intent to further weaken, or even try to finish off, AMD, Intel would find themselves in an extremely difficult, exceedingly complex & decidedly unpleasant set of circumstances.

By decimating their only possible source of true competition, Intel would be responsible for invoking upon themselves intense anti-trust scrutiny; a result that would be inevitable assuming regulatory agencies were functioning properly.

By backing off a bit, Intel may well cede some amount of business to AMD, but they retain a legitimate market competitor & at the same time continue collecting very healthy margins. The premiums charged on sales can then be used to continue funding aggressive Intel R&D, and, uh, marketing related expenditures, too.

Yeah, but regardless, AMD's had the FAR better CPU now for years. I've been running the lowest-end version of it for a couple of years in a tiny notebook, and from the beginning wished people were using it for tablets.

When you say current Atom, are you referring to Bay Trail? Previous Atoms were pretty bad for Windows boxes, in my personal experience. But the new generation is significantly more powerful (about on par with a Core 2 Duo on benchmarks). They seem pretty reasonable for basic office tasks, though this may not include all the lowest-end versions.

Still don't see why OEMs would choose AMD's APUs in Android tablets over ARM, though.

It's weaker CPU-wise, and most likely weaker GPU-wise, too. We'll see when they come out whether their GPUs can stand up to Adreno 330, PowerVR Series 6 and Mali T628. Plus, it requires quite a bit of power.

In my book no chip that can't be used in a smartphone (and I'm talking about the exact same model, not the "brand") should be called a "mobile chip".

This idea about "tablet chips" is nonsense. "Tablet chips" is just another way of saying our chip is not efficient enough, so we're just going to compensate for that with a much larger battery, which adds more weight, charging time, and of course price.

Even an Atom chip is more powerful per cycle than ARM, and AMD's stuff is more powerful than Atom. I'm not exactly sure what you are using to state that ARM is more powerful, but AnandTech did a great comparison themselves.

By the way, the comparisons in some cases are for quad-core ARM vs. single-core Atom at comparable speeds. Again, really not sure where your "facts" come from.

As I have seen it stated elsewhere, "Simply: x86 IPC eats ARM for lunch while actual performance and power usage will scale together. That is why ARM currently has no real business competing against x86"

Not to mention, the IPC of these processors follows suit with their server Opteron parts, which means they achieve even more instructions per cycle. This is how you are able to have 1.6GHz CPUs that can compete with many common 3-4GHz desktop processors.
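The back-of-the-envelope reasoning in this thread can be sketched in a few lines of Python. All IPC and clock numbers below are illustrative assumptions, not measured figures for any real chip:

```python
# Toy model: sustained throughput ~ IPC x clock frequency.
# The IPC and clock values below are made-up illustrations.

def throughput_ginstr(ipc: float, freq_ghz: float) -> float:
    """Billions of instructions retired per second in this toy model."""
    return ipc * freq_ghz

# A hypothetical wide, efficient core at a low clock...
low_clock = throughput_ginstr(ipc=2.5, freq_ghz=1.6)
# ...vs. a hypothetical narrow core at a much higher clock.
high_clock = throughput_ginstr(ipc=1.0, freq_ghz=4.0)

print(low_clock, high_clock)  # both 4.0: same nominal throughput
```

The point is only that frequency alone says nothing: a 1.6GHz part with 2.5x the IPC matches a 4GHz part on paper, which is the shape of the claim being made here.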

Huh? AMD's Bobcat parts are more powerful than ARM's stuff. ARM's only just now managing to sort of compete with first gen Atom at best, and that with a CPU that's not actually used in much.

And "tablet chips" is NOT nonsense. You have more power budget, and higher expectations for performance in a tablet. If it's "nonsense", why does Apple put bigger chips in their tablets? Why can tablets run Core i CPUs? Why do they typically get bigger chips first even with Android?

I think the main point of this is getting into the mobile market. Temash looks like a great chip for a tablet. The only problem is OEMs not biting because they think they have to put an Intel sticker on the box for it to sell.

Personally, I'm waiting on a good tablet with Temash to finally jump into the Win8 tablet club. Whatever OEM makes a good one first will be getting my money.

I think with tablets, OEMs are even less likely to think they need to put an Intel sticker on it. Joe Schmo knows Intel doesn't mean as much for tablets. The most well known tablets aren't Intel. This is an opening for AMD to be able to get in the game late if they want to.

Yeah, I really have no interest in an Atom tablet, partially even just because of the horrible video.

I've got an 11.6" AMD c50 (lowest end Bobcat) based notebook, and while it's slow, it's still impressive how it runs anything, and in a pinch can even function as a main PC. AMD's got an even lower power Bobcat part with the exact same performance for tablets, but I don't know of shipping computers that used it, and it really would have been perfect. These new ones of course will be even better.

I wonder if the companies building these understand that using AMD would be a selling point... I see "Atom" and my eyes glaze over....

"I should point out that ARM is increasingly looking like the odd-man-out here, with both Jaguar and Intel’s Silvermont retaining the dual-issue design of their predecessors."

It's not just ARM, it's three different current-gen ARM cores. If you're going to pose it as ISA, shouldn't it then just be ARM vs. x86 and not ARM vs. Silvermont and Jaguar?

Besides, MIPS is 3-way in its CPUs targeting this power budget too (proAptiv), and so is PowerPC (the e600, for instance). The reason why Silvermont and Jaguar are 2-way is really undeniable: x86 decoders are substantially more expensive than those for any of these ISAs, even Thumb-2. There's some validity to the argument that x86 instructions are more powerful (after first netting out where they aren't; most critically, the lack of three-operand forms adds a lot of extra move instructions on non-AVX processors), but nowhere close to 50% more powerful.

Qualcomm hasn't said an awful lot about the internals of the uarch, but several sources report 3-way decode and I haven't seen any say 2-way. It's possible that it isn't fully symmetric or is limited in some other way; we don't really know.

Don't worry, I had no idea either until I started working in the industry :) It just means custom circuits that are hand-crafted by a human. This is as opposed to "synthesis", in which the RTL code (written in a hardware description language such as Verilog) is "synthesized" by design software into circuits.

quasi was more accurate than his name implies, but just to expand on it:

The count of custom macros is important because when you switch manufacturing processes, the work you have to re-do on the new process is the macros. Old CPUs were "all custom macro", meaning that switching the manufacturing process meant re-doing all the physical design. A CPU that has a very limited number of custom macros can be manufactured at different fabs without breaking the bank.

Thanks quasi_accurate, Tuna-Fish and lmcd. Your answers were very clear.

If my understanding is correct, would it be safe to assume that Apple's A6 uses custom macros? Anand mentioned in his article that Apple used a custom layout of ARM to maximize performance. Is this one example of custom macros?

You can customize a variety of things, from individual transistors (eg fast but leaky vs slow but non-leaky), to circuits, to layout.

As I understand it, the AMD issue is about customized vs. automatic CIRCUITS. The Apple issue is about customized vs. automatic LAYOUT (i.e. placement of items and the wiring connecting them).

Transistors are obviously most fab-specific, so you are really screwed if your design depends on them specifically (e.g. you can't build your FinFET design at a non-FinFET fab). Circuit design is still somewhat fab-specific; you can probably get it to run at a different fab, but at lower frequency and higher power, so it's still not where you want to be. Layout, on the other hand, I don't think is very fab-specific at all (unless you do something like use 13 metal layers and then want to move to a fab that can only handle a maximum of 10 metal layers).

I'd be happy to be corrected on any of this, but I think that's the broad outline of the issues.

The Embedded G-Series SOCs seem to be exactly Kabini + ECC memory enabled (e.g. GX-420CA and A5-5200). This will probably be the cheapest way to get ECC enabled and better performance than Atom; the next step up would be an Intel S1200KPR + Celeron G1610?

I've been thinking of putting together a Router/Firewall/Proxy/NAS combo ...

Of course, drawing such conclusions from a single benchmark is dangerous. If other benchmarks exhibit more code/data sharing and thread dependencies than Cinebench, their numbers might show a more appreciable scaling benefit from the shared L2 cache.
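As a rough illustration of why scaling conclusions depend so much on the workload, here's a toy Amdahl's-law sketch. The parallel fractions are invented for illustration, and real cache-sharing and thread-dependency effects are more subtle than this model captures:

```python
# Amdahl's law: ideal speedup on n cores when only a fraction p of the
# work parallelizes. The fractions used here are illustrative guesses.

def amdahl_speedup(p: float, cores: int) -> float:
    """Speedup for parallel fraction p on the given core count."""
    return 1.0 / ((1.0 - p) + p / cores)

# A Cinebench-like, embarrassingly parallel workload...
print(round(amdahl_speedup(0.99, 4), 2))  # ~3.88x on 4 cores
# ...vs. one with heavy thread dependencies and shared data.
print(round(amdahl_speedup(0.75, 4), 2))  # ~2.29x on 4 cores
```

A benchmark near the first case barely stresses inter-core communication at all, while one near the second could show a very different benefit from a shared L2.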

Wii U uses a PPC 750? Correct me if I'm wrong, but the PPC 750 family is the same chip that Apple marketed as the G3 up until about 10 years ago? And IIRC, Gekko in the GameCube was also based on this architecture?

Back in the day, the G3 at least had formidable integer performance: clock for clock, it was able to outdo the Pentium II on certain (integer-heavy) benchmarks by 2x. Its downfall was an outdated chipset (no proper support for DDR) and the inability to scale to higher clock speeds. Integer performance may have been fast, but floating-point performance wasn't quite as impressive; good when the Pentium II you're competing against is at nearly the same clock, bad when the PIII and Core Solos are at 2x your clock speed.

Considering the history of the PPC 750, I'd love to know how a modern version of it would compare.

Yes, the GameCube, Wii, and Wii U all use PowerPC 750 based processors. The Wii U is the only known multicore implementation of it, but the core itself appears unchanged from the Wii, according to the hacker that told us the clock speed and other details.

And you're right, it was good at integer, but the FPU was absolutely terrible... which makes it an odd choice for games, since games rely much more on floating-point math than integer. I think it was only kept for backwards compatibility, while even three Jaguar cores would have been better performing and still small.

The Nintendo faithful are saying it won't matter since FP work will get pushed to the GPU, but the GPU is already straining to get even a little ahead of the PS360, plus not all algorithms work well on GPUs.

Not entirely true. The Wii U CPU is highly customized and has enhancements not found in typical PowerPC processors; it's been completely tailored for gaming. I'm not saying it has the power of the newer Jaguar chips, but the beauty of custom silicon is that you can do much more with less (Tegra 3's quad-core CPU and 12-core GPU vs. Apple's dual-core A5 CPU/GPU, anyone? Yeah, the A5 kicked its arse for games). That's why Nintendo didn't release tech specs: they tailored a system for games, and performance will manifest with upcoming games (not these sloppy ports we've seen so far).

Also, the "plethora" of developers that said it sucked (namely the Metro: Last Light dev) said they had an early build of the Wii U SDK and that it was "slow". Having worked for a developer, I can tell you they base their opinions on how quickly and efficiently they can port over their game. The Wii U is a totally different infrastructure that lazy devs don't want to take the time to learn, especially with a newer GPGPU.

If a developer wants to do GPGPU, the PS4 and Xbox One will be highly preferable due to their unified virtual memory space. If GPGPU was Nintendo's strategy, they shouldn't have picked a GPU from the Radeon 6000 generation. Sure, it can do GPGPU, but there are far more compromises in handing off the workload.

Ahh AMD, I love your marketing slides. Let's compare battery life and EXCLUDE the screen. Never mind that the screen consumes a large amount of power, and that when you add it in, the total battery life savings go down tremendously. (That's why Sandy Bridge -> Ivy Bridge didn't improve battery life that much on mobile.) Let's also leave out the rest-of-system power and SoC power for Brazos. It also looks like the system is using an SSD to generate these numbers, which, looking at the target market, almost no OEM will do.
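The screen-exclusion complaint is easy to see with a toy whole-system power budget. Every number below is an invented illustration, not taken from AMD's slides:

```python
# Toy battery-life model: hours = battery energy / total platform power.
# All wattages and the battery capacity are illustrative assumptions.

def battery_life_h(battery_wh: float, soc_w: float,
                   screen_w: float, rest_w: float) -> float:
    """Runtime in hours for a fixed total platform power draw."""
    return battery_wh / (soc_w + screen_w + rest_w)

BATTERY_WH, SCREEN_W, REST_W = 30.0, 3.0, 1.0

efficient = battery_life_h(BATTERY_WH, 2.0, SCREEN_W, REST_W)  # 5.0 h
hungry = battery_life_h(BATTERY_WH, 4.0, SCREEN_W, REST_W)     # 3.75 h

soc_only_ratio = (BATTERY_WH / 2.0) / (BATTERY_WH / 4.0)  # 2.0x "win"
system_ratio = efficient / hungry                         # ~1.33x real win
print(soc_only_ratio, round(system_ratio, 2))
```

Halving SoC power looks like a 2x battery-life win if you ignore the screen, but it shrinks to roughly 1.33x once the fixed screen and rest-of-system power are counted, which is exactly the trick being called out here.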

It was purely the fact that they have an APU with a high-end GPU on it. Intel is nowhere near AMD in terms of top-tier graphics power. Nvidia doesn't have x86. The total package price for an APU vs. a CPU/GPU made it impossible for an Intel/Nvidia solution to compete. The complexity is also much less in an APU system than in a CPU + discrete GPU setup: the GPU needs a slot on the mobo, and you have to cool it as well as the CPU. Less complexity = less cost.

Multiple reasons. AMD has historically been better with console contracts than Nvidia or Intel; those two want too much control over their own chips, while AMD licenses them out and lets Sony or MS do whatever they want with them. They're probably also cheaper, and no one else has an all-in-one APU solution with this much GPU grunt yet.

Good article!--as usual, it's mainly Anand's conclusions that I find wanting...;) Nobody's "handing" AMD anything, as far as I can see. AMD is far, far ahead of Intel on the GPU front and has been for years--no accident there. AMD earned whatever position it now enjoys--and it's the only company in existence to go head-to-head with Intel and beat them, and not just "once," as Anand points out. Indeed, we can thank AMD for Core 2 and x86-64; had it been Intel's decision to make, we'd all have been puttering happily away on dog-slow, ultra-expensive Itanium derivatives of one kind or another. (What a nightmare!) Intel invested billions in a world-wide retool for RDRAM while AMD pushed the superior market alternative, DDR SDRAM. AMD won out there, too. There are many examples of AMD's hard work, ingenuity, common sense and lack of greed besting Intel--far more than just two. It's no accident that AMD is far ahead of Intel here: as usual, AMD's been headed in one direction and Intel in another, and AMD gets there first.

But I think I know what Anand means, and that's that AMD can not afford to sit on its laurels. There's nothing here to milk--AMD needs to keep the R&D pedal to the metal if the company wants to stay ahead--absolutely. Had the company done that pre-Core 2, while Intel was telling us all that we didn't "need" 64 bits on the desktop, AMD might have remained out front. The company is under completely different management now, so we can hope for the best, as always. Competition is the wheel that keeps everything turning, etc.

The point Anandtech was trying to make is that no one is stepping up to compete with AMD's Jaguar, and so they are handing that part of the business to AMD - just as AMD handed their desktop CPU business to Intel by deciding not to step up on that front. If you don't do what it takes to compete, you are "handing" the business to those who do. This is a compliment to AMD and something of a slam on the other guys, not a suggestion that AMD needed some kind of charity to stay in business here.

I want to suggest that you are letting a bit of fanboyism color your reaction to what others say. :)

Perhaps if AMD had been a bit more "greedy" like Intel is in your eyes, they wouldn't have come so close to crashing permanently. Whatever, it has been very good to see them get some key people back, and that inspires hope in me for the company and the competition it brings to bear. We absolutely need someone to kick Intel in the pants!

Good to see them capture the console market (the two biggest, anyway). Unfortunately, as a PC gamer that hates the fact that many games are made at console levels, I don't see the new generation catching up like they did back when the PS3 and Xbox 360 were released. It looks to me like we will still have weaker consoles to deal with - better than the previous gen, but still not up to mainstream PC standards, never mind high-end. The fact that many developers have been making full-blown PC versions from the start instead of tacking on rather weak ports a year later is more hopeful than the new console hardware, in my opinion.

I honestly expect the fact that both the PS4 and X1 are x86 will benefit PC games quite significantly as well. Last gen, devs initially developed for the 360 and ported over to the PS3 and PC, and later in the gen shifted to the PS3 as the lead platform, with some using PCs. I expect now, since porting to the PS4 and X1 will be significantly easier, the PC will eventually become the lead platform and games will scale down accordingly for the PS4 and X1.

As someone who games more on consoles than PCs, I'm really excited for both platforms as devs can spend less time tweaking on a per platform basis and spend more time elsewhere.

Actually I'm pretty sure 90% still made their games on Xbox first and then ported to other platforms. However, with all of them (excluding the Wii U) being x86, the idea of porting down from PC is quite possible, and I hadn't thought about that. It would probably start to happen mid to late in the gen, though.

I know it's definitely not that high for any individual platform, but I do remember a lot of major publishers (Ubi, EA and a bunch of other smaller studios) saying (early-mid gen) that because porting to the PS3 was such a nightmare and so resource-intensive, it was more efficient to spend extra resources initially and use the PS3 as the lead, and then have the game ported over to the 360, which was significantly easier.

While I'm sure quite a large chunk still use 360's as their lead platform, I would say 90% was probably very early in this gen and since then has dropped to be much closer between 360 and PS3.

Although at this point both architectures are well enough understood and accounted for that most engines should make it easy to develop for both regardless of which platform is started with.

I don't think using x86 will benefit devs as much as many expect. Sure, using the same hardware-level architecture may simplify low-level code like asm, but seriously, I don't think many devs nowadays use asm intensively anymore. (I've worked on current-gen console titles for a bit, and never wrote even a single line of asm.) A current-gen game is complex and needs the best software architecture, otherwise it leads to a delayed-to-death shipping schedule. Using asm would lead to premature optimisation that gains little to nothing.

What will really affect devs heavily is the SDK. The XB1 uses a custom OS, but the SDK should be close to Windows' DirectX (just like the XB360). The PS4, if it's in the same fashion as the PS3, will use a custom-made SDK with an OpenGL/OpenGL ES API (the PS3 uses OpenGL ES, if I'm not mistaken). It needs another layer of abstraction to make it fully cross-platform, just like the current generation.

The thing that might be shared across the two platforms is the shader code, if AMD can convince both MS and Sony to use the same language.

Well, all this is pointless if nobody makes good hardware using it. It's the old story. Last-generation Trinity would have allowed very decent mid-range notebooks with very long battery run time and more than sufficient power at reasonably low cost.

Have we seen anything? Nope.

So where is a nice 11" Trinity laptop? Or a 10" Brazos? All either horrible cheap Atom or expensive ULV Core anything.

Are the hardware makers afraid that AMD can't deliver enough chips? Are they worried about stepping on Intel's toes? Are they simply uncreative, all running in the same direction some stupid mainstream guide tells them?

I suspect it is largely the latter; most current notebooks are simply uncreative. The loss of sales comes as no surprise, I think. And it's not all M$'s fault. M.

It could be another instance of Intel paying OEMs not to use certain AMD parts. They've done it before; I wouldn't be surprised if it happens again in areas where AMD might have a better component.

But it's also not totally true. Having worked at Wal-Mart and other big chain stores, I can tell you that many do carry laptops and ultrathins that use Trinity A-series chips and Brazos E-series chips. But right now everyone still wants that iPad or Galaxy Tab, and in general the only people I saw buying laptops and ultrathins were the back-to-school and back-to-college crowds. And of course the Black Friday hordes.

And with AMD having both next-gen consoles under their belt, they and many OEMs may be able to leverage that to drive sales of Jaguar-based systems.