They've reportedly been working on a new core for a while now, basically since shortly after Jim Keller was rehired in 2012. This is the first public admission that their Bulldozer heritage is going to be bulldozed over.

"Jim Keller added some details on K12. He referenced AMD's knowledge of doing high frequency designs as well as "extending the range" that ARM is in. Keller also mentioned he told his team to take the best of the big and little cores that AMD presently makes in putting together this design. "So it is basically Bulldozer based crap. Well done AMD, do some tweak, rename it as 'new core' and try to sell it again. But clients are not that stupid.Reply

I wonder which company took the best of a great mobile chip and a high-frequency chip to make a chip that DESTROYED its competition in every (I believe it was every) category?

Hm... Intel. Now, Intel executes far better than AMD does (although AMD has been getting better, the improvement has not been huge), but there's no reason AMD couldn't take BD + the cat cores and get a chip that captures the positives of both, the way the original Core chips did.

Not really. Granted, we're going to see at least one more BD-derived design between now and then just to keep things moving. But the new x86 architecture is going to combine what they've learned from all the big-core designs (including BD iterations all the way through Excavator) with what they've learned from their small-core designs (such as Jaguar and Puma).

Jim Keller has led the design of some very successful architectures, so I am very interested to see what they cook up. Too bad we won't really see the results until ~2016.

The Project SkyBridge announcement was that there will be a single platform that's "pin compatible" between ARM (A57s) and x86 (Puma+). Presumably this gives AMD the opportunity to reuse IP blocks between the two different chips while allowing OEMs to design a single tablet/device and use either ARM or x86, depending on which they feel is more suitable (think a single tablet design with ARM for Android swapped for x86 for Windows, with everything else remaining the same).

Anyhow, assuming the above correction is true, that is silly. As for Nvidia being on its way to get rid of x86... Nvidia never made x86 chips, and if you mean Tegra... well, Tegra has been an abject failure from a fiscal standpoint.

High-end CPUs are becoming increasingly irrelevant in the consumer space. They are still needed in the workstation/server space, but how many consumer applications are actually CPU-limited? As far as I can tell, most games are GPU-limited, even at 1080p. I saw a review somewhere where you had to go back to a Core 2 Duo before performance dropped in any really significant way, and I suspect that may have been due to the age of the platform (DDR2 + PCIe 1.1) rather than the actual CPU core.
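A rough way to check whether a game is CPU-limited is to time the CPU-side update step in isolation and compare it against the frame budget. A minimal sketch, with an entirely hypothetical stand-in workload (no real game loop here, just the measurement pattern):

```c
/* Toy sketch: time a per-frame CPU "update" step in isolation. If its cost
 * is well under the frame budget (16.7 ms at 60 fps), the frame rate is
 * being set by the GPU, not the CPU. The update() body is a hypothetical
 * stand-in for game logic (physics, AI, draw-call setup). */
#include <stdio.h>
#include <time.h>

static double update(double state) {
    for (int i = 0; i < 1000000; i++)
        state = state * 0.999999 + 1.0;
    return state;
}

int main(void) {
    struct timespec t0, t1;
    double state = 0.0;
    const int frames = 100;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int f = 0; f < frames; f++)
        state = update(state);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                 (t1.tv_nsec - t0.tv_nsec)) / 1e6 / frames;
    /* state is printed so the compiler can't discard the work. */
    printf("avg CPU update: %.3f ms/frame (state=%g, 60 fps budget: 16.7 ms)\n",
           ms, state);
    return 0;
}
```

If the update cost comes in far under 16.7 ms, the CPU is idle most of each frame, which matches the observation that even old CPUs barely hurt game performance.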

If you look at CPU cores, they are tiny. Haswell is actually very large compared to low-power ARM cores and such, yet it is still only around 14.5 square millimeters for a single core, and I believe that includes the 256 KB L2 (it's hard to find data for an individual core). It is definitely time for the CPU to move onto the GPU die. After the caches, the largest component is probably the FPU, due to all of the vector extensions (MMX, SSE, etc.). As far as I know, Intel is still planning on expanding the vector capabilities in the CPU. This makes little sense to me. If they can tie the GPU units in closely enough, then any vector code should just use the GPU. For the CPU, it seems like just a few low-latency scalar FP units would be the way to go, with all of the vector operations executed on GPU units.
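For anyone unsure what "vector code" means here, a minimal sketch: the same float addition written once as scalar C and once with SSE intrinsics (one of the x86 vector extensions mentioned above). The vector loop handles four floats per instruction, and this is exactly the kind of work being argued should move to the GPU units instead:

```c
/* Scalar vs. vector (SSE) versions of the same loop. The SSE version does
 * four adds per instruction on 128-bit registers; supporting ever-wider
 * versions of this is what makes the FPU so large. */
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics */

#define N 16

int main(void) {
    float a[N], b[N], scalar[N], vector[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Scalar: one add per iteration. */
    for (int i = 0; i < N; i++)
        scalar[i] = a[i] + b[i];

    /* SSE: four adds per iteration. */
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&vector[i], _mm_add_ps(va, vb));
    }

    for (int i = 0; i < N; i++)
        printf("%g %g\n", scalar[i], vector[i]);
    return 0;
}
```

Compilers will generate the second form from the first automatically when they can, which is part of why the vector units keep growing even if programmers never write intrinsics by hand.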

We are currently being held back by the form factor, although I have wondered whether anyone will make a laptop (or something other than a console) with GDDR5 connected directly to an APU (it would probably need to be soldered to the board as well). Many problems disappear once they can start stacking memory chips in the APU or GPU package. All that is needed for APUs to take over is a really high-speed interconnect to allow multiple APUs to work together. Nvidia is working on this with NVLink, although only for GPUs; they still need an external CPU in the PC/workstation space. Intel is really working on the same technology as AMD; they are just at different points. Intel has a good CPU but not that great a GPU yet, while AMD has sufficient CPU performance and good GPU tech. Intel still has the marketing position of having the best single-thread performance, but this isn't anywhere near as important as it used to be. Enthusiasts probably place too much importance on it.

For the server space, Intel tried to switch ISA with IA-64, which is now dead. They are stuck with x86, since they failed to make IA-64 fly, and they will not use ARM. The question is, for high-density servers, will x86 be able to match the performance per watt of ARM cores? AMD will be able to put a large number of ARM cores on a server chip, so if they are not too far behind in process tech, this could be a very good solution.
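The trade-off being weighed there is simple arithmetic: aggregate throughput divided by socket power. A back-of-envelope sketch with entirely made-up numbers (the comment gives none), just to show the shape of the comparison:

```c
/* Perf-per-watt comparison: many small cores vs. fewer big cores.
 * All figures below are hypothetical, for illustration only. */
#include <stdio.h>

int main(void) {
    double arm_cores = 16, arm_perf = 1.0, arm_watts = 30;  /* small cores */
    double x86_cores = 8,  x86_perf = 2.5, x86_watts = 95;  /* big cores   */

    printf("ARM : %.2f perf/W\n", arm_cores * arm_perf / arm_watts);
    printf("x86 : %.2f perf/W\n", x86_cores * x86_perf / x86_watts);
    return 0;
}
```

With numbers like these, the many-small-core part wins on perf/W even though each individual core is much slower; whether real A57-class cores actually beat contemporary x86 cores on this metric is exactly the open question.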

So Apple released their first 64-bit custom ARM core in 2013, and AMD is going to do the same in 2016?

That makes them three years late. Why should we expect them to produce a class-leading device? Sure, AMD has experience, but Apple and Qualcomm have a huge head start and proven CPU design teams of their own.

I don't see how the Bulldozer design inhibits single-threaded performance in and of itself. Boosting instruction throughput requires prioritizing it during core design and allocating the resources it needs.

It would seem to be about the same kind and amount of work regardless of the core. I figure AMD didn't have the resources to pursue all of their goals and maybe gambled that certain efforts would work out.