Intel will merge its mobile computing and PC businesses Comments

AfterBurner, 18 Nov 2014: "You do physical design? oh boy that sounds so cool and i dont even know what that means.. im a..."
Physical design is another term for ASIC layout-focused design work, where the engineer converts schematic-level logic designs or netlists from the front end into an actual fab-ready format that can be sent to the fab for fabrication, a.k.a. GDSII. Logic designers come up with the actual logic with schematics and simulations, but they do not have as much freedom to control power constraints as we do at the physical level. Turn that around, though, and the logic designers have more influence on the actual logic than us, since we have other priorities: improving power consumption and gate density. In a few years' time I'll look to move into front-end RTL.

K12 is different from the ARM-x86 mash-up people have assumed is coming. My impression from my counterparts at AMD is that the ARM and x86 solutions will be made pin-compatible, or something along those lines, so you can interchange them with ease. Those who need the absolute high end can put in the x86 chip; those on a budget, or using it for some different purpose, go with the ARM solution. Have a dual-socket board and you could have one x86 and one ARM chip running on the same board, doing totally different workloads or handling the same tasks in tandem, playing to each ISA's strengths. I don't think they are planning to have the x86 ISA talking to ARM on the same die; I don't see the point. For PSP security they already have small cores like the Cortex-A5 on die, which are more than enough.

Bulldozer was a chip bet on server loads, and it does well at server loads. People will doubtless say it isn't a match for Intel on servers either; that's because it was two years late. Meaning two generations late: again, poor execution. That chip was supposed to be out in 2009; it actually came out at the end of 2011, so it was doomed in its true intended market simply by being outdated at launch. In the CPU world at that time, if you were one year late you were already playing catch-up. For desktop-type workloads and gamers this was not the perfect chip. Dirk Meyer and the rest of the folks wanted to focus on the large-ASP market: servers. It would have worked out for them if they had launched on time. It is not a poor design; in fact it was a very clever one for the type of workloads it was intended to handle, while ceding some ground to the brand-new Core processors on desktop, though not by the huge margins we see today. They had to take that route because their nodes were well behind what Intel had at its disposal at the time. If it had launched in 2009 the CPU landscape, as far as x86 is concerned, would be a very different place, but there are no ifs and buts in life; you can only learn from the mistakes. Unfortunately, the clear signs coming out of AMD are that the worthless management cancer within AMD is still strong, while engineers always get the axe when layoff rounds come around from time to time. They just laid off 7% of their workforce.

Then there is another thing: let's say AMD does come out with one heck of a chip and beats Intel left, right, and center. Will it matter? You can already see that PC and workstation sales are stagnant. We can thank the social media revolution crowd for that, always stuck on their little phones for everything. It hasn't dipped further, but it hasn't grown either. People are just fine with even 4-5 year old systems, as they are getting the job done just fine. They see no reason to upgrade. Even corporate sectors feel the upgrade cycles have slowed down because there is simply no need. This is another reason why I am almost certain AMD won't even try too hard to match or beat Intel. Their aim should be to offer 90% of Intel's best at three quarters of the price, or better yet at half, if possible. AMD should be a strong value-for-the-buck player again to get interest going. Once they have gained true market share, then they can go about chasing the performance crowns held in Intel's vice grip.
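The value-play arithmetic works out roughly like this (a throwaway sketch; the prices and performance scores are made-up placeholders, not real product figures):

```python
# "90% of the performance at half the price" as perf-per-dollar.
# All numbers are hypothetical, purely to illustrate the arithmetic.

intel_perf, intel_price = 100, 1000   # baseline: Intel's best
amd_perf, amd_price = 90, 500         # 90% of the perf at half the price

intel_value = intel_perf / intel_price   # perf per dollar
amd_value = amd_perf / amd_price

print(amd_value / intel_value)  # the hypothetical AMD part is 1.8x the value
```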

MHanz, 18 Nov 2014: "Yeah I know about the movements of people like Jim Keller, I do physical design so I am forced..."
You do physical design? Oh boy, that sounds so cool, and I don't even know what that means.. I'm a computer science undergraduate..
And I certainly would never have known that about AMD.. Let's hope Keller's team stays together..
Yep, Bulldozer was a bad idea.. two small integer clusters, a shared float cluster, and everything else divided among them.. they took hyperthreading too seriously..
And I heard about the K12 core or something based on ARM.. That sounds good; the last Phenom was K10, so counting the earthmovers as 11, K12 is fine..

AfterBurner, 18 Nov 2014: "1) i didn't mention - 12W was just an example.. its actually what the A7's dual CPU cluster hit..."
Yeah, I know about the movements of people like Jim Keller; I do physical design, so I am forced to have an interest in these things ;) Got some friends at AMD, and they did hint at some big things in terms of design direction. It will certainly bring up their IPC compared to what they had with the Bulldozers, but beating or matching Intel is not a priority. It would be a nice halo effect but not a necessity, as they have a whole package to consider; it's the sum of all the parts that will make their options attractive. I wouldn't accept an offer to work for AMD, though, at any reasonably high salary; they treat their employees like dirt, with false promises of no layoffs, while the management slime gets retained no matter what. TBH I am not sure how long a guy like Jim will stick around when he sees frequent layoffs whenever the Wall Street nut jobs make a lot of noise at every quarterly earnings report. You need lots of people under you to take your designs forward to fruition, and if those people are frequently moving around due to layoffs, or constant threats of them, then the working environment is not conducive to good R&D. AMD's issue is execution: how can that be good with all these massive staff movements? Good, talented leaders in engineering need good staff to back them up. If that backup is unstable, the good guy leaves for another place to make his time worthwhile. I won't be surprised to see him back at Apple, or heading over to Samsung in Texas. A number of AMD chief architects are now at Samsung's Texas R&D, like Brad Burgess and Jeff Rupley.

MHanz, 18 Nov 2014: "1) I don't know about the 12W draw claims, I have however seen actual readings on the Nvid..."
1) I didn't mention it: 12W was just an example.. it's actually what the A7's dual CPU cluster hits under full load..

2) That was certainly enlightening.. I didn't take into account the effect of the instruction set on micro-op count/complexity.. but you should still bear in mind that the Cyclone core is bigger than Haswell and faster in benchmarks.. I know execution resources are no measure of performance, but I'm sure Apple made good use of them..

Trivia: guess who designed the Swift and Cyclone cores.. Jim Keller.. the guy who designed the K7, wrote the x86-64 instruction set, and left AMD :/ He is back at AMD though.. let's see what he comes up with next.

Subsidizing? Wasn't Intel recently fined by the EU for using similarly ungentlemanly tactics against AMD?
Anyway, my guess is Intel's architecture is inherently more power hungry than ARM's for the same work, a problem Intel is aiming to fix by shrinking its chips to 7nm. But what prevents ARM from following along? And rather than spend the money on developing a new power-efficient architecture, Intel chooses to spend it on cowardly subsidies.
Either way, the good news is that the computer business is no longer synonymous with Intel and Microsoft. Sigh of relief.

Anonymous, 18 Nov 2014: "What I really would like is a full 10" windows tablet like surface pro but cheaper with..."
It would be good, but a standard PC CPU kind of needs a fan; if you start to stress it, it will heat up, and a lot! It needs cooling, or a large passive cooler, but it will still heat up.

MHanz, 18 Nov 2014: "You know nothing about architectures in question here, AMD vs Intel is x86 space. The article ..."
Got it, thanks! In layman's terms: Intel is using an adult to do a child's work, and they eat up all the burgers and pizza!!! While the child can get by with just fries! Maybe also with a soda pop...

FrostJoke, 18 Nov 2014: "The next question is "on what year will we get a Smartphone/Tablet with the same desktop-..."
Imagine then a backpack for your battery, and a big cooler, or a grill to cook something!

AfterBurner, 18 Nov 2014: "Agreed.. Agreed.. Agreed.. Agreed.. You're absolutely right.. Finally someone who knows t..."
1) I don't know about the 12W draw claims; I have, however, seen actual readings on the Nvidia Shield of about 8.4 watts peak while running some intensive games for the tablet. 12W would be 2 watts above the absolute limit of 10 watts for tablets. Intel's TDPs are crossing 15 watts, and sometimes they don't stay there: if the OEM cheats at Intel's behest, the spike in power draw can momentarily go even higher. You are going to feel it under that battery.

2) Think of what the decode block is doing :)
But that's not the only complexity when it comes to ARM-based SoCs vs x86; the base instruction set is still designed to handle smaller, bite-sized chunks than Haswell or any competing x86. Then there are plenty of blocks to extract ILP and TLP; the working sets in the ARM world and the x86 world are different, so most of these blocks are not needed, or let's say redundant, in ARM code space. You save transistor count there too. Having more execution resources is misleading; it's what these individual blocks do, in comparison to their x86 counterparts, that matters.
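To see why decode is where the ISAs diverge so sharply, here is a toy software analogue of fixed-width versus variable-length instruction fetch (the byte stream and opcode-length table are invented purely for illustration):

```python
# Toy contrast of fixed-width (ARM-style) vs variable-length (x86-style)
# instruction fetch. The opcodes and their lengths are made up.

def arm_boundaries(code):
    """Fixed 4-byte instructions: every boundary is known up front."""
    return list(range(0, len(code), 4))

def x86_boundaries(code, lengths):
    """Variable length: each opcode must be decoded before the next
    instruction's start is even known, so fetch is inherently serial."""
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        pc += lengths[code[pc]]  # the opcode itself tells us its length
    return offsets

TOY_LENGTHS = {1: 1, 2: 2, 3: 3, 5: 5}                 # opcode -> byte count
x86_code = bytes([1, 3, 0, 0, 2, 0, 5, 0, 0, 0, 0])    # toy instruction stream

print(arm_boundaries(bytes(16)))              # [0, 4, 8, 12]
print(x86_boundaries(x86_code, TOY_LENGTHS))  # [0, 1, 4, 6]
```

The ARM-style case is a pure arithmetic stride, which is why wide parallel decoders are cheap there; the x86-style case has a serial dependency that real decoders spend significant hardware to break.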

I can see them looking to beef up their ARM designs to fit their plans for replacing Intel in their sleek MacBooks. They have never needed Haswell- or Ivy Bridge-class power there; it was frankly overkill. Where they do need it is in the workstations, and there it makes perfect sense to use the fastest x86 they can get their hands on.

Coming back to the topic I was trying to explain earlier, the best analogy I have thought of to compare the ARM and x86 ISAs is this: a bulky muscle man on steroids versus a pro swimmer. The x86 is the muscle man, great on dry land, able to lift more and do it more quickly by sheer might. The swimmer, though, is super fast in the water because of less drag and a well-toned body shape. The muscle man will probably swim fast enough by brute force, but that burns more energy fighting the laws of fluid dynamics as drag slows him down. Different workloads, and a different ISA fits the needs.

AfterBurner, 18 Nov 2014: "i'm afraid you're wrong, sir.. most of the 90's intel and AMD were at parity.. And there were..."
You are right, those old AMD processors were something. I recently found out that my old Athlon 64 3500+ Venice (released in 2005) is able to run freaking Skyrim (just barely, mind you), but still. It's also able to run Crysis 2 and Modern Warfare 3. So yeah, those were the days.

Unfortunately, in my part of the world, people were so technologically unskilled that when you told them you had bought a computer, they would ask, "What Pentium is it?" Even when Intel came out with dual cores, they still measured the performance of a PC by the number that came after "Pentium". Few people had even heard of GHz, RAM, or GPUs, and even fewer knew what they meant.

So with just a little marketing, almost no one would buy AMD CPUs, because they didn't have the word "Pentium" in the name. The only people who would consistently buy their products were knowledgeable gamers, and that was a very small part of the market.

MHanz, 18 Nov 2014: "You know nothing about architectures in question here, AMD vs Intel is x86 space. The article ..."
Agreed..
Agreed..
Agreed..
Agreed..
You're absolutely right.. Finally someone who knows their stuff..
My comment was just to counter a statement made by that guy who thought Intel had always dominated the x86 space.. it was just an AMD/Intel thing, me being an AMD fanboy. Nothing to do with the mobile space.

But if I were to nitpick your answer :P
1) I do not know if Nvidia/Qualcomm quote their chips' TDP (I always wish they did). Even then, in mobile devices the power allowed to the chip is regulated on the basis of chip and/or skin temperature.. E.g. a 12W Tegra would suck all 12 of it while the chip is under, say, 55 degrees Celsius, then start throttling to 5W and then to 3W as the temperature rises..
2) The impact of ISA choice on chip complexity has dwindled over the years.. it's just the decode module that is affected, and the complexity of the decode hardware relative to the entire core has shrunk over the years.. For example, the Cyclone micro-architecture in the Apple A8 chip has as many execution resources as, and more front-end width than, a Haswell core.. Cyclone is consequently bigger than Haswell.. and it actually performs better by a shade, clock for clock, in primitive benchmarks..
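The temperature-stepped throttling described in point 1 can be sketched as a tiny governor (the 55 C threshold and the 12/5/3 W steps are the example numbers from the comment; the 70 C second threshold is my own made-up placeholder, not a real firmware value):

```python
# Toy skin-temperature power governor, modelling the behaviour described
# above: full 12W budget while cool, stepping down to 5W and then 3W as
# the temperature rises. Thresholds and wattages are illustrative only.

def power_budget_watts(temp_c):
    """Return the power the chip is allowed to draw at a given temperature."""
    if temp_c < 55:
        return 12.0   # cool enough: the chip may draw its full budget
    elif temp_c < 70:
        return 5.0    # first throttle step (70 C cutoff is an assumption)
    else:
        return 3.0    # deep throttle to protect the skin temperature

for t in (40, 60, 80):
    print(t, "C ->", power_budget_watts(t), "W")  # 12.0, 5.0, 3.0
```

Real governors interpolate between many trip points rather than using two hard cutoffs, but the shape of the curve is the same: the hotter the device, the smaller the power budget.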