Intel is claiming that they have caught up with ARM in energy usage... using a full-fledged CPU.

No, no they aren't. If tri-gate were applied to Atom, and there were some die shrinking going on, then Atom might be similar to ARM power-wise. Here is the first article you should read; note the graph in particular, and that it is comparing a 32nm (future) Atom with a 40nm (existing) ARM chip. Note also that it is only 'near' the low end of the 'competitive range'; not in first place, and certainly not advancing things (remember, Intel made these slides).

It has been suggested that the 3D transistors could offer gains equivalent to a die shrink. This is the second article you should read. As you can see in the graph midway down the page, this (a combination of die shrink and tri-gate transistors) offers an 18% speed increase, or a reduction of 0.2 V, at around desktop power levels. At lower power/speed levels a 37% speed increase is possible at the same voltage (this would be applicable to Atom).
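To put that 0.2 V figure in perspective: dynamic CPU power scales roughly as P ∝ C·V²·f, so a modest voltage drop buys a disproportionate power saving. A quick back-of-the-envelope sketch (the 1.0 V nominal supply here is my own assumption for illustration, not a figure from the articles):

```python
# Dynamic power scales roughly as P ~ C * V^2 * f, so a small drop in
# supply voltage gives a squared reduction in power draw.

def relative_dynamic_power(v_new, v_old, f_new=1.0, f_old=1.0):
    """Ratio of new to old dynamic power, holding switched capacitance C constant."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Same clock speed, supply dropped by 0.2 V from an assumed 1.0 V nominal:
ratio = relative_dynamic_power(0.8, 1.0)
print(f"power after the 0.2 V drop: {ratio:.0%} of original")
```

At the same clock, dropping from 1.0 V to 0.8 V cuts dynamic power to 64% of the original, which is why a "reduction of 0.2 V" matters so much at these power levels.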

It will take a while for ARM to 'go 3D', but it will also take a while for Intel to get Atom up to scratch. Quite frankly it sucked when it was introduced and has pretty much stagnated ever since, while ARM chips are advancing rapidly; quad-core parts are due next year, IIRC. Intel do seem to be waking up to the issue, however, with three die shrinks planned before 2014 is out.

Also, I think you missed the point of lodestar: we don't need Intel for these HTPC systems, just Windows on ARM. ARM chips are already capable of decoding 1080p60. All we need is Windows 8 with ARM support (or Linux; low volume has tended to keep the price of these things high).

Finally, with regards to GPU power; remember everything will change when the PS4 generation of consoles come out. Stagnation of PC GPU requirements have largely been down to xbox/ps3 abilities and a core of cross platform games.

Intel has been making some noise that x86 applications won't automatically run on ARM chips even if Windows does. MS seems to be taking some objection to this... but there does seem to be an element of truth to it.

The "2.5x/1.9x" claim comes from another anonymous IP (75.57.119.233) near Chicago which has no other contributions and so it's hard to characterize.

However I suspect this was a second person, who thought they could estimate performance based on the existing bandwidth claims (like "1024-bit L2/L3 cache datapath bus") and doubled FPU throughput ("768 GFLOPS", which itself seems to be based on more errors). Notably that Chicago person also added "Quad Channel memory".

If this was a genuine leak or a serious hoax effort, the performance claims and the (micro-)architecture change claims should have come at the same time, not 24 hours apart (in my opinion, anyway).

It sounds like it's the next version of mobile processor. So they're targeting lower power with it, not really much in the way of performance change. Interesting to get some details, though. There's probably more info out there for the googling.

If they have them planned, doesn't that kind of imply that they already know they'll be able to shrink the dies? Why not shrink them all the way down to 11nm or less NOW, and promptly blow all of their competition right out of the water?

A child prodigy at, say, age 10 might state that they will take X-level exams at 13, Y-level at 15, and a degree at 17, but they still have to go through the steps to get the degree; they can't just do the degree NOW.

Tech companies often talk about what they HOPE to do in the future, but they are usually dependent on resolving a whole bunch of issues before they can implement their plans. If their plans are too aggressive, there is an increased risk that they will be unable to implement them, which may well hurt them through product delays. Intel and the like know their field very well, and I'm sure they do risk analysis too. Intel are already the market leader, so they have more to lose and less to gain than others; it is unlikely that they will be the ones to take a major risk.

Yeah, but what I mean is that the issues you face when shrinking a die (electromigration is the only one I'm remotely familiar with) are going to be the same regardless of whether you're going from 45nm to 32nm or 22nm to 11nm; they just increase in magnitude as you shrink further. So if they're going to shrink the die from 45nm to 32nm, they might as well shrink it all the way, because every time you shrink the die the engineering challenges you face are essentially the same.
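For a sense of what "all the way" actually means: the usual industry rule of thumb is that each full node scales linear dimensions by about 0.7x (halving transistor area), so 45nm to 11nm is not one shrink but several, each of which has historically been its own multi-year program. A rough sketch, with the 0.7 factor assumed purely for illustration:

```python
# Count how many successive ~0.7x "full node" shrinks it takes to get
# from a starting process node down to a target node.

def shrink_steps(start_nm, target_nm, factor=0.7):
    """Return (number of shrinks, final node size in nm)."""
    steps = 0
    node = start_nm
    while node > target_nm:
        node *= factor
        steps += 1
    return steps, node

steps, final = shrink_steps(45, 11)
print(f"{steps} successive shrinks: 45 nm -> ~{final:.0f} nm")
```

That comes out to four separate shrink steps between 45nm and ~11nm, which is the counterpoint to doing it "NOW": each step is a distinct engineering program, not a knob you turn further.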

Sure, die shrinks: how hard can it be? The best evidence that die shrinks are just not that simple is AMD's efforts with their new Bulldozer and Llano CPUs. Although each is apparently only a straightforward shrink from 45nm to 32nm, neither of these CPUs is yet available for the desktop. Llano has just been released as a mobile part, but there's still nothing you can buy for the desktop, and that's despite Llano being in development for around three years plus. It's been on the way for so long that it's a sort of Duke Nukem of the CPU world. The last I heard of Bulldozer, the AMD development engineers were struggling to get decent performance out of it. It must be particularly galling for AMD that Intel has released the low-end, low-cost Pentium series of CPUs; check out your local price for the Pentium G620.

I suspect part of the non-obvious difficulty with die shrinks is that they need to be combined with architecture changes to derive the most benefit, and the architecture changes require new chipsets and so on. The free run that Intel has had, and continues to have, with Sandy Bridge could have been a different matter had the errant Llano been in the shops last autumn; but it wasn't, and it maybe won't be available to buy until September. That has, I suspect, given Intel more time to prep the next mid-range die shrink, their 22nm Ivy Bridge technology, and has probably pushed the Ivy Bridge release back to the spring of 2012. Given its market dominance, there is no doubt that Intel's step-by-step, or Tick-Tock, approach to CPU development has been a stunning success. For AMD, well, to date less so.

I know it sounds easy to shrink a process down, but it really isn't. When they shrink a process, they are often using new equipment, new materials, new processes, and so on. The new steps get worked on in individual labs, but putting them together and into production is a fairly long process. Deciding to put a new process into production is typically a business decision, but you also have to have the technology available to move to the next node.

There are also dependencies on other businesses, notably the equipment manufacturers, who have their own timelines for when they expect to release their technology, although they may give their bigger foundry customers some samples so they can work out some of the problems.

The real driving force is just expectation: the whole industry does all it can to satisfy Moore's law. The foundries have a rough plan for what they want, but they need to line up the equipment, get the facility work done, get the R&D work done, and put it all together to support that plan. I'm sure marketing and management would push to skip nodes if they could, but the engineering work to bring up a new process is substantial, and the technology has to catch up to the plan.

For instance, putting three new process shrinks in before 2014 is fairly impressive. Just scooping the industry on getting FinFETs working at 22nm was a fairly amazing development, so Intel has already skipped ahead of many other foundries out there. But as transistor widths drop, more and more problems pop up. Just going from 90nm to 45nm to 28nm has required a bunch of amazing advances in lithography; getting to 11nm will require a major improvement in the lithographic light source, or higher-speed e-beam technology. I'm not sure what Intel has planned; I'd bet they have some contenders at different equipment companies that they're sizing up, but I haven't heard of many new enhancements that would let them get to 11nm easily. Of course, having FinFETs working this early gives them a big leg up, but there's still going to be a huge pile of work to do to get those chips working, with good enough yield to make money.
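To put some rough numbers on the light-source problem: the Rayleigh criterion puts the single-exposure resolution limit at about k1·λ/NA. With the commonly quoted values for current tools (193 nm ArF immersion, NA ≈ 1.35, k1 ≥ 0.25; my assumed textbook figures, not anything from Intel), the limit is nowhere near 11 nm:

```python
# Rayleigh criterion: minimum printable half-pitch ~ k1 * wavelength / NA.
# Values below are standard textbook figures for 193 nm immersion tools.

def min_half_pitch(wavelength_nm, numerical_aperture, k1=0.25):
    """Approximate single-exposure resolution limit in nm."""
    return k1 * wavelength_nm / numerical_aperture

limit = min_half_pitch(193, 1.35)
print(f"193 nm immersion single-exposure limit: ~{limit:.0f} nm half-pitch")
```

That ~36 nm half-pitch is why features much smaller than that need multiple patterning, or a much shorter wavelength such as EUV at 13.5 nm; hence the "major improvement in the lithographic light source".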

On the other hand, this could just be some marketing hype. It wouldn't be the first time that a foundry hit big yield problems with new technology nodes, and from some marketing slides it's hard to know where they are in solving the problems it will take to bring up a new process. But Intel has a pretty good track record, good luck to them!

But by then it will not be important. Hardware capability is outpacing software burden on the desktop.

Yes, the whole IT industry is worried that people no longer need more power each year. Virtualization was cutting server sales, and now SSDs and clouds are doing the same, since no commonly used applications push hardware limits.

ces wrote:

Core 2 Duo has all the horsepower most people need. That was two generations ago. MS bloat just isn't what it used to be.

Even "Core" (non-2, from Jan 2006) is not so slow that normal people notice. Give them an SSD with that Core computer, and they will think it is a new super-fast machine.

My Micron C300 SSD really boosted the speed of my C2Q. I don't know how many generations of CPUs I will skip, but I might get a larger SSD.

hall1k wrote:

If they have them planned, doesn't that kind of imply that they already know they'll be able to shrink the dies? Why not shrink them all the way down to 11nm or less NOW, and promptly blow all of their competition right out of the water?

They know that they will be able because they have many thousands of engineers that are working hard on the problems. They also know that they will spend a billion dollars (per fab) buying equipment that has not been built and is in the process of being designed at other companies (like Applied Materials) with feedback from Intel's people.

In short, they are working on this, but it needs a few more years of work by many tens of thousands of people. They would be rushing that work (as they did in '99-'07), if AMD was beating them, but as things stand they are taking their time and milking us for our cash at each step.

Even "Core" (non-2, from Jan 2006) is not so slow that normal people notice. Give them an SSD with that Core computer, and they will think it is a new super-fast machine.

Amen to that. The only reason I invested in a core i5 2500k (currently en route from newegg) was for scientific computing (I'm a grad student in bioinformatics). I'm currently running an athlon x2 7750BE for my work, but the machine learning algorithms and such take like a day or two to run (and every time I find another bug I gotta run them again). So anything that helps cut down that run time is more than welcome.

On another note, I ordered an Antec 300 case for it, but apparently they're sending me a 900 instead. SCORE!

I just upgraded a machine from an i5-750 on P55 to an i5-2500K on Z68. The bottom line is that, clock for clock, Sandy Bridge is only a little better than Lynnfield, but Sandy Bridge will achieve much higher clock speeds. There are a few things happening here: SB clocks high at reduced voltages, and the smaller die generates less heat to start with. I have had experience with three SB uATX motherboards and am considering doing a mini roundup of them.

[...] I read that it would have a "Massive increase in clock for clock single threading performance compare to Sandy Bridge(2.5x) and Ivy Bridge(1.9x)." on wikipedia. Here is the interesting part. I went back to look at this again.... and it was gone! [...]

Even the caches are getting scrubbed [...]

I know this is an old question and we know plenty about Haswell by now, but I think it's still worth answering the original question. From all my research on CPU microarchitecture I have been able to work out what happened with that Wikipedia article. You can read it all here, but here's the summary.

By looking through the entire history of the edits of that Wikipedia article, I found:

In November 2008 a link was made to a PC Watch article featuring a "Performance/Core" chart that implies that Intel cores would gain about 50% performance per year through the Sandy Bridge and Haswell architectures.

In August 2009 a connection to Larrabee was made, and that information was removed in March 2010, but the link to the original source remained.

Anyone trying to check the sources then would have found several PC Watch articles which make it look like Haswell would have about twice the performance of Sandy Bridge and 512-bit-wide vector units.

The information you noticed being "scrubbed" was added late in April 2011, and removed a few days later. Since all of the sites you noticed are just copies of Wikipedia articles, it is no surprise that removing the information from Wikipedia caused it to be (automatically) removed from all the other sites.

You have to be careful about information from the internets, especially information about something that does not yet exist and is itself a moving target. The information can be accurate, but subsequent changes in product trajectory can make information that once was true, no longer so.
