26 Comments

So, no mention of Hyperthreading so far.
It seems reasonable to assume that Intel has either found a way to temporarily deactivate idle execution units, or has implemented a greatly improved out-of-order engine able to keep all four integer units busy. Considering the increase in power draw that additional execution units cause, it seems likely that, with Conroe's emphasis on performance per watt, Intel felt they could widen the core without sacrificing efficiency.
So, how have they managed this?

In the past, I've read that there are two main reasons for doing SMT. One is that you have a deep pipeline and want to keep the penalty of mispredicted branches and stalls to a minimum. The other is if you have a really wide core and lots of execution units, and you want to keep them filled. I'd say there's still a reasonable chance that we'll see some form of SMT in Conroe, though it could be like the original HTT where it sits inactive initially and Intel only turns it on after further testing. (/speculation)

The number of instructions active in the pipeline at any one time is determined by the issue rate and the pipeline length.
Although the P4 had three integer units, its decode rate was actually quite low, as a result of its single decoder (and Trace Cache).
Now that we know Conroe is a four-issue design, it could well have more instructions in flight than the P4, with deeper re-order buffers to extract enough ILP from the instruction stream to keep its execution units busy.
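The relationship the comment above leans on (issue rate times pipeline length bounds the instructions in the pipeline) can be put in quick back-of-envelope form. This is an illustrative sketch, not from the article: the stage counts are the ones quoted in this thread, while the issue widths are my assumptions (roughly 3 µops/cycle from the P4's trace cache, 4-wide for Conroe, 3-wide for the Athlon 64).

```python
# Rough ceiling on instructions simultaneously in the pipeline,
# in the spirit of Little's law: issue width x pipeline depth.
def max_in_flight(issue_width, pipeline_stages):
    """Crude upper bound on in-flight instructions for an in-order pipe."""
    return issue_width * pipeline_stages

prescott = max_in_flight(3, 31)  # P4 Prescott: ~3 uops/cycle (assumed), 31+ stages
conroe   = max_in_flight(4, 14)  # Conroe: 4-issue, ~14 stages
athlon64 = max_in_flight(3, 12)  # Athlon 64: 3-issue (assumed), 12 stages

print(prescott, conroe, athlon64)  # 93 56 36
```

Note that by this crude measure Conroe's shorter pipeline gives a *smaller* product than Prescott's, which is exactly why the comment points to deeper re-order buffers: in an out-of-order core it is the ROB capacity, not the pipeline length alone, that bounds how many instructions can be in flight.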

Nah, this is a 65nm process; we're looking at low-to-mid 2 GHz for the mobile chips and mid-to-high 2 GHz for the desktop revisions. As time goes on, they should also breach 3 GHz on the desktop parts.

I really thought Intel would have put that in this generation. I think the two cores will be beating each other up for the memory controller in peak usage scenarios, even with a 1066 MHz FSB and dual-channel DDR 667.
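The contention worry can be made concrete with a quick peak-bandwidth estimate (my arithmetic, not the commenter's; it assumes the standard 64-bit FSB and 64-bit-per-channel memory widths): the shared front-side bus has less peak bandwidth than the dual-channel memory behind it, so both cores funnel through the narrower link.

```python
# Peak bandwidth in GB/s (1 GB = 10^9 bytes) for a bus doing
# N million transfers/s at a given width, across one or more channels.
def peak_bw_gbs(megatransfers_per_sec, bytes_per_transfer, channels=1):
    return megatransfers_per_sec * 1e6 * bytes_per_transfer * channels / 1e9

fsb = peak_bw_gbs(1066, 8)              # 1066 MT/s FSB, 64-bit wide
ram = peak_bw_gbs(667, 8, channels=2)   # dual-channel DDR 667, 64-bit per channel

print(round(fsb, 1), round(ram, 1))     # 8.5 10.7
```

So the two cores share roughly 8.5 GB/s of bus bandwidth while the memory subsystem could theoretically supply about 10.7 GB/s, which is the gap an on-die memory controller would close.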

quote:The basic integer pipeline appears to be 14 stages long, making it a significant decrease from the 31+ stage pipeline in Prescott and the 12 stage pipeline in the Athlon 64.

No, you're wrong; it actually says this: "The basic integer pipeline appears to be 14 stages long, making it a significant decrease from the 31+ stage pipeline in Prescott and a slight increase from the 12 stage pipeline in the Athlon 64."

An extremely predictable direction; I actually yawned in the middle... maybe because it's time for my coffee, or maybe the details just do not excite.

All-round very boring.

I wanted to see a new architecture that can dazzle.
- a proper vector co-processor similar to what the PPC970FX or the Xbox 360 CPU have.
- hyperthreading done properly, like what IBM does with the Power5 series.
- on-die memory / PCI-e controller

***YAWN***

I want to see what AMD's next-gen architecture will offer. They seem to be more on the cutting edge.

Well, AMD can already produce quad cores, since the SRI (System Request Interface) on current A64s actually has four ports for cores. If you mean who will be first to market, I'd guess AMD, as long as they can pull off a timely 65nm shrink... it always comes back to that.

Considering how long it has taken AMD to get into the swing of the 90nm process, I would be very surprised if they beat Intel to 65nm. Intel has the size and the money to make ridiculous investments in new process technologies, and I think that will be enough to keep them ahead, in spite of the AMD/IBM partnership.

LOL, contrary to wild speculation? That article was one of Nicholas Blachford's saner predictions. I mean, it's more down to earth than his anti-graviton engine for faster-than-light travel or his "the Cell will redefine the chip industry" garbage.