The same slide deck details side-by-side projections of Nehalem with other AMD and Intel processors

George Ou estimates the floating point and integer performance by extrapolating datapoints on the slide (Source: ZDNet)

Sun confirms what Intel has been dying to tell us, at least off the record

Late last month in Austria, Intel presented Sun with roadmaps discussing details of its upcoming server platforms, including the fairly secret Xeon Dunnington and Nehalem architectures. Unfortunately for some, this presentation ended up on Sun's public web server over the weekend.

Dunnington, Intel's 45nm six-core Xeon processor from the Penryn family, will succeed the Xeon Tigerton processor. Whereas Tigerton is essentially two 65nm Core 2 Duo processors fused on one package, Dunnington will be Intel's first Core 2-family processor with three dual-core banks.

Dunnington includes 16MB of L3 cache shared by all six processors. Each pair of cores can also access 3MB of local L2 cache. The end result is a design very similar to the AMD Barcelona quad-core processor; however, each Barcelona core contains 512KB L2 cache, whereas Dunnington cores share L2 cache in pairs.
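Going by the figures above, the two designs can be compared on effective L2 cache per core. A minimal sketch using only the numbers quoted in this article (illustrative arithmetic, not vendor data):

```python
# Cache figures as quoted in the article (illustrative comparison only).
dunnington = {"cores": 6, "l3_shared_mb": 16, "l2_bank_mb": 3}  # one 3MB L2 bank per core pair
barcelona = {"cores": 4, "l2_per_core_kb": 512}                 # private L2 per core

# Effective L2 available per core: a Dunnington pair shares its 3MB bank.
dunnington_l2_per_core_kb = dunnington["l2_bank_mb"] * 1024 / 2
barcelona_l2_per_core_kb = barcelona["l2_per_core_kb"]

print(dunnington_l2_per_core_kb)  # 1536.0 KB per core (shared in pairs)
print(barcelona_l2_per_core_kb)   # 512 KB per core (private)
```

So each Dunnington core sees roughly three times the L2 of a Barcelona core, with the caveat that it must share that bank with its partner core.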

To sweeten the deal, all Dunnington processors will be pin-compatible with Intel Tigerton processors, and work with the existing Clarksboro chipset. Intel's slide claims this processor will launch in the second half of 2008 -- a figure consistent with previous roadmaps from the company.

The leaked slide deck also includes more information about Intel's Penryn successor, codenamed Nehalem. Nehalem is everything Penryn is -- 45nm, SSE4, quad-core -- and then some. For starters, Intel will abandon the front-side bus model in favor of QuickPath Interconnect, a serial bus similar to HyperTransport.

Perhaps the most ambitious aspect of Nehalem? For the first time in 18 years, Intel will pair its processor cores with on-die memory controllers. AMD made the switch to on-die memory controllers in 2003, and for the next three years its processors were almost unmatched by Intel's offerings. The on-die memory controller can't come a moment too soon: Intel will also roll out tri-channel DDR3 with Nehalem, and all that extra bandwidth can only be put to use if there are no bottlenecks.

As noted by ZDNet blogger George Ou, the slides contain some rudimentary benchmarks for Nehalem and other publicly available processors. From this slide deck, Ou estimates Nehalem's SPEC*fp_rate_base2006 at 163 and its SPEC*int_rate_base2006 at 176. By contrast, Intel's fastest Harpertown Xeon X5482 pulls a measly 80 SPECfp_rate_base2006 and 122 SPECint_rate_base2006.

The Nehalem processor more than doubles the floating point performance of Intel's current Penryn-family processors. Ou adds, "We'll most likely know by the end of this year what the actual scores are, but I doubt they will be more than 5% to 10% off from these estimated projections."
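The "more than doubles" claim can be checked directly against the quoted rate_base2006 numbers. A quick sketch (Nehalem's figures are Ou's extrapolated projections, not measured results):

```python
# SPEC rate_base2006 figures quoted in the article.
nehalem = {"fp": 163, "int": 176}          # projected (Ou's estimate)
harpertown_x5482 = {"fp": 80, "int": 122}  # published

for bench in ("fp", "int"):
    ratio = nehalem[bench] / harpertown_x5482[bench]
    print(f"{bench}: {ratio:.2f}x")
# fp: 2.04x
# int: 1.44x
```

The projected floating point rate is just over double Harpertown's, while the integer rate improvement is a more modest ~44%.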

It's important to note that these estimates are not actual benchmarks. Intel's document states, "Projections based on *SPECcpu2006 using dual socket Intel Xeon 5160 Processor performance as the baseline." As discussed on DailyTech before, simulated benchmarks offer little substance compared to the real deal.

The reason for not going on-die with a memory controller has nothing to do with the bus. The reason has to do with compatible memory types. Today, you can buy a brand new socket 775 X38 motherboard with DDR3 slots and drop in a Pentium 4/D, Celeron, Core 2 Conroe, etc... that is pin and BIOS compatible.

Because the memory controller is in the Northbridge, the motherboard determines what type of memory is compatible.

However, no AMD processor manufactured today is DDR3 compatible, and none of them ever will be. The memory controller on the CPU has to be upgraded for each new memory tech, and AMD simply doesn't do this on a whim. Remember how long it took just to get away from Registered DDR or up to DDR2. Supporting a new memory type requires all-new steppings for each processor.

You're saying that Intel didn't use an ODMC because it was concerned that its customers may have to upgrade more often? Intel may offer that reason to the public, but I doubt that's the real reason for not including the ODMC previously. My best guess is that with the NetBurst architecture Intel's primary motivation (back when the marketing dept. made the decisions instead of the engineers) was to increase the CPU clock speed as much as possible. Intel originally thought it could get to 10GHz speeds with NetBurst. By keeping the memory controller separate from the CPU, Intel probably thought it could scale CPU speeds better.

If your reason for not going with the ODMC in the past is accurate, why is Intel planning on using it with its future architectures? Has Intel had a change of heart and now doesn't care if its customers purchase CPUs bound to certain memory types?

I agree with you that it might be a marketing tool to support their cause for not keeping the MC on die. But it's a matter of fact that if your CPU has a 4-year cycle and the northbridge has a 2-year cycle, you want overall system performance to go up with newer tech.

That aside, I don't think your explanation is even close to the one given earlier. Why would CPU frequency not scale well if you put in an IMC? It will surely be in a separate clock well with no correlation to CPU frequency scaling, though you would be wasting precious high-speed silicon on a slower IMC. :)

I agree with the FSB hypothesis. Being a 90% market holder (back then), with scores of people developing programs, there's a lot of pressure on Intel to change its architecture. I don't blame the FSB: the sweet FSB took the first Core 2 Duo to Core 2 Quad in less than 3 months, a year before AMD's Phenom(enal???)... which, BTW, in its highest market version performs under the Q6600 (not even Penryn! :( )...