If you look carefully enough, you may notice that things are changing. It first became apparent shortly after the release of Nehalem. Intel bifurcated the performance desktop space by embracing a two-socket strategy, something we'd never seen from Intel and only once from AMD in the early Athlon 64 days (Socket-940 and Socket-754).

LGA-1366 came first, but by the time LGA-1156 arrived a year later it no longer made sense to recommend Intel's high-end Nehalem platform. Lynnfield was nearly as fast and the entire platform was more affordable.

When Sandy Bridge launched earlier this year, all we got was the mainstream desktop version. No one complained because it was fast enough, but we all knew an ultra high-end desktop part was in the works: a true successor to Nehalem's LGA-1366 platform for those who had waited all this time.

Left to right: Sandy Bridge E, Gulftown, Sandy Bridge

After some delays, Sandy Bridge E is finally here. The platform is actually pretty simple to talk about: there's a new socket (LGA-2011), a new chipset (Intel's X79) and of course the Sandy Bridge E CPU itself. We'll start with the CPU.

LGA-2011, the new socket

For the desktop, Sandy Bridge E is only available in 6-core configurations at launch; early next year we'll see a quad-core version. I mention the desktop qualification because Sandy Bridge E is really a die-harvested Sandy Bridge EP, Intel's next-generation Xeon part:

Sandy Bridge E die

If you look carefully at the die shot above, you'll notice that there are actually eight Sandy Bridge cores. The Xeon version will have all eight enabled, but the last two are fused off for SNB-E. The 32nm die is absolutely gigantic by desktop standards, measuring 20.8mm x 20.9mm (~435mm²). Sandy Bridge E is bigger than most GPUs. It also has a ridiculous number of transistors: 2.27 billion.
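The ~435mm² figure follows directly from the quoted dimensions; a quick sanity check (the small rounding gap is just Intel rounding up):

```python
# Sandy Bridge E die dimensions as quoted by Intel, in mm
width, height = 20.8, 20.9

area = width * height  # die area in mm^2
print(round(area, 2))  # 434.72, which Intel rounds to ~435mm^2
```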

Around a quarter of the die is dedicated just to the chip's massive L3 cache. Each cache slice has increased in size compared to Sandy Bridge. Instead of 2MB, Sandy Bridge E boasts 2.5MB cache slices. In its Xeon configuration that works out to 20MB of L3 cache, but for desktops it's only 15MB. That's just 1MB shy of how much system memory my old upgraded 386-SX/20 had.
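The L3 totals are simply slice count times slice size, with one slice per core:

```python
SLICE_MB = 2.5  # L3 slice size on Sandy Bridge E (up from 2MB on Sandy Bridge)

xeon_l3 = 8 * SLICE_MB     # all eight cores (and slices) enabled on the Xeon
desktop_l3 = 6 * SLICE_MB  # two cores and their slices fused off for SNB-E
print(xeon_l3, desktop_l3)  # 20.0 15.0
```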

CPU Specification Comparison

| CPU | Manufacturing Process | Cores | Transistor Count | Die Size |
|-----|----------------------|-------|------------------|----------|
| AMD Bulldozer 8C | 32nm | 8 | 1.2B* | 315mm² |
| AMD Thuban 6C | 45nm | 6 | 904M | 346mm² |
| AMD Deneb 4C | 45nm | 4 | 758M | 258mm² |
| Intel Gulftown 6C | 32nm | 6 | 1.17B | 240mm² |
| Intel Sandy Bridge E (6C) | 32nm | 6 | 2.27B | 435mm² |
| Intel Nehalem/Bloomfield 4C | 45nm | 4 | 731M | 263mm² |
| Intel Sandy Bridge 4C | 32nm | 4 | 995M | 216mm² |
| Intel Lynnfield 4C | 45nm | 4 | 774M | 296mm² |
| Intel Clarkdale 2C | 32nm | 2 | 384M | 81mm² |
| Intel Sandy Bridge 2C (GT1) | 32nm | 2 | 504M | 131mm² |
| Intel Sandy Bridge 2C (GT2) | 32nm | 2 | 624M | 149mm² |

*Update: AMD originally told us Bulldozer was a 2B-transistor chip. It has since clarified that the 8C Bulldozer is actually 1.2B transistors. The die size is still accurate at 315mm².

At the core level, Sandy Bridge E is no different than Sandy Bridge. It doesn't clock any higher, L1/L2 caches remain unchanged and per-core performance is identical to what Intel launched earlier this year.

Those of you buying today only have two options: the Core i7-3960X and the Core i7-3930K. Both have six fully unlocked cores, but the 3960X gives you a 15MB L3 cache vs. 12MB with the 3930K. You pay handsomely for that extra 3MB of L3. The 3960X goes for $990 in 1K unit quantities, while the 3930K sells for $555.

The 3960X has the same 3.9GHz max turbo frequency as the Core i7-2700K, but only with one or two cores active; with five or six cores active, max turbo drops to a still respectable 3.6GHz. Unlike the old days of choosing between many-core and few-core CPUs, there's no performance tradeoff when you buy SNB-E: thanks to power gating and turbo, you get pretty much the fastest possible clock speeds regardless of workload.

Early next year we'll see a Core i7-3820, priced around $300, with only four cores and a 10MB L3 cache. The 3820 will only be partially unlocked (max overclocking multiplier = 4 bins above max turbo).
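A "bin" here is one multiplier step, i.e. 100MHz at the platform's stock 100MHz base clock. A minimal sketch of what the partial unlock allows (the 38x max turbo multiplier below is a hypothetical example, not a confirmed 3820 spec):

```python
BCLK_MHZ = 100  # Sandy Bridge E base clock; one bin = one multiplier step = 100MHz


def max_unlocked_freq(max_turbo_multiplier, extra_bins=4):
    """Highest multiplier-only frequency (GHz) a partially unlocked part
    allows: the max turbo multiplier plus a fixed number of extra bins."""
    return (max_turbo_multiplier + extra_bins) * BCLK_MHZ / 1000


# Hypothetical example: a chip with a 38x max turbo (3.8GHz) could be
# pushed to 42x (4.2GHz) through the multiplier alone.
print(max_unlocked_freq(38))  # 4.2
```

A fully unlocked part like the 3960X or 3930K has no such multiplier ceiling.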


163 Comments

I want native USB3 plus significantly more PCIe lanes so I can run two cards at a full x16 and a decent RAID controller at x4 without having to pay over $300 for the mobo. Oh, and for god's sake say goodbye to the PCI slots please, while improving the motherboard layout so dual-slot cards don't cover any available PCIe slots.

Bullshit like "Three PCIe x16 slots!!!" (running at x8, x8, x2) makes me sick. The latest Intel motherboards were rather underwhelming in terms of features.

It really seems as if Intel wants to kill off this high-end enthusiast desktop segment completely; what we have here is a by-product of their server market and perhaps the last of a dying breed. The first sign was the change to multiple sockets and locking clock frequency on their non-enthusiast parts. SB-E also comes with a huge increase in platform cost over Nehalem that the performance gain over SB doesn't really justify.

$500 for the entry-level SB-E CPU and $300+ for the motherboard is going to be a bitter pill to swallow for those used to the $200-$300 entry-level Nehalem CPUs and $200 boards. I know there's going to be a 4-core part that may be closer to that price point sometime next year, but again, one has to ask if it will be worthwhile over a 2600K at that point, especially since the K is unlocked and the SB-E part isn't.

Also factor in that PCIe 3.0 is going to be a negligible benefit of the chipset, at least until ATI/NVIDIA's next-gen GPUs make use of the extra bandwidth. You also don't get any additional SATA or USB support compared to last-gen SB products... it's really quite disappointing for a chipset that was held off this long.

Overall the performance looks good, but at this price and size... is this the path CPUs are headed down? Huge and hot like GPUs? I mean, we thought Bulldozer was massive; SB-E is just as big, but at least it delivers when it comes to performance, I guess. I can see why Intel wanted to bifurcate their server/desktop business, but I think the unfortunate casualty will be the high-end enthusiasts who don't want to pay e"X"treme prices for the privilege.

For the most part AMD's Bulldozer did give us 2500K speeds, and the multithreaded performance is there. This CPU is the fastest we've seen, but it certainly doesn't blow one away in comparison to the 2600K. The AMD CPU is criticized for one thing, really: its single-threaded performance, which is no better than that of its cheapest processors.

A minor performance boost in most real-world scenarios, and yet a massive increase in cost and power consumption...

This whole chip is basically a big kludge. Take an 8-core Xeon and disable a quarter of the chip, slap a "consumer" label on it, and call it a day? That's not even trying, that's just lazy.

This chip is 50% faster than SNB in heavily multithreaded applications because it has 50% more cores. A much more interesting chip would have taken the existing Sandy Bridge design and increased the core count, rather than taking a Xeon and disabling parts of it.

Actually, isn't that basically what they did with this? They took eight SB cores, threw them on the die, took out the IGP and dropped in a massive L3 cache. I mean, if you're going to be building a gaming rig based on SB-E, would you actually care about the IGP at that point, when you've got SLI or X-Fire GPUs?

I understand how this move would make some people feel like they've been slapped in the face by Intel: years as loyal customers, and this time around they get a "crippled" part to call a flagship. But look at the state of the high-end CPU market. At this point Intel is dominating, and there is really no incentive for them to do a completely different chip when a "crippled" Xeon can run circles around the best AMD has to offer. From their point of view this is the most economical way to do business. But yes, meh indeed when you already have an i7-2600K running smoothly.

Or will there be a desktop version again without ECC support and a workstation Xeon version that has it (980X/990X vs. X5680 Xeon)? I'll take ECC RAM over faster RAM with eight populated slots, please. Larger and larger memory amounts mean a greater likelihood of bit errors, yet for two generations of Intel CPUs there has been no ECC RAM support on the CPU's memory controller.

I understand the need to keep the news positive for AMD; competition and all. However, repeatedly stating over time that they are competitive on price is kind of misleading in the grand scheme. Each new CPU architecture from Intel yields double-digit performance gains (lately). AMD's are often delayed, and in BD's case yield backwards results in many benchmarks.

The truth is that clock for clock, given as many transistors and as much power and heat, AMD is simply not competitive.

The ONLY reason one can say that their chips are competitive on price is that they have NO other choice but to sell them at that price. AMD looks at where its new CPUs land in performance relative to Intel's lineup and prices accordingly. With as many R&D dollars, transistors, etc. as go into each FX-8150, the flagship CPU should at least be competing with the 2600K, 990X, etc. of the world, forcing Intel either to lower the $1000 tag on SNB-E or allowing AMD a $1000 alternative of its own.

However, all we get from AMD are mediocre, late-to-market attempts to "catch up". My point: AMD needs an infusion of new engineers and/or a new approach. A completely new idea/redesign.

Let's face it, the x86 market is now Intel x86. Perhaps AMD should take what it knows in processor design and embrace a new idea: maybe a mixed ARM/x86 part, or an enhanced 64-bit ARM for desktop PCs. Something to stand out and deliver on. In pure x86, AMD is falling further behind; BD did not even catch up to 4-core SNB, and Ivy Bridge is being held back because there's no real competition for it. The landscape suggests AMD will be out of the desktop CPU space within a year or two, or at least relegated to the Cyrix status of the 2000s.

Your point about AMD's prices doesn't make any sense. You're saying that AMD is not a good value because it sells its chips at a price that makes them a good value, rather than making faster chips and selling them for more money like Intel does?!?

Since when does "a good price:performance ratio" not equal "a good value" just because the CPU vendor doesn't have high (or any!) profit margins?