Ministry of Innovation

Intel walks across Sandy Bridge, goes up Tunnel Creek

The Intel Developer Forum is in full swing, with Intel dropping fresh details …

SAN FRANCISCO — Processor industry watchers who have been waiting for the Intel Developer Forum because they're dying for a deep dive on the microarchitectural details of the company's 32nm Sandy Bridge architecture are still waiting. Intel's top brass delivered more jargon and eye candy than they did technical dirt, but Sandy Bridge is still worth talking about in light of Intel's keynote. And then there's the Tunnel Creek announcement, which sees Intel taking aim right at the embedded market.

The 32nm process node marks the first time that Intel can fit many of the main parts of a PC on a single slice of silicon. The first Sandy Bridge parts will integrate four CPU cores, a GPU, and northbridge hardware (display, I/O, and a memory controller) onto a single piece of silicon. (I didn't bring a camera to the keynote, so you'll have to hit Anandtech for pictures.) This is indeed a major milestone for the PC, and it will make Sandy Bridge-based systems faster and cheaper than even their 32nm predecessors.

The few solid Sandy Bridge details that Intel has confirmed so far are that all of the major blocks on the chip—the CPU cores, the GPU, and the northbridge—will be connected by a high-bandwidth ring bus, and that they'll all share a common L3 cache.

We saw this ring bus first with Larrabee, where Intel used it to link the part's x86 cores together. It's very fast and relatively simple, but also fairly power hungry when compared to, say, switched tiles. Overall though, it's sure to be a good fit for the low core count that Intel is targeting with this generation.
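To see why a ring is a good fit at low stop counts, consider a back-of-the-envelope model (mine, not Intel's): on a bidirectional ring, the average number of hops between stops grows roughly linearly with the number of stops, so a chip with only a handful of ring stops keeps every trip short.

```python
# Hypothetical sketch, not Intel's actual interconnect model: average
# shortest-path hop count between stops on a bidirectional ring, to
# illustrate why rings are cheap at low stop counts but scale poorly.

def avg_ring_hops(n_stops):
    """Average shortest-path hop count over all ordered pairs of distinct stops."""
    total = 0
    for src in range(n_stops):
        for dst in range(n_stops):
            if src != dst:
                d = abs(src - dst)
                total += min(d, n_stops - d)  # traffic goes whichever way is shorter
    return total / (n_stops * (n_stops - 1))

# A Sandy Bridge-class chip has few ring stops (four cores plus the GPU
# and system agent is around six); doubling the stop count roughly
# doubles the average trip length.
for n in (6, 12, 24):
    print(n, round(avg_ring_hops(n), 2))
```

With six stops the average trip is under two hops; at 24 stops it has ballooned past six, which is one reason many-core designs move to meshes or switched fabrics instead.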

The fact that the CPU and GPU are not only connected so closely, but also share a common L3 cache, is no doubt a major factor in Sandy Bridge's stellar graphics performance. Some early preview numbers show that Sandy Bridge's integrated graphics performance is very, very good compared to the other IGP options on the market. The Sandy Bridge GPU even gives some of the entry-level discrete graphics cards a run for their money.

The ability to bundle this level of graphics performance on a quad-core CPU die is, again, a major milestone for the PC, and it will significantly drive down costs and platform-level power consumption. Large IT buyers will especially love this, because they'll finally get best-in-class IGP performance by default on all of their machines, and (eventually) for less money.

Up Tunnel Creek

As we pointed out in our previous coverage of the part, what's special about Tunnel Creek is that it combines a CPU, GPU, memory controller, and PCIe bus on the same chip. The presence of the PCIe bus in particular is important, because it lets the part gluelessly integrate with a whole host of application-specific circuits. In other words, many integrated circuits that you would want to use in an embedded system talk PCIe, and for a normal Moorestown part you'd need an I/O hub chip in between that IC and the main SoC. This isn't the case with Tunnel Creek, however, because it speaks PCIe natively.

A great example of this is an Atom-based NAS box. With the Atom E600, the main SoC could talk directly to the RAID controller on the PCIe bus, without going through a bridge chip. By cutting out the extra chip, the NAS would be cheaper, cooler, and more reliable.

While Intel's press kit on the E600 talks up the part's potential as the basis for an in-car infotainment platform, Intel has also mentioned that it is targeting the Chinese telecom market with this chip—the company is hoping to get a slice of the Chinese telecom build-out that the country is undertaking to stimulate its economy.

Looks like we're going to be in for interesting times in ultralight laptops, tablets and storage devices.

You can definitely say that again.

The bit about the IGP being able to rival the entry-level discrete cards made me especially happy. The low end in graphics has been FAR too low for FAR too long (which has pretty much been entirely Intel's fault). With any luck, Intel's (graphics) engineers have been beaten with a clue-by-four and the IGP isn't /just/ better because it's so close to the CPU... or maybe Intel should buy out the PowerVR division of Imagination Technologies so they have some people who know how to build a GPU (talk about a slap in the face).

In addition to making the PC less expensive, this could make it smaller. Obviously, since an Intel Mac is also an Intel PC that can run Windows, it's already possible to make a PC like the Mac Mini. But PC buyers generally prefer desktop PCs that they can put expansion cards in.

I don't think that the GPU being able to compete with discrete graphics cards will be useful at all to enthusiasts. Even today's IGPs can render everything in Win7 properly, so are we to expect actual playability in games with the eye candy turned on? Or does it just allow us to delay replacing a family member's computer a year longer?

Since the integrated GPU is going to suck, could we instead use a separate GPU, while using the integrated GPU as a floating-point co-processor?

Looking at the linked Anandtech preview of the chip, it performs VERY well for an IGP (better than any other IGP on the market), so I wouldn't say it sucks. It looks like it would be very good for a notebook solution or other low heat/energy applications, like an HTPC. What may suck is that the upgrade path of the chip would be limited, as it would be on its own socket type. Of course, few folks upgrade the CPU or GPU in a notebook anyway.

"Since the integrated GPU is going to suck, could we instead use a separate GPU, while using the integrated GPU as a floating-point co-processor?"

The GPU in Sandy Bridge is fixed-function, which means it's not programmable: no OpenCL or DirectCompute.

Ah well. At the very least it'll force the low-end discrete cards to become better than the piles of crap they currently are (low-end discrete = piles of crap, IGP = they should have sent a poet). I guess it was probably too much to hope that they could be used for OpenCL... Why the hell DOES Intel suck so badly at graphics? It's not like they don't have the resources. Do they just not care?

@MrClock Generally you can't replace the GPU on a laptop/notebook anyway (at least all the ones I've seen are integrated directly onto the mainboard, discrete or IGP)... so this, if anything, would improve the upgrade path for laptops (since the CPU would be sitting on a standardized socket). But yeah, it's not like anyone really upgrades them.

"The fact that the CPU and GPU are not only connected so closely, but also share a common L3 cache, is no doubt a major factor in Sandy Bridge's stellar graphics performance. Some early preview numbers show that Sandy Bridge's integrated graphics performance is very, very good compared to the other IGP options on the market. The Sandy Bridge GPU even gives some of the entry-level discrete graphics cards a run for their money."

So, it has "stellar performance," but only gives "entry-level discrete graphics cards a run for their money"? In non-marketing terms, this means it's about as fast as a current low-end graphics card. Once it's actually released, it'll be significantly slower than newer low-end cards.

"The ability to bundle this level of graphics performance on a quad-core CPU die is, again, a major milestone for the PC, and it will significantly drive down costs and platform-level power consumption. Large IT buyers will especially love this, because they'll finally get best-in-class IGP performance by default on all of their machines, and (eventually) for less money."

Also a stupid marketing statement. Right now, Intel makes the only desktop CPUs with an IGP. And since this version is faster than Intel's previous versions, it's automatically "best in class." Intel throwing the GPU into the CPU package has never been about increasing value to the customer, but about getting more of the system purchase price into Intel's pocket.

For embedded applications, I'm disappointed that Tunnel Creek still requires a "platform controller hub" chip for USB, Ethernet, I2C, SPI, UART, etc. What isn't disappointing is the Stellarton, which is a package containing Tunnel Creek and an Altera FPGA.

"Since the integrated GPU is going to suck, could we instead use a separate GPU, while using the integrated GPU as a floating-point co-processor?"

"The GPU in Sandy Bridge is fixed-function, which means it's not programmable: no OpenCL or DirectCompute."

It was before my time, but I believe people started writing GPGPU code back in the non-programmable, fixed-pipeline era. It required people to cleverly recast the functions they wanted to compute so that the operations a fixed-function GPU could perform corresponded to the calculations they actually wanted to run. Obviously, it would have been a real dog to program in terms of algorithms, and I seriously doubt anyone would have the stomach to continue on in this manner after the advent of CUDA and then OpenCL.

Also to the original quote, why not do the same but in reverse. Use the separate GPU as your co-processor and the on die GPU for graphics. Problem solved.

Sorry, but playable frame rates on lowest graphic settings at 1024x768 doesn't cut it. Why does anyone care if their GPU is faster if it still doesn't play games at decent quality. Those Anandtech screenshots are making my eyes bleed, and they're not even the most graphics-intensive games of this generation.

It's not programmable for scientific computing, and even old iGPUs do fine with Aero and basic graphics, so all it's doing is wasting power.

If Ars could do a sound comparison of Tunnel Creek and AMD's Bobcat-based chips, it would be very nice to read. However, repeating marketing crap, or the testing of writers who are in Intel's pockets, does not leave one seeing Ars as a credible publication.

Of course the hardware is not available yet, but the point remains that only released hardware running released software really counts. The same could be said about Sandy Bridge, though right now my interests are in low-power systems.

I guess the problem I have is that these articles sound way too much like the release of marketing info that has been carefully crafted to avoid detailed analysis, thoughtful comparison, or a commitment to anything. In other words, garbage. Don't even get me started on early preview numbers, which are like statistics: lies and half-truths. Just to repeat: the only thing that really matters is real shipping hardware and software that the user will actually use.

90 percent of the PCs that this chip is targeted at will never play a game! Think business; boring stuff like Excel, Word, PowerPoint, IE9, and Outlook. Also, most people just surf and e-mail! The SB GPU will be all they ever need! I am sure Intel will have an "Enthusiast" chip available too! Try the i7-980X for example; it is overkill for many years to come!

Although, a person was on WoW recently while at work! He had to flip the screen real quick when someone walked in his office. Left us without a tank for a bit! Must have been working at the SEC?