Note: This preview was not sanctioned or supported by Intel in any way.

I still remember hearing about Intel's tick-tock cadence and not having much faith that the company could pull it off. Granted Intel hasn't given us a new chip every 12 months on the dot, but more or less there's something new every year. Every year we either get a new architecture on an established process node (tock), or a derivative architecture on a new process node (tick). The table below summarizes what we've seen since Intel adopted the strategy:

Intel's Tick-Tock Cadence

Microarchitecture | Process Node | Tick or Tock | Release Year
Conroe/Merom      | 65nm         | Tock         | 2006
Penryn            | 45nm         | Tick         | 2007
Nehalem           | 45nm         | Tock         | 2008
Westmere          | 32nm         | Tick         | 2010
Sandy Bridge      | 32nm         | Tock         | 2011
Ivy Bridge        | 22nm         | Tick         | 2012
Haswell           | 22nm         | Tock         | 2013

Last year was a big one. Sandy Bridge brought a Conroe-like increase in performance across the board thanks to a massive re-plumbing of Intel's out-of-order execution engine and other significant changes to the microarchitecture. If you remember Conroe (the first Core 2 architecture), what followed it was a relatively mild upgrade called Penryn that gave you a little bit in the way of performance and dropped power consumption at the same time.

Ivy Bridge, the tick that follows Sandy Bridge, would typically be just that: a mild upgrade that inched performance ahead while dropping power consumption. Intel's microprocessor ticks are usually very conservative on the architecture side, which limits the performance improvement. Being less risky on the architecture allows Intel to focus more on working out the kinks in its next process node, in turn delivering some amount of tangible power reduction.

Where Ivy Bridge shakes things up is on the graphics side. For years Intel has been able to ship substandard graphics in its chipsets based on the principle that only gamers needed real GPUs and Windows ran just fine on integrated graphics. Over the past decade that philosophy required adjustment. First it was HD video decode acceleration, then GPU accelerated user interfaces and, more recently, GPU computing applications. Intel eventually committed to taking GPU performance (and driver quality) seriously, setting out on a path to significantly improve its GPUs.

As Ivy is a tick in Intel's cadence, we shouldn't see much of a performance improvement. On the CPU side that's mostly true. You can expect a 5 - 15% increase in performance for the same price as a Sandy Bridge CPU today. A continued desire to be aggressive on the GPU front, however, puts Intel in a tough spot. Moving to a new manufacturing process, especially one as dramatically different as Intel's 22nm 3D tri-gate node, isn't easy. Any additional complexity outside of the new process simply puts the schedule at risk. That being said, Intel's GPUs continue to lag significantly behind AMD's and, more importantly, they still aren't fast enough by its customers' standards.

Apple has been pushing Intel for faster graphics for years, having no issues with including discrete GPUs across its lineup or even prioritizing GPU over CPU upgrades. Intel's exclusivity agreement with Apple expired around Nehalem, meaning every design win can easily be lost if the fit isn't right.

With Haswell, Intel will finally deliver what Apple and other customers have been asking for on the GPU front. Until then Intel had to do something to move performance forward. A simple tick wouldn't cut it.

Intel calls Ivy Bridge a tick+. While CPU performance steps forward, GPU performance sees a more significant improvement - in the 20 - 50% range. The magnitude of improvement on the GPU side is more consistent with what you'd expect from a tock. The combination of a CPU tick and a GPU tock is how Intel arrives at the tick+ naming. I'm personally curious to see how this unfolds going forward. Will GPUs and CPUs go through alternating tocks, or will Intel try to synchronize them? Will we see innovation on one side slow down as the other speeds up? Does tick-tock remain on a two-year cadence now that there are two fairly different architectures that need updating? These are questions I don't expect we'll see answered until after Haswell. For now, let's focus on Ivy Bridge.

195 Comments

I think it's a long time away from approaching 560M performance. If you're going to do any remotely serious gaming on a laptop it's still best to get a dedicated graphics card.

I'm still sticking to gaming on a tower, so these CPUs (esp. the AMD Llano) make sense for me in laptops. I don't ever see myself gaming on a laptop unless I completely get rid of the towers in my house... which won't happen anytime soon (if ever).

I felt the same way when I was shopping recently. I WANTED to buy a Llano-based notebook (inexpensive, better graphics vs. Intel). The problem is there's no such thing as a slim and light Llano. Every OEM sticks you with the same configuration: six pounds and 15.6" turd-768 resolution screen. It's bizarre.

For the sake of competition, I hope Trinity will get some better design wins.

If you look at the gaming charts, the resolution may go past x768, but the settings are on LOW and they don't give us a minimum frame rate, so the answer is: low-end, low-res is all that Llano can handle. So AMD forces the giant, heavyweight monster on you as a selling point.

I agree with you there. To get one of those "$100 mid-range GPUs" in a laptop, you need to bump up the cost by around $400 just to get a model that can take one. Most laptops currently do not have discrete GPUs.

I am glad to see that integrated graphics from both Intel and AMD can now be compared with low-end cards like the GT 520 and GT 440 without it becoming a laughing matter, and that they are actually completing the tests now. That is a rather major step. I remember some reviews of integrated graphics that resulted in a lot of either "could not complete" or "the bar is too small to fit a number on" entries.

The IGP provides the QuickSync implementation. It would be insane not to include the silicon for it on a high-end system. In addition, moving forward you can get compute work out of the GPU, so why would you ever not include it?
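To make the GPU-compute angle concrete, here's a minimal sketch (not from the article, purely illustrative) of how an application might check whether the integrated GPU actually exposes an OpenCL compute device it could offload work to. It assumes an OpenCL runtime and headers are installed; platform and device names will vary by driver.

```c
/* probe_gpu.c - list any OpenCL GPU compute devices on the system.
 * Illustrative only; assumes an OpenCL ICD/runtime is installed. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, NULL, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platforms found.\n");
        return 0;
    }

    cl_platform_id platforms[8];
    if (num_platforms > 8) num_platforms = 8;
    clGetPlatformIDs(num_platforms, platforms, NULL);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

        /* Ask each platform for GPU-class devices only. */
        cl_uint num_gpus = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 0, NULL, &num_gpus) != CL_SUCCESS
            || num_gpus == 0) {
            printf("Platform \"%s\": no GPU compute devices.\n", pname);
            continue;
        }

        cl_device_id gpus[8];
        if (num_gpus > 8) num_gpus = 8;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, num_gpus, gpus, NULL);

        for (cl_uint d = 0; d < num_gpus; ++d) {
            char dname[256] = {0};
            clGetDeviceInfo(gpus[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("Platform \"%s\": GPU device \"%s\"\n", pname, dname);
        }
    }
    return 0;
}
```

Compile against the OpenCL library (e.g. gcc probe_gpu.c -lOpenCL) and run it; on a system whose graphics driver exposes OpenCL you'd expect the integrated GPU to show up as a compute device, while systems without GPU compute support simply report no devices.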

Because gaming isn't the only thing that uses graphics cards. For instance, more and more video editors use the graphics card for video decode/encode and applying effects. So having a high-performance graphics engine to go along with the high-performance CPU can be a really nice thing.