Intel’s Haswell CPUs will fit in everything from tablets to servers

An improved GPU and lower power consumption make for a versatile chip.

Information about Intel's next-generation processor architecture, codenamed Haswell, has been leaking steadily for some time, but presentations at today's Intel Developer Forum (IDF) are finally giving us details on what to expect from the fourth-generation Core processors when they launch in 2013.

Haswell is a "tock", in Intel parlance—a completely new processor architecture manufactured using the same 22nm process and "3D" tri-gate transistors as Ivy Bridge. As with Ivy Bridge, the bulk of Intel's attentions are focused on improving graphics performance and reducing power consumption—while Haswell's optimizations will definitely make it faster than Ivy Bridge at the same clock speeds, CPU performance definitely took a back seat during Intel's Haswell-oriented keynote today.

The CPU: modest improvements in a power-efficient package

Haswell's architecture is similar to Ivy Bridge's in many ways: key technologies like Turbo Boost and Hyper-Threading are still in play, and the instruction pipeline and the L1 and L2 cache sizes remain the same.

Haswell gains its speed mostly from tweaks to existing technologies. A new version of the Advanced Vector Extensions (AVX), predictably named AVX2, potentially doubles theoretical floating-point performance over the Sandy and Ivy Bridge architectures thanks to the long-awaited addition of a fused multiply-add (FMA) instruction, and the bandwidth of both the L1 and L2 caches has been increased to feed the new extensions. Two more ports have also been added to the processor's Unified Reservation Station, allowing the execution of up to eight operations per clock cycle (up from six in Sandy and Ivy Bridge), and branch prediction and the out-of-order execution units have been improved.
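
To make the fused multiply-add concrete, here's a minimal sketch in C using the FMA intrinsics that arrive alongside AVX2. The function and file names are ours for illustration; the intrinsics themselves (`_mm256_fmadd_ps` and friends) are the real compiler interface, and the code needs a Haswell-class CPU plus the -mavx2 and -mfma compiler flags to run.

```c
/* fma_demo.c: compute out[i] = a[i] * b[i] + c[i], eight floats at a time.
 * Build (gcc or clang): cc -O2 -mavx2 -mfma fma_demo.c */
#include <immintrin.h>
#include <stdio.h>

/* Hypothetical helper; n is assumed to be a multiple of 8 for brevity. */
static void fma_multiply_add(const float *a, const float *b, const float *c,
                             float *out, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_loadu_ps(c + i);
        /* One fused instruction where Sandy/Ivy Bridge needed a separate
         * multiply and add, hence the doubled theoretical FLOPS. */
        __m256 vr = _mm256_fmadd_ps(va, vb, vc);
        _mm256_storeu_ps(out + i, vr);
    }
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {2, 2, 2, 2, 2, 2, 2, 2};
    float c[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    float out[8];
    fma_multiply_add(a, b, c, out, 8);
    printf("%.1f ... %.1f\n", out[0], out[7]); /* prints 3.0 ... 17.0 */
    return 0;
}
```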

While we don't have any hard performance numbers for Haswell just yet, these improvements (along with current rumors) suggest a CPU that is faster than Ivy Bridge, but not staggeringly so. The Haswell platform as a whole is also meant to reduce power consumption significantly—Intel has said that laptops using Haswell and its associated chipset could see up to double the battery life over Ivy Bridge. The biggest factor here is the introduction of a new power state, which Intel calls "S0ix."

Ivy Bridge-based systems are either active (in the S0 state) or in sleep or Hibernate modes (the S3 and S4 states). The S0ix power state splits the difference, keeping the system active but using only five percent as much power as Sandy Bridge systems do while idling—the benefit over the S3 and S4 states is that going from the S0ix state back to an active state is instantaneous and seamless to the user. Haswell can also switch between these power states more quickly than previous platforms, wasting less power while transitioning.
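
As an aside for Linux users: kernels from well after Haswell's launch expose which idle-suspend flavor a machine uses through /sys/power/mem_sleep, where "s2idle" corresponds to the S0ix-style connected idle described above and "deep" to classic S3. This is a hedged, Linux-specific sketch, not anything Intel demonstrated at IDF:

```c
/* mem_sleep_check.c: print the suspend modes a Linux kernel offers.
 * The kernel brackets the active one, e.g. "[s2idle] deep". */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/power/mem_sleep", "r");
    char buf[128];

    if (!f) {
        perror("open /sys/power/mem_sleep"); /* older kernels lack this file */
        return 1;
    }
    if (fgets(buf, sizeof buf, f))
        printf("supported suspend modes: %s", buf);
    fclose(f);
    return 0;
}
```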

The desire to save power extends beyond the CPU itself—Intel has also added support for several low-power interfaces normally associated with ARM-toting tablets, including I2C, SDIO, I2S, and UART, along with more traditional SATA, USB, and PCI Express interfaces. All of these improvements, along with system-on-a-chip (SoC) versions of Haswell with a TDP of just 10W (down from the 17W in Ivy Bridge processors), should enable Intel to put Ultrabook-class performance into tablets with similar size, weight, and battery life to today's ARM-based offerings.
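
For a sense of what those tablet-style buses look like to software, here's a minimal sketch of an I2C transaction through Linux's i2c-dev interface. The bus number (1), device address (0x48), and register (0x00) are hypothetical placeholders, not anything tied to Haswell specifically:

```c
/* i2c_read.c: read one register from a hypothetical I2C device.
 * Build: cc i2c_read.c; requires the i2c-dev kernel module. */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) {
        perror("open /dev/i2c-1");
        return 1;
    }
    /* Address the (hypothetical) device at 0x48 on this bus. */
    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {
        perror("ioctl I2C_SLAVE");
        close(fd);
        return 1;
    }
    unsigned char reg = 0x00; /* register to read */
    unsigned char val;
    if (write(fd, &reg, 1) == 1 && read(fd, &val, 1) == 1)
        printf("register 0x%02x = 0x%02x\n", reg, val);
    close(fd);
    return 0;
}
```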

GPU performance: big increases, if you pick the right chip

In Ivy Bridge, improvements to graphics performance were much more noteworthy than improvements to CPU performance, and that song remains the same with Haswell, whose GPU is supposed to be up to twice as fast as the Intel HD 4000 in Ivy Bridge, depending on the chip you get (more on that in a moment). Intel has achieved this mostly by adding more hardware to the GPU—the actual architecture is similar to Ivy Bridge's, which was itself an improved version of the Sandy Bridge graphics chip. The generation after Haswell, codenamed Broadwell, is slated to introduce a revamped GPU architecture, which should bring further gains.

The integrated graphics processors in Sandy and Ivy Bridge came in two flavors: the more powerful GT2, which you know as the Intel HD 3000 and HD 4000 graphics processors, and the cut-down GT1, which came in the form of the HD 2000 and HD 2500. The high- and low-end GPUs from each generation are architecturally the same—each supports the same video decoding features, DirectX and OpenGL versions, and number of displays—but the higher-end GPUs have more of Intel's "execution units" (EUs) on board. The HD 4000 had 16 EUs to the HD 2500's six.

We don't know the exact number of EUs in Haswell's GT1 or GT2 parts, but we do know a bit about their features and general performance level: the new GPUs will support DirectX 11.1, OpenCL 1.2, and OpenGL 4.0, and should perform similarly to Ivy Bridge's while using about half the power.
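
Since OpenCL 1.2 support is on the list, one way to poke at the GPU from code is a standard device query; the compute-unit count it reports doesn't necessarily map one-to-one onto Intel's EU counts, but it gives a comparable ballpark. A minimal sketch:

```c
/* gpu_probe.c: query the first OpenCL GPU's name and compute units.
 * Build: cc gpu_probe.c -lOpenCL */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];
    cl_uint units;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL)
            != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL GPU found\n");
        return 1;
    }
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof name, name, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof units, &units, NULL);
    printf("%s: %u compute units\n", name, units);
    return 0;
}
```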

Haswell will double your graphics performance, but only if you pick the right GPU. (Image: Intel)

The real performance increase comes from a new tier unique to Haswell, called GT3—this part essentially doubles the number of EUs found in GT2, delivering about twice the graphics performance while using the same amount of power as the Intel HD 4000. The performance implications are clear: AnandTech has posted a video from the IDF floor that shows GT3 running Skyrim on High settings at 1920x1080. Next to it is a current laptop with Intel's HD 4000 GPU running the same game at the same (apparent) framerate, but at a lower 1366x768 resolution and Medium settings. That's quite an exciting bump, especially if you want a thin-and-light Ultrabook that can game—the GPU also supports resolutions up to 4K, which should make it a good match for laptops and tablets with high-density displays.

The downside of GT3 is that it further muddles the already bewildering segmentation of Intel's CPU portfolio: currently, all Ivy Bridge mobile CPUs use the Intel HD 4000, but the desktop CPUs use a confusing mix of the Intel HD 4000 and HD 2500 products; with Haswell, mobile chips and SoCs can come with either GT2 or GT3, introducing the confusion of the desktop side to the mobile chips. Customers hoping for the GPU improvements found in GT3 may be sorely disappointed if OEMs cheap out and use GT2-equipped CPUs in all of their machines.

Other noteworthy improvements to the GPU in Haswell include a dedicated Video Quality Engine, which can decode video without waking up the rest of the GPU, and the ability to ramp up GPU clock speeds without also ramping up CPU clock speeds. Like Ivy Bridge, the Haswell GPUs support three monitors as long as at least one of the outputs is a DisplayPort—you can't run three monitors using three HDMI or DVI ports or any combination of the two.

Conclusions

If Intel's upcoming Atom processors are about competing with Cortex-A9- and Cortex-A15-based ARM processors, then Haswell is about bringing a whole new class of performance to tablets—benchmarks for Ivy Bridge Ultrabooks are five or six times higher than those for the quad-core Tegra 3 in the Nexus 7, to pick one example. ARM still has the edge in power usage, since even at full tilt ARM SoCs sip power compared to x86 SoCs, but the option to have that kind of performance in a tablet will be mighty tempting to a lot of companies and consumers.

All of that is to say nothing of the advancements that Haswell will bring to laptops and Ultrabooks, which stand to gain both better battery life and integrated graphics that you'd actually want to use for gaming; to desktops, which will be both more power-efficient and powerful than before; and to servers, which of course stand to benefit from all the power and performance enhancements of the consumer devices. Intel's confusing portfolio aside, Haswell looks like another solid incremental improvement on what came before.

Intel is taking a "mobile first" approach to Haswell's launch, which means that laptops and tablets should be the first devices to see these processors when they launch in early 2013.

Reader Comments

You're doing it wrong. There is no place for integrated graphics in a desktop build, where form factor and (to a lesser extent) power consumption aren't really factors. This is for laptops, tablets, and possibly/probably set-top devices (think HTPCs).

Ridiculous. In what universe is power consumption not a factor in a desktop? In most of the world, power isn't free.

Also, heat and space are considerations, given that most people don't want a tower, or even a mini tower, standing around if they can avoid it. AIO desktops are popular because they take up little space and can be aesthetically pleasing to put in the corner of a living room. They don't have much room for discrete graphics above laptop-class GPUs, and heat is ALWAYS an issue in smaller spaces.

Besides, even the HD 4000 is fast enough for the vast majority of consumers, laptop or desktop. MOST people don't game on their computers, at least not anything more demanding than Facebook and Flash games. The HD 4000 will run WoW and the like fast enough that most people won't care to spend money on a discrete GPU.

On the Linux front, Intel is the new big swinging thing with all the work they have put into their open source drivers. No blob garbage needed. This shows that the story from AMD and NVidia about there being no possible way to do open source drivers has been complete garbage. From a performance standpoint, Intel integrated graphics already exceed top-end NVidia and AMD discrete cards if you can't use the blobs for various reasons. In my case I have a Southern Islands card that does all of its work on the CPU because the open source drivers don't support the cards beyond the very basics. All 3D is done on the CPU. No video decoding at all. So really there is no point at all in buying a new AMD graphics card. Either get something old or don't bother. AMD used to own the value line, but I just don't see them being able to compete going forward. The new open source Intel video drivers have changed the value proposition drastically.

The fact that "tock" is a new architecture and "tick" is a die shrink always seemed backwards to me. Seems the architecture would lead with the tick, and then the shrinking tock would follow.

I don't think I've ever heard anyone say "tock-tick".

/rant

I'm kind of spitballing here, but if I were to guess, it's a concession by Intel that the die shrink must happen before it can create new and better processors. Given the size constraints we're talking about (desktop and laptop processors have stayed at more or less the same physical size for decades now), just making the chip bigger isn't really a viable option.

The die shrink and the new technologies are separate achievements requiring different kinds of innovations. It makes sense to do things that way, to my mind, but I think having the shrink as the 'tick' is indicative of the fact that it needs to happen before you can add new instruction sets and silicon to bolster an existing design.

Of course, I could just be overthinking it.

On a related note, I second the question of what Intel's plan is post-Skymont, as it's now only one more shrink and a new architecture away with no game plan past that point. 2016 or so is going to be an interesting time for computing as we see if there's a way forward for traditional processors that isn't going to involve major concessions to continue increases at the present rate.

We're actually two nodes away from Skymont: 14nm in 2014, and then Skymont at 10nm in 2016 (hopefully).

I think you're confusing Intel not announcing their roadmap with Intel not having a roadmap.

The conundrum I have is that computer tech seems to be so robust, yet changes so fast. I've got 10-year-old rigs still running with original parts. They're woefully underpowered, but... they're still running. Wish cars were that resilient. It's upsetting to build a new rig with "current tech" just to see something new and shiny come out in the next six months. The thing you have still works, but... damn, the new features! You start turning into Gollum with the Ring. "Me wants it!"

Less mechanical wear and tear, basically. With a car you have a whole bunch of bearings that wear out over time. Most parts in a computer just sit there, with the most likely failures being the mechanical parts of HDDs and fans.
