Haswell's GPUs more than double the performance of Ivy Bridge, sometimes.

Every year, Intel releases a new line of CPUs: in 2011 we got Sandy Bridge, last year brought us Ivy Bridge, and this year is bringing us Haswell. Each of these new architectures typically increases performance by a bit while consuming the same amount of power (and often less). However, the really interesting thing about Intel's recent processors has been their GPUs, not their CPUs.

The company's graphics improvements since the beginning of the decade have been particularly impressive: pre-Sandy Bridge GPUs were near-useless for serious gaming, but Ivy Bridge's HD 4000 made playing high-end games on Intel graphics possible, at least at lower resolutions and settings. This isn't something you could have said in 2010.

With Haswell, Intel will continue to up its game (and its gaming performance), and the company has just officially announced details about its latest GPUs, the performance that we can expect from them, and the products we can expect them to appear in.

“HD Graphics” becomes “Iris”

Ivy Bridge CPUs can come paired with one of three GPUs: HD Graphics 4000, the highest-performing (and most common) part; HD Graphics 2500, a less performant version of the same architecture that uses six of Intel's graphics execution units (EUs) rather than the HD 4000's 16; and plain HD Graphics, which performs at roughly the same level as the HD 2500 but removes Intel's QuickSync video encoding feature and a few other video-related capabilities.

Haswell's new GPUs expand the number of performance tiers to five, covering different types of devices, power envelopes, and price points, just as Intel's CPU portfolio has grown over the years. The three top-performing parts are the ones Intel is focusing on today.

Intel remains committed to cramming more GPU performance into the razor-thin bodies of Ultrabooks. Credit: Intel

We'll start with Ultrabooks, which two of these three GPUs are designed to fit inside. The higher-performing of these two GPUs sheds the HD Graphics moniker for a new name: the Iris Graphics 5100. Intel's 3DMark11 tests show this part as being just over twice as fast as Ivy Bridge's HD 4000 chip, and while we'll actually want to play some games with both GPUs to see how these benchmarks translate to real life, that's an impressive gain.

The one potential wrinkle is that the Iris 5100 is confined to chips with a 28W thermal design power (TDP), a fair bit higher than the 17W TDP of the Ultrabook-class Sandy Bridge and Ivy Bridge CPUs. We've talked before about how Intel's TDP ratings (and the newer SDP ratings) are a bit nebulous, but it may be the case that these chips are confined to slightly larger (think 13-inch) Ultrabooks because of power or thermal constraints. It's also worth noting that the Ultrabook specification is Intel's to define, and the company can always tweak it to accommodate a slightly larger, more powerful chip as it deems necessary.

For smaller (and perhaps also cheaper) Ultrabooks, there's the lower-performing of the two Ultrabook parts, called the Intel HD Graphics 5000. Intel's benchmarks show this part as being about 1.5 times as quick as the HD 4000 part, which is still a nice increase given that chips with this GPU fit into a 15W TDP envelope. Note that this chip doesn't use the "Iris" moniker, which Intel is limiting to its fastest GPUs for now.

These are the GPUs we'll probably see most often, given the prevalence of Ultrabooks in today's PC market, but there's also an even faster GPU intended for larger laptops (think a 15-inch MacBook Pro compared to a 13-inch MacBook Air): the Iris Pro Graphics 5200. It's similar to the Iris 5100 in execution resources, but it adds a small amount of eDRAM to the CPU package to increase performance. This performance comes with a fairly heavy power cost—the H-series quad-core processors that use the Iris Pro 5200 have a 47W TDP compared to the dual-core, Iris 5100-equipped U-series CPUs' 28W—but that's to be expected when you're adding extra memory to the package and going from two cores to four to boot. As Ars contributor David Kanter writes, this eDRAM should also be usable by the processor cores as another level of cache memory, increasing CPU performance as well.

The Iris Pro 5200 GPU is about 2.5 times as fast as Intel's HD 4000 in 3DMark11 (the second bar from the right) and can be tweaked to be a bit faster given enough cooling capacity (the striped bar at the right). Credit: Intel

Intel benchmarks the Iris Pro 5200 at about 2.5 times faster than the HD Graphics 4000 GPU, which if true puts it in roughly the same league as today's midrange dedicated GPUs. If Intel's partners can incorporate that level of performance into a laptop without also needing to devote extra motherboard space to a dedicated GPU and graphics memory, that opens the door to both smaller laptops and similarly sized laptops with larger batteries.

In a desktop, the Iris Pro 5200 will have more room to ramp up its clock speeds. Intel says it should be about three times as fast as the HD 4000. Credit: Intel

The Iris Pro 5200 will also be making an appearance in certain desktop CPUs. Just as Ivy Bridge desktop CPUs use an S suffix to denote a lower-TDP chip and a K suffix to denote an unlocked, overclocker-friendly chip, Haswell's desktop CPUs will introduce an R suffix to indicate that they use the Iris Pro 5200. The desktop version of the 5200 has a bit more thermal headroom, and thus it ought to be a bit quicker than its laptop counterpart—Intel's 3DMark11 benchmarks show it performing nearly three times as fast as the HD 4000.

Finally, we get to the bottom two tiers of Haswell's GPU lineup. Intel isn't saying much about how these perform relative to the Ivy Bridge parts, but these lower-end chips have always been less about raw 3D performance and more about the API levels and features supported. If past is prologue, the Intel HD Graphics 4600, 4400, and 4200 parts (as well as the server-and-workstation-oriented P4700 and P4600 parts) will likely support the same features as the 5000-series parts, but with less raw 3D performance. The lowest-end part, still called simply "Intel HD Graphics," will likely remain as it is—a basic GPU to be paired with low-end Pentium and Celeron CPUs.

Most of these new Haswell GPUs should support the same general feature set: Direct3D 11.1, OpenGL 4.0, and OpenCL 1.2; a new, faster version of Intel's QuickSync video encoding engine; DisplayPort 1.2, which supports a higher bandwidth than the HD 4000's DisplayPort 1.1; improved support for 2K and 4K resolutions; and something called "three screen collage display," which appears to allow you to stretch one "logical" monitor across up to three physical monitors (AMD and Nvidia have supported a similar feature for several product generations now).
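For the curious, there's a quick way to see which OpenCL level a given GPU's driver actually exposes. This is a minimal sketch using the standard OpenCL platform API (nothing here is Intel-specific, and the name and version strings it prints are simply whatever the installed driver reports, so treat it as illustrative rather than as Intel documentation):

    /* Minimal OpenCL capability probe: lists each GPU device and the
       OpenCL version string its driver reports. Build with -lOpenCL. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);

        for (cl_uint p = 0; p < nplat; p++) {
            cl_device_id devs[8];
            cl_uint ndev = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                               devs, &ndev) != CL_SUCCESS)
                continue;  /* this platform has no GPU devices */

            for (cl_uint d = 0; d < ndev; d++) {
                char name[256], version[256];
                clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                                sizeof(name), name, NULL);
                clGetDeviceInfo(devs[d], CL_DEVICE_VERSION,
                                sizeof(version), version, NULL);
                /* e.g. "Intel(R) HD Graphics ...": "OpenCL 1.2 ..." */
                printf("%s: %s\n", name, version);
            }
        }
        return 0;
    }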

Conclusions: Mo' parts mo' (potential) problems

Regardless of 3D performance, most of the Haswell GPUs should support the same basic features. Credit: Intel

We'll start with the good: if actual games see the same performance improvements as 3DMark, then Haswell's GPUs will be a significant step up over Ivy Bridge, at least most of the time. The Iris products in particular look to continue integrated graphics' incursion into the low- and mid-range dedicated graphics market, leaving fewer places for dedicated chips from AMD and (especially) Nvidia to hide. Intel's own benchmarks show that performance will likely vary from title to title, though—3DMark11 consistently showed larger gains than 3DMark06 or 3DMark Vantage, for example.

The downside is that this flood of new GPUs further complicates Intel's already-plenty-complicated CPU lineup. Currently, if you're in the market for an Ultrabook, it's a safe bet that you'll be getting the HD 4000 GPU (even if the part's maximum clock speed can vary by 100MHz or so from CPU to CPU). If you're buying a Haswell system, things aren't as clear-cut. On top of the standard CPU questions—dual-core versus quad-core, Hyper-Threading or not, and clock speed, to say nothing of more obscure extensions like TSX—you'll also need to keep track of which GPU you're buying.

It's possible that certain form factors will gravitate toward and informally standardize around certain GPUs—13-inch Ultrabooks around the Iris 5100 and 11-inch Ultrabooks around the HD 5000, for instance—but we won't know that for sure until we've gotten a good look at the first crop of Haswell laptops.

We'll be writing more about Haswell as its launch date nears—Intel's teaser page suggests the official launch will happen on June 3, just before Computex—and as we get our first Haswell systems in for review we'll be paying special attention to just how much its various GPUs improve over the old ones.

117 Reader Comments

I have seen nothing from the actual HandBrake developers about using QuickSync. All I see is something about Intel open-sourcing their encoding engine. H.264 is handled by libx264, the shared-library version of x264, which doesn't use QuickSync due to the lack of low-level controls. x264 is already as fast as QuickSync at the same quality level. QuickSync isn't really a CPU "feature" so much as a library that Intel's developers have specially optimized for Intel processors; the x264 devs are already able to use AVX2 just like the Intel devs.

First let me say I have zero interest in Intel's GPU. But to suggest that QuickSync and x264 are remotely equivalent is just silly. QuickSync's quality isn't bad (not as good as x264's higher-quality options), but it is faster than ANYTHING else; x264 just can't touch it no matter how much you dial its quality back. And no, it isn't a software library; it is dedicated hardware in the GPU (actual fixed-function hardware for most of it, though it does run partly on the shaders as well). So it's a GPU "feature".
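If anyone wants to verify what their own machine offers: FFmpeg's libavcodec wraps QuickSync through Intel's Media SDK under the encoder name "h264_qsv". A minimal probe, assuming a reasonably recent FFmpeg built with QSV and libx264 support (whether each encoder shows up depends entirely on how the library was built):

    /* Probe libavcodec for the QuickSync and x264 H.264 encoders.
       Build with: cc probe.c $(pkg-config --cflags --libs libavcodec)
       (Very old FFmpeg versions also need avcodec_register_all().) */
    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    int main(void) {
        const AVCodec *qsv  = avcodec_find_encoder_by_name("h264_qsv");
        const AVCodec *x264 = avcodec_find_encoder_by_name("libx264");
        printf("h264_qsv (QuickSync hardware): %s\n",
               qsv  ? "available" : "not built in");
        printf("libx264  (software encoder):   %s\n",
               x264 ? "available" : "not built in");
        return 0;
    }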

The problem Intel has is physics. More GPU performance requires more power and more cooling, and these are not particularly compatible with Surface- and Ultrabook-style designs. Still, the potential performance at fairly low power consumption might make a tablet a viable "real" PC that can be a portable all-in-one machine. And with Thunderbolt, the possibility of a graphics boost when you dock is also there. Could be very interesting.

Oh yes, let's concentrate on the GPU (again) and ignore the elephant in the room: CPU performance will be the same. It has remained flat since the advent of Wolfdale. And even that wasn't an increase in IPC, but an increase in GHz because of the newer fab process.

Wasn't that supposed to be how tick...tock works?

Indeed: and it is how tick...tock is working. A 10% improvement in *average* instructions per cycle, from what was already the absolute best instructions-per-cycle available, is fantastic work. We're way into diminishing-returns territory; the work required to get 10% is incredible.

If you're looking at peak performance, Sandy Bridge / Ivy Bridge doubled it for floating-point over Nehalem / Westmere, and Haswell doubles it again for floating-point and now integer SIMD; the FMA instruction allows you to do some interesting maths optimisations.
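To make that concrete, here's a sketch of the kind of loop FMA speeds up: a dot product where the multiply and the add, previously two instructions per batch of elements, fuse into one. The dot() helper is just an illustrative name; this uses the FMA intrinsics from immintrin.h and needs something like gcc -O2 -mfma (or -march=haswell) plus an FMA-capable CPU to run:

    /* Dot product using fused multiply-add: acc = a*b + acc in one
       instruction per 8 floats. Compile with: gcc -O2 -mfma dot.c */
    #include <stdio.h>
    #include <immintrin.h>

    float dot(const float *a, const float *b, int n) {
        __m256 acc = _mm256_setzero_ps();
        for (int i = 0; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            /* One fused op instead of a separate multiply and add. */
            acc = _mm256_fmadd_ps(va, vb, acc);
        }
        /* Horizontal sum of the 8 partial sums.
           (Tail elements beyond a multiple of 8 are ignored here.) */
        float tmp[8], s = 0.0f;
        _mm256_storeu_ps(tmp, acc);
        for (int i = 0; i < 8; i++) s += tmp[i];
        return s;
    }

    int main(void) {
        float a[16], b[16];
        for (int i = 0; i < 16; i++) { a[i] = 1.0f; b[i] = 2.0f; }
        printf("%f\n", dot(a, b, 16));  /* expect 32.0 */
        return 0;
    }

Besides issuing half the instructions, the fused form also skips the intermediate rounding step, which is where some of those interesting maths optimisations come from.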

Intel does offer server CPUs without an iGPU. The iGPU is power gated as well... it being on/off die really doesn't make any difference.

The "iGPU" is still present, albeit snipped and partially still active.

Not on the actual server CPUs (Xeon E5 and friends); Xeon E3 is very close to Sandy/Ivy Bridge, but I've never seen convincing circumstances where you'd buy an E3 rather than the consumer model.

Quote:

The engineering effort, and die space, is wasted on a GPU that you will rarely or never use.

That's much more true on a machine where I'm obliged to fit a whole discrete GPU just to get my compute node to boot up! I'll be ordering an i7-4770 on release day, and its graphics will be used only for half an hour to install Linux, and five minutes to check it's booting properly once per power cut.

Yes, I'd rather have the die space used by eight cores and GMA950-level integrated graphics, but I appreciate I'm an immaterial part of the market.

Never did have an issue with Intel graphics. I always thought they performed as well as they were designed to. It was the end users who expected too much. How many times did I read in a gaming forum about someone trying to play Crysis on an Intel graphics chip? Many times it was not only the graphics that were weak but also the CPU and memory. In the end, some people just bought too little PC for what they wanted to do and then decided to blame Intel. I myself prefer Intel graphics because I know I will not be doing any 3D gaming, and I would prefer a graphics chip with a focus on 2D performance and energy savings. If users would simply buy based on what they need and not on cost, they would end up with a system that performs to their expectations. Maybe this is why PCs get lower satisfaction ratings?

I bought a $250 Gateway desktop a while ago, with a Celeron CPU and Intel HD graphics. It handles everything quite well: it does HD video, is responsive enough, and uses very little energy. It has almost no upgrade possibilities, but then again, why should it? I would never intend to buy such an inexpensive PC and then proceed to spend hundreds trying to make it a gaming machine.

Intel has made leaps and bounds in improvements to its graphics solutions. But as someone else said, it's still a mainstream graphics solution. That's what most mainstream PC users need.

Still a mainstream solution though. It's not just the raw graphics power - all too often Intel's drivers are of poor quality.

That being said, it doesn't really concern us tech enthusiasts, for whom it will not be as good as a discrete GPU. Compounding the problem, the integrated GPU wastes valuable die space (which could be used for something enthusiasts find useful, such as more cache) and uses power (unlike on Xeons, it cannot be switched off, and it's present on all non-E CPUs), which means more heat output.

For enthusiasts, the integrated GPU will remain of negative value, although I will note that the gains have been impressive compared to Sandy/Ivy Bridge.

Sigh - if only we had better IPC gains than the 10% that Haswell brings over Ivy.

There are laptops with discrete GPUs that you can toggle on/off to save power (or automatically, based on the task). If the integrated GPU is potent enough for all of my normal computing, I'd love to be able to switch off the twin space heaters in my case when not gaming. I think the power savings from desktop hybrid graphics would be significant.

One thing I like about Intel's integrated graphics is that their drivers are open, and I don't have to futz around with driver issues on my budget linux box. I don't really care about playing the newest and most graphically intensive games, but I like to have the option to play some games at lower settings. I'm also not an expert at Linux by any stretch, so it's nice to keep things simple. As gaming on Linux becomes easier (thanks Steam!), having integrated graphics removes a lot of hassle for having a cheap system that does what I want.

I'm not likely to pick one of these up when it is brand new and still expensive, but it'll be nice in a few years when the generation after this comes out.

I don't believe Steam has made gaming significantly easier on Linux at all. Almost every game that was available to me already had native Linux binaries and installers, and the couple that did not were already running in a Wine environment for me, as they were quite old. In fact, Steam has made things slightly cumbersome in some cases. For example, many games that you may have purchased through previous Humble Bundles allow activation via Steam.

You go to Steam to find out that quite a few hand out Steam keys but aren't actually available there for Linux, while your activation gets used for the couple that are. And I find it interesting that you are pro-Intel due to open drivers in one sentence and then praise Steam in another, which is a significantly locked-down DRM scheme wrapped in a digital distribution platform.

I personally have gamed with proprietary Nvidia drivers on Linux for many, many years and feel that they have offered much more to the Linux gaming community than Valve has thus far. Of course there are open drivers for Nvidia as well, but you would be doing yourself a disservice to avoid the official drivers if an Nvidia chipset is an option for you.

+1 on the drivers issue. Intel drivers are generally pretty shoddy, but more importantly, the rollout of new drivers and the subsequent rollout of those updates through the hundreds of custom laptop installs means that whenever I see a weird gfx issue, it's a safe bet it's on Intel.

Without a solid driver system their hw will always be something of a running dev joke.

The term "Intel integrated graphics" has become synonymous with "won't run current games, will barely run older games at low rez" with me thanks to laptops I had owned pre-2008 (before going back to beasty desktop rigs at home).In my circle of friends and co-workers, this connotation is also true.

Intel would need to re-brand that term (that is, never use those words in that sequence again and make up another term) for me to consider buying a laptop with "Intel integrated graphics" for anything more than web surfing... which my Chromebook does perfectly fine. ;-)

Still a mainstream solution though. It's not just the raw graphics power - all too often Intel's drivers are of poor quality.

That being said, it doesn't really concern us tech enthusiasts, for whom it will not be as good as a discrete GPU. Compounding the problem, the integrated GPU wastes valuable die space (which could be used for something enthusiasts find useful, such as more cache) and uses power (unlike on Xeons, it cannot be switched off, and it's present on all non-E CPUs), which means more heat output.

For enthusiasts, the integrated GPU will remain of negative value, although I will note that the gains have been impressive compared to Sandy/Ivy Bridge.

Sigh - if only we had better IPC gains than the 10% that Haswell brings over Ivy.

There are laptops with discrete GPUs that you can toggle on/off to save power (or automatically, based on the task). If the integrated GPU is potent enough for all of my normal computing, I'd love to be able to switch off the twin space heaters in my case when not gaming. I think the power savings from desktop hybrid graphics would be significant.

I would buy that if the 'turn on discrete gpu' button was labeled 'Turbo' instead.

One huge advantage of using Intel's integrated GPU is that the system can make use of QuickSync for certain applications. Currently, one would have to choose between QuickSync and mediocre graphics. With Iris, this might just make the choice a bit easier.

What applications are using QuickSync these days? I know OS X uses it for Airplay video mirroring, but at some point I remember reading that even Handbrake didn't take advantage of it.

The problem with these weird (in that they are not obviously CPU-like) additions to the SoC is the same as the problem of using a GPU for general-purpose computing: the devices do not virtualize well. This doesn't mean that they don't work with VMWare, it means that they don't offer the facilities that allow an OS to let every app think it has exclusive control of the facility. When it comes to RAM, or CPU, or disk, or network, an app can do what it likes and things work. The app doesn't have to explicitly negotiate access with every other app, rather the OS (through a variety of HW means like interrupts and page protections) mediates and allows every app to think it can do what it wants with the facility. This is not true for GPUs (and related items like QuickSync). So, for example, GPUs don't page like the rest of the system, but more relevant to us, they also don't get interrupted and context-switched like the CPU does.

It's not clear exactly what the best way forward is. Obviously there are time-critical aspects to at least some GPU usage which mean that, unlike most CPU usage, it's not of value to anyone if three apps all get to use the GPU at one third real-time speed. On the other hand, this lack of virtualization is really holding back GPGPU (because you can't write any sort of calculation that you just fire up and run for a long time; you have to manually break the calculation up into segments each of which runs for less than the maximum allowed run time), and it means that facilities like QuickSync are limited to very specialized usages rather than being generally available.
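To illustrate that manual segmenting: the usual workaround is a host-side loop that issues many short launches instead of one long one, so no single launch trips the driver's watchdog. A hypothetical OpenCL sketch (run_in_slices is an invented name, and the context/queue/kernel creation boilerplate is assumed to exist already):

    /* Sketch: run a big 1-D computation as many short kernel launches
       so no single launch exceeds the GPU's allowed run time. Assumes
       the caller already created the context, queue, and kernel.
       Build against the OpenCL headers and library. */
    #include <CL/cl.h>

    void run_in_slices(cl_command_queue queue, cl_kernel kernel,
                       size_t total_items, size_t slice_items) {
        for (size_t off = 0; off < total_items; off += slice_items) {
            size_t count = total_items - off;
            if (count > slice_items)
                count = slice_items;  /* the last slice may be short */

            /* Each launch covers only [off, off+count): short enough
               to finish before the watchdog fires, and the GPU is
               free for other apps between slices. */
            clEnqueueNDRangeKernel(queue, kernel, 1,
                                   &off,    /* global work offset */
                                   &count,  /* global work size   */
                                   NULL, 0, NULL, NULL);
            clFinish(queue);  /* wait so slices don't pile up */
        }
    }

It works, but it's exactly the kind of hand-rolled scheduling the OS does for you on the CPU side, which is the parent's point.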

Ideally AMD's new hUMA stuff provides not just generic TLB & VM support, but also generic interrupt and context-switching support. That would at least provide a framework within which the OS vendors could now experiment (apps could, for example, then reserve some level of throughput). But the hUMA PR so far has not mentioned generic interrupt support, so we may have to wait for a few years more. (And, Intel, WTF do we have to wait for AMD to keep making these sorts of advances? Do you not have a single damn OS expert in your employ who can suggest to you the facilities that GPUs need going forward?)

You do realize laptop sales eclipsed desktop sales many years ago? Most people don't want their personal computer to be a desktop.

Only a very few people out there care about GPUs faster than Intel's iGPUs, as well. Since Sandy Bridge, Intel's GPUs have actually been respectable (held back by the drivers, but shitty drivers are the norm in the GPU world... Nvidia being the only one that employs quasi-competent driver developers), making gaming more accessible to the majority of users out there as well.

Another poster already expressed this, but most people want one device, and they want that device to be portable. Intel, ironically, is addressing this possibility with Thunderbolt, which could in the future be used to drive a more powerful graphics solution located in the display (hopefully as an option card). The proverbial "most people" probably don't care about a "discrete GPU," I agree, but they do care about the experience.

There is zero evidence that people WANT a single device. What people want is a pleasant computing experience that is cheap. For a long time this meant a single device (both for cheapness and because computers did a lousy job of syncing between machines). But both these problems (cost and sync) become less of an issue every year.

Your argument is no different from claiming that people WANT a single TV or a single car or a single coat. All the evidence suggests that, if they can afford it, they buy multiple TVs, multiple cars, multiple coats, each optimized to a certain task, and with the convenience of multiple devices outweighing the extra cost.

Make no mistake: the future belongs to those device vendors skating to where the puck will be in this respect (Apple certainly, Google to some extent), NOT to those skating to where the puck was five years ago (MS unless they make a hard right turn regarding Win8).

The term "Intel integrated graphics" has become synonymous with "won't run current games, will barely run older games at low rez" with me thanks to laptops I had owned pre-2008 (before going back to beasty desktop rigs at home).In my circle of friends and co-workers, this connotation is also true.

Intel would need to re-brand that term (that is, never use those words in that sequence again and make up another term) for me to consider buying a laptop with "Intel integrated graphics" for anything more than web surfing... which my Chromebook does perfectly fine. ;-)

Look dude, Intel IS NOT CREATING THIS GPU FOR GAMERS. How many fscking times do you need to be told that? This GPU is created for the 90% of the population who don't give a damn about the latest games but appreciate the lower cost and power draw an integrated GPU offers.

You are like a guy who wanders into a Harley Davidson dealership and starts complaining that none of these "motorcycles" offers a trailer hitch, and so they are all useless.

From these comments you would assume that any Intel integrated GPU is useless/wasted space/pointless.

I must be the only person on the internet grateful that my i5-2500K has a decent integrated GPU. I didn't have a lot of money to build a new box so I decided to put most of my money into the CPU and hope I would have money for a good video card later.

It doesn't make sense to me to buy a mid-range GFX card for $150 when the gains from buying a high-end one for $250+ are so great. A good high-end GFX card will last years, while a midrange one will be outpaced quickly. However, I have not been able to save up the money for one yet, but that hasn't stopped me from playing games.

Just the other day I was able to run Oblivion on the HD 3000 at 720p with most options turned up, and it looked great. Can I play the latest and greatest maxed out? No, but I can't afford the $60 for a new game anyway. I live for Steam sales, buying 3-5 year old games for $5, and I can play them just fine on an HD 3000, which means these new integrated GPUs will be even better. Not everyone has tons of money to blow on PC parts. I think it's a good thing that people buying cheap pre-made PCs and laptops have the ability to play PC games, as it expands the market for them.

That's just it. Intel welded the GPU to the CPU not to make it cheaper for you, but to GUARANTEE more money for Intel, and incidentally make it harder for AMD to compete with them. It's just like Microsoft's old "you have to pay a Windows license fee for every computer you ship, regardless of whether Windows is installed on it" contracts, only with hardware instead.

Uh, no. AnandTech recently did a story on the difference between an old Core 2 Duo and current Ivy Bridge designs and showed a mainstream IVB part (the i7-3770K) performing at 4-6 times the speed of the old C2D on computation-heavy tasks. Even games (which are heavily dependent on the GPU) delivered a 50-100% increase in performance. These improvements go well beyond what can be explained by the increased core count and frequency.

Thanks for the link. Correcting for my 3.0GHz E8400, Intel's latest greatest was 30% faster in a single core benchmark.

What Haswell does do a lot better is consume drastically less power, and it has the new TSX instructions, which should make huge differences in many multi-threaded workloads.

I really doubt TSX will dramatically improve anything much; in any case it will take years to be adopted, since to really take advantage of it you need to assume an optimistic locking model. Non-TSX code paths will generally be pessimistic (or at least use lots of fine-grained locks), and that is one hell of a structural change in something that's quite hard to debug. HLE (part of TSX) is far easier to retrofit but is unlikely to make much of a difference in real-world situations (it can potentially be slower in high-contention cases).

However, I think everyone is forgetting AVX2/BMI; they look a lot more interesting from a performance point of view and are far easier to implement.

You can change pessimistic locks from instructions that wait on memory into regular non-waiting updates. The change to the lock itself would only be one instruction, but you do need to add the extra TSX instructions to begin and end the transaction, plus a failure path that runs the "normal" way.

Essentially, any code that makes use of library or system locking calls can be instantly upgraded to higher performance with no code changes in the application.

TSX may actually make some algorithms faster using locking than going lockless. The biggest issue with locking is waiting on memory latencies; otherwise, locks are quite cheap instruction-wise.
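For the curious, here's roughly what that begin/end-plus-fallback shape looks like with the RTM half of TSX. This is a hypothetical sketch (the increment function is invented) using the _xbegin/_xend intrinsics from immintrin.h; it needs a TSX-capable CPU and something like gcc -mrtm to build:

    /* Lock-elision sketch: attempt the critical section as a hardware
       transaction; if it aborts, fall back to really taking the mutex.
       Compile with: gcc -O2 -mrtm elide.c -lpthread */
    #include <immintrin.h>
    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    void increment(void) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            /* Optimistic path: no lock taken. Any conflicting write by
               another thread aborts the transaction and we land in the
               else branch. A production version would also read the
               lock word here and _xabort() if it's held; a single
               shared counter happens to be safe without that. */
            counter++;
            _xend();
        } else {
            /* Failure path: the "normal" pessimistic way. */
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
    }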

One huge advantage of using Intel's integrated GPU is that the system can make use of QuickSync for certain applications. Currently, one would have to choose between QuickSync and mediocre graphics. With Iris, this might just make the choice a bit easier.

What applications are using QuickSync these days? I know OS X uses it for Airplay video mirroring, but at some point I remember reading that even Handbrake didn't take advantage of it.

The problem with these weird (in that they are not obviously CPU-like) additions to the SoC is the same as the problem of using a GPU for general-purpose computing: the devices do not virtualize well. This doesn't mean that they don't work with VMWare, it means that they don't offer the facilities that allow an OS to let every app think it has exclusive control of the facility. When it comes to RAM, or CPU, or disk, or network, an app can do what it likes and things work. The app doesn't have to explicitly negotiate access with every other app, rather the OS (through a variety of HW means like interrupts and page protections) mediates and allows every app to think it can do what it wants with the facility. This is not true for GPUs (and related items like QuickSync). So, for example, GPUs don't page like the rest of the system, but more relevant to us, they also don't get interrupted and context-switched like the CPU does.

It's not clear exactly what the best way forward is. Obviously there are time-critical aspects to at least some GPU usage which mean that, unlike most CPU usage, it's not of value to anyone if three apps all get to use the GPU at one third real-time speed. On the other hand, this lack of virtualization is really holding back GPGPU (because you can't write any sort of calculation that you just fire up and run for a long time; you have to manually break the calculation up into segments each of which runs for less than the maximum allowed run time), and it means that facilities like QuickSync are limited to very specialized usages rather than being generally available.

Ideally AMD's new hUMA stuff provides not just generic TLB & VM support, but also generic interrupt and context-switching support. That would at least provide a framework within which the OS vendors could now experiment (apps could, for example, then reserve some level of throughput). But the hUMA PR so far has not mentioned generic interrupt support, so we may have to wait for a few years more. (And, Intel, WTF do we have to wait for AMD to keep making these sorts of advances? Do you not have a single damn OS expert in your employ who can suggest to you the facilities that GPUs need going forward?)

On paper AMD sounds better, but the biggest issue isn't their hardware, it's their drivers.

Linux support, and even Windows issues, hamstring AMD all the time. They have been working on fixing drivers and hiring more driver engineers, but they keep laying people off, and it's hard to tell where quality may get cut.

After reading everything around the Web, I have a feeling that Intel is going to disappoint again. Why? Simply because whenever Intel starts to hype up its graphics, it has consistently and continuously disappointed.

Their drivers, as far as I am concerned, have seen no improvement in development speed. And they generally stop supporting them after a year or two.

Gotta love Intel piggybacking on competitors' marketing with their product names. The HD label was happily borrowed from AMD, and now we have something that sounds remarkably similar to an Apple product. You can trademark "Retina," but "HD" isn't so easy.

Would I be wrong to say that they're targeting this marketing at people who are too stupid to realise that GPUs and displays are different things? "As a part of the eyeball, Iris must be similar to Retina."

Also people who'll believe you'll get respectable performance for games out of it. In reality it's probably designed to give decent performance for the Windows GUI, then up-sold as a "gaming" solution.

Care to explain why they're using a 3D game benchmark to advertise their amazing new graphics, then? Or all the comparisons to a mid-range nVidia mobile chip they're making in the press?

More seriously, Intel invested a colossal amount of engineering effort and die area into developing an integrated GPU with on-package DRAM and more than twice as many graphics execution units. Have you seen the size of that thing compared to the CPU cores now? They even went and created a whole new brand to market it!

Bearing that in mind, consider that there are plenty of people in this thread telling us that Intel Integrated graphics work just great for them and asking why other people are complaining that they're weak (why they think those two facts are incompatible is beyond me, but long live human nature).

When we put those facts together, precisely who ARE these newer, "2x faster" graphics designed for? Because you seem to be arguing they're for people who are already happy with what they have. What a waste of engineering effort that would be if your argument held any water.

In an unrelated note, thanks to everyone for the down-votes on my comment about Intel's normalised performance figures. If anyone cares to explain exactly what was wrong about what I said, I'll make an edit...

I love how Intel only provided benchmarks relative to their own chips. "Twice the performance!" Yeah, well, two times a very low number is still a very low number.

This is how I look at it as well. The current HD 4000 is pretty much only good for 720p ultra-low-setting gaming. So if the new one is twice as fast and you want to do 1080p gaming, which is 2.25 times the number of pixels to pump out (1920×1080 = 2,073,600 versus 1280×720 = 921,600)... guess where you will be? Yeah, ultra-low stripped-down settings.

These will be fine for cheap laptops and tablets, but anyone who actually wants to play any real games on a desktop PC still needs a discrete card. That's pretty much the same as today, so effectively nothing has changed. People who like Facebook games will still like Facebook games, and on the CPU side, for anyone with a discrete card there has been little to no change. Realistically, if you have a 2500K, 3570K, 2600K, 3770K, or similar chip, there is nearly no advantage to upgrading.

My two-year computer upgrade cycle comes up next spring, Intel... make something better. I guess with AMD lagging so far behind, Intel really has no need to push CPU performance anymore.

I honestly think that it's not because AMD is lagging that Intel isn't pushing CPU performance. I think they're not doing it simply because there's no reason to! CPUs have been quite capable of performing tasks such as web browsing (including Flash), writing documents, spreadsheets, and slides for a long time now. I dare say since the Core 2 Duo era. And that's what the vast majority of people use computers for.

A faster CPU only gives an improvement in games, but games tend to show better increases in performance with a better graphics card anyway; i.e., spending an amount of money to upgrade your graphics gives you a vastly superior increase in game performance compared to spending the equivalent amount to upgrade your CPU.

Also, I'm not exactly sure, but didn't we reach some kind of limit on miniaturisation for the transistors in CPUs? I thought we reached a point where trying to stuff more transistors per unit area caused the heat to rise too much, so it wasn't possible to simply reduce the size, stuff more transistors in there, and get a boatload more performance.

Is it time to buy a high end graphics card for my desktop before AMD and nVidia go out of business and they can't be found anymore?

Just being sarcastic, but with these GPUs Intel will have locked up about 95% of the market. Not many crumbs left for the other guys to survive on.

I reckon Intel already has that much! And they still would even without an integrated GPU. It hasn't stopped nVidia and AMD from surviving. Well, debatable in the case of AMD... we'll see how they fare in the future.

I honestly think that it's not because AMD is lagging that Intel isn't pushing CPU performance.

Intel *is* pushing CPU performance, despite having no credible competition; they're adding execution units, changing instruction sets, redesigning the deepest internals of the cores. It's not at all clear what more you could ask them to do!

If they weren't pushing CPU performance, they'd have just kept selling Sandy Bridge, and returned twelve billion dollars in FinFET R&D costs and multi-exposure lithography tools as dividends to shareholders.

The term "Intel integrated graphics" has become synonymous with "won't run current games, will barely run older games at low rez" with me thanks to laptops I had owned pre-2008 (before going back to beasty desktop rigs at home).In my circle of friends and co-workers, this connotation is also true.

Intel would need to re-brand that term (that is, never use those words in that sequence again and make up another term) for me to consider buying a laptop with "Intel integrated graphics" for anything more than web surfing... which my Chromebook does perfectly fine. ;-)

Look dude, Intel IS NOT CREATING THIS GPU FOR GAMERS. How many fscking times do you need to be told that? This GPU is created for the 90% of the population who don't give a damn about the latest games but appreciate the lower cost and power draw an integrated GPU offers.

Right. The past is the past. The previous "Intel integrated graphics" setups were incapable of running games of their era/previous era. Many of us made some broad assumptions at the time, and learned our lesson. And the connotation I mentioned was formed then and has stuck around even today.

Now, look at the last picture in this article. It suggests that the new "integrated" system is capable of running modern games, even future DX11 games, such as Assassin's Creed, pictured, and delivering "Amazing performance" doing so. Is that referring to Assassin's Creed III, or the upcoming "AC4: Black Flag"? Or the original from 2007? I don't know. And based on experience and your own words, Intel is *not* creating this GPU for gamers. So... what's going on here? Mixed signals.

[For reference I have a desktop with an NVIDIA GTX 560 with 3D monitor and an AMD FX 8-core processor -- I'm not in the market for a laptop nor an Intel processor, nor am I under the expectation that a laptop and/or Intel processor would deliver the power of my current desktop.]

The "for gaming" discussion misses a major point. That most users have no idea about this stuff. They see that their laptop has "3D Xtraz Power!" on a sticker and thus assume that they can play anything they like.

Fast-forward to them trying to play some game they have purchased. It's utterly unplayable due to performance/support/driver issues. Half the time the drivers are a customized set that can't be updated (even if horribly bugged), and it's not practical to get hold of every laptop ever made to work out how to fix it.

So yet another potential gamer is lost to crappy integrated (usually Intel) parts.

Have any of you actually checked to see if the HD 3000/4000 can run games? Because they can. Maybe not the latest and greatest from 2012/2013, but I was able to play BioShock 1, Spec Ops: The Line, Oblivion, Fallout 3, etc. These may be older games, but they are still great, and no, I didn't have to turn all the options down.

Seriously, the new GPUs integrated into Intel chips are light-years ahead of the crap that was soldered onto the motherboard. Update your stereotypes already. Now if they would just work on their drivers some more...

And you wonder why hardcore PC gamers get a bad rep; maybe it's because of this "holier than thou" attitude over hardware.

One huge advantage of using Intel's integrated GPU is that the system can make use of QuickSync for certain applications. Currently, one would have to choose between QuickSync and mediocre graphics. With Iris, this might just make the choice a bit easier.

What applications are using QuickSync these days? I know OS X uses it for Airplay video mirroring, but at some point I remember reading that even Handbrake didn't take advantage of it.

Not many apps use QuickSync yet. More will if it's put into a platform that lots of people buy. I'm envisioning a day when I drop my 7-inch tablet into my pocket, take it to work, pop it into a dock with dual 24-inch monitors and work through lunch, take it out, grab a burger and check my personal emails, go back into the office, take it over to my boss and review on the fly everything I worked on in the morning while he gives me a rundown of the next thing to add or tweak. Then it's home to lounge on the couch for a Netflix vid while dinner warms up, swap to iTunes for some music during dinner, then pop it into my home dock for a few hours of my favorite MMO. Then take her over to my bed and set her on charge as my alarm clock.

And it's the only PC I used the whole day. If cell phone capability is integrated--or heck, just 4G, and I'll use Skype 24/7 as my contact number--it's the only device with a CPU I touch.

Power users have been mentioned, and there's simply no reason--I'll agree--for a desktop or high-end laptop to have a GPU on the die. Those both have plenty of room for the guy who wants 60 frames per second on three monitors with SLI and all the goodies, without melting anything or adding a fan that sounds more like a wind-simulating turbine. But I'm looking at this and thinking: compact might compete.

If J.Q. Consumer can do all the stuff he wants easily on a tablet and a dock, with monitors and keyboard and mouse sitting where he needs them... why does he need three computers in all those places? Why not just leave the dock there and have one tablet PC?

That's what Ubuntu is working on right now. They claim devices will ship in 2014.