Haswell's GPUs more than double the performance of Ivy Bridge, sometimes.

Every year, Intel releases a new line of CPUs: in 2011 we got Sandy Bridge, last year brought us Ivy Bridge, and this year is bringing us Haswell. Each of these new architectures typically increases performance by a bit while consuming the same amount of power (and often less). However, the really interesting thing about Intel's recent processors has been their GPUs, not their CPUs.

The company's graphics improvements since the beginning of the decade have been particularly impressive: pre-Sandy Bridge GPUs were near-useless for gaming, but Ivy Bridge's HD 4000 in particular made playing modern games on Intel's GPUs possible, at least at lower resolutions and settings. This isn't something you could have said in 2010.

With Haswell, Intel will continue to up its game (and its gaming performance), and the company has just officially announced details about its latest GPUs, the performance that we can expect from them, and the products we can expect them to appear in.

“HD Graphics” becomes “Iris”

Ivy Bridge CPUs can come paired with one of three GPUs: HD Graphics 4000, the highest-performing (and most common) part; HD Graphics 2500, a slower version of the same architecture that uses six of Intel's graphics execution units (EUs) rather than the HD 4000's 16; and plain HD Graphics, which performs at roughly the same level as the HD 2500 but removes Intel's QuickSync video encoding feature and a few other video-related capabilities.

Haswell's new GPUs expand the number of performance tiers to five, covering different types of devices, power envelopes, and price points, just as Intel's CPU portfolio has grown to do over the years. The three top-performing parts are the ones Intel is focusing on today.

Intel remains committed to cramming more GPU performance into the razor-thin bodies of Ultrabooks. (Image: Intel)

We'll start with Ultrabooks, which two of these three GPUs are designed to fit inside. The higher-performing of these two GPUs sheds the HD Graphics moniker for a new name: the Iris Graphics 5100. Intel's 3DMark11 tests show this part as being just over twice as fast as Ivy Bridge's HD 4000 chip, and while we'll actually want to play some games with both GPUs to see how these benchmarks translate to real life, that's an impressive gain.

The one potential wrinkle is that the Iris 5100 is confined to chips with a 28W thermal design power (TDP), a fair bit higher than the 17W TDP of the Ultrabook-class Sandy Bridge and Ivy Bridge CPUs. We've talked before about how Intel's TDP ratings (and the newer SDP ratings) are a bit nebulous, but it may be the case that these chips are confined to slightly larger (think 13-inch) Ultrabooks because of power or thermal constraints. It's also worth noting that the Ultrabook specification is a moving target, and Intel can always tweak it to fit a slightly larger, more powerful chip as it deems necessary.

For smaller (and perhaps also cheaper) Ultrabooks, there's the lower-performing of the two Ultrabook parts, called the Intel HD Graphics 5000. Intel's benchmarks show this part as being about 1.5 times as quick as the HD 4000 part, which is still a nice increase given that chips with this GPU fit into a 15W TDP envelope. Note that this chip doesn't use the "Iris" moniker, which Intel is limiting to its fastest GPUs for now.

These are the GPUs we'll probably see most often, given the prevalence of Ultrabooks in today's PC market, but there's also an even faster GPU intended for larger laptops (think a 15-inch MacBook Pro rather than a 13-inch MacBook Air): the Iris Pro Graphics 5200. It's similar to the Iris 5100 in execution resources, but it adds a small amount of eDRAM to the CPU package to increase performance. This performance comes with a fairly heavy power cost (the H-series quad-core processors that use the Iris Pro 5200 have a 47W TDP, compared to the 28W of the dual-core, Iris 5100-equipped U-series CPUs), but that's to be expected, given that you're integrating extra memory into the package and going from two cores to four to boot. As Ars contributor David Kanter writes, this eDRAM should also be usable by the processor cores as another level of cache memory, further increasing CPU performance as well.

The Iris Pro 5200 GPU is about 2.5 times as fast as Intel's HD 4000 in 3DMark11 (see the second bar from the right), and it can be tweaked to be a bit faster given enough cooling capacity (the striped bar at the right). (Image: Intel)

Intel benchmarks the Iris Pro 5200 at about 2.5 times faster than the HD Graphics 4000, which, if true, puts it in roughly the same league as today's midrange dedicated GPUs. If Intel's partners can incorporate that level of performance into a laptop without needing to devote extra motherboard space to a dedicated GPU and graphics memory, that opens the door to both smaller laptops and similarly sized laptops with larger batteries.

In a desktop, the Iris Pro 5200 will have more room to ramp up its clock speeds. Intel says it should be about three times as fast as the HD 4000. (Image: Intel)

The Iris Pro 5200 will also be making an appearance in certain desktop CPUs. Just as Ivy Bridge desktop CPUs use an S suffix to denote a lower-TDP chip and a K suffix to denote an unlocked, overclocker-friendly chip, Haswell's desktop CPUs will introduce an R suffix to indicate that they use the Iris Pro 5200. The desktop version of the 5200 has a bit more thermal headroom, and thus it ought to be a bit quicker than its laptop counterpart; Intel's 3DMark11 benchmarks rate it at nearly three times the performance of the HD 4000.
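To put all of Intel's claimed numbers in one place, here's a quick back-of-the-envelope sketch in Python. The multipliers are just the 3DMark11 ratios Intel quotes in this article (normalized to the HD 4000), not independent measurements:

    # Intel's claimed 3DMark11 performance, normalized to Ivy Bridge's HD 4000 (= 1.0).
    # These are marketing figures from Intel's slides, not measured benchmark scores.
    claimed_scaling = {
        "HD Graphics 4000 (Ivy Bridge)": 1.0,
        "HD Graphics 5000 (15W Ultrabook)": 1.5,
        "Iris Graphics 5100 (28W Ultrabook)": 2.0,
        "Iris Pro Graphics 5200 (47W H-series laptop)": 2.5,
        "Iris Pro Graphics 5200 (desktop R-series)": 3.0,
    }

    for part, multiplier in sorted(claimed_scaling.items(), key=lambda kv: kv[1]):
        print(f"{part:<46} {multiplier:.1f}x")

Whether real games track these ratios is exactly the sort of thing we'll want to test once hardware arrives.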

Finally, we get to the bottom two parts in Haswell's GPU lineup. Intel isn't saying much about how these perform relative to the Ivy Bridge parts, but these lower-end chips have always been less about raw 3D performance and more about the API levels and features supported. If past is prologue, the Intel HD Graphics 4600, 4400, and 4200 parts (as well as the server-and-workstation-oriented P4700 and P4600 parts) will likely support the same features as the 5000-series parts, but with less raw 3D performance. The lowest-end part, still called simply "Intel HD Graphics," will likely remain as it is—a basic GPU to be paired with low-end Pentium and Celeron CPUs.

Most of these new Haswell GPUs should support the same general feature set: Direct3D 11.1, OpenGL 4.0, and OpenCL 1.2; a new, faster version of Intel's QuickSync video encoding engine; DisplayPort 1.2, which supports a higher bandwidth than the HD 4000's DisplayPort 1.1; improved support for 2K and 4K resolutions; and something called "three screen collage display," which appears to allow you to stretch one "logical" monitor across up to three physical monitors (AMD and Nvidia have supported a similar feature for several product generations now).
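If you want to verify what a given Intel GPU actually exposes once drivers are installed, the OpenCL runtime is easy to interrogate. Here's a minimal sketch using the third-party pyopencl package (an assumption on our part; it requires pyopencl and an Intel OpenCL runtime to be installed, and the exact names and version strings will vary by driver):

    import pyopencl as cl  # third-party package: pip install pyopencl

    # Enumerate every OpenCL platform and device the installed runtimes expose.
    # On a Haswell system with Intel's runtime, the iGPU should report OpenCL 1.2.
    for platform in cl.get_platforms():
        print(f"Platform: {platform.name} ({platform.version})")
        for device in platform.get_devices():
            print(f"  Device: {device.name}")
            print(f"    OpenCL version: {device.version}")
            print(f"    Compute units:  {device.max_compute_units}")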

Conclusions: Mo' parts mo' (potential) problems

Regardless of 3D performance, most of the Haswell GPUs should support the same basic features. (Image: Intel)

We'll start with the good: if actual games see the same performance improvements as 3DMark, then Haswell's GPUs will be a significant step up over Ivy Bridge, at least most of the time. The Iris products in particular look to continue integrated graphics' incursion into the low- and mid-end dedicated graphics market, leaving fewer places for dedicated chips from AMD and (especially) Nvidia to hide. Intel's own benchmarks suggest that performance will vary from title to title, though: 3DMark11 consistently showed larger gains than 3DMark06 or 3DMark Vantage, for example.

The downside is that this flood of new GPUs further complicates Intel's already-plenty-complicated CPU lineup. Currently, if you're in the market for an Ultrabook, it's a sure bet that you'll be getting the HD 4000 GPU (even if the part's maximum clock speed can vary by 100MHz or so from CPU to CPU). If you're buying a Haswell system, things aren't as clear-cut. On top of the standard CPU questions (dual-core versus quad-core, Hyper-Threading or not, and clock speed, to say nothing of more obscure extensions like TSX), you'll also need to keep track of which GPU you're buying.
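To illustrate just how much there is to keep track of, here's a hypothetical cheat sheet in Python assembled only from the tiers and TDP figures mentioned in this article; real retail SKUs will add further clock-speed and feature variation:

    # A rough map of Haswell's announced GPU tiers, compiled from this article.
    # TDP figures describe the CPUs these GPUs ship in, not the GPU alone.
    haswell_gpu_tiers = [
        {"gpu": "Iris Pro Graphics 5200", "edram": True,
         "found_in": "47W quad-core H-series laptops, desktop R-suffix chips"},
        {"gpu": "Iris Graphics 5100", "edram": False,
         "found_in": "28W dual-core U-series Ultrabook chips"},
        {"gpu": "HD Graphics 5000", "edram": False,
         "found_in": "15W Ultrabook chips"},
        {"gpu": "HD Graphics 4600/4400/4200 (plus P4700/P4600)", "edram": False,
         "found_in": "mainstream laptop/desktop and workstation parts"},
        {"gpu": "HD Graphics (base)", "edram": False,
         "found_in": "low-end Pentium and Celeron CPUs"},
    ]

    for tier in haswell_gpu_tiers:
        edram = "with eDRAM" if tier["edram"] else "no eDRAM"
        print(f'{tier["gpu"]:<48} {edram:<11} {tier["found_in"]}')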

It's possible that certain form factors will gravitate toward and informally standardize around certain GPUs—13-inch Ultrabooks around the Iris 5100 and 11-inch Ultrabooks around the HD 5000, for instance—but we won't know that for sure until we've gotten a good look at the first crop of Haswell laptops.

We'll be writing more about Haswell as its launch date nears—Intel's teaser page suggests the official launch will happen on June 3, just before Computex—and as we get our first Haswell systems in for review we'll be paying special attention to just how much its various GPUs improve over the old ones.

117 Reader Comments

Still a mainstream solution though. It's not just the raw graphics power - all too often Intel's drivers are of poor quality.

That being said, it doesn't really concern us tech enthusiasts, for whom it will never be as good as a discrete GPU. Compounding the problem, the integrated GPU wastes valuable die space (which could be used for something enthusiasts find useful, such as more cache) and uses power (unlike on Xeons, it cannot be switched off, and it's present on all non-E CPUs), which means more heat output.

For enthusiasts, the integrated GPU will remain of negative value, although I will note that the gains have been impressive compared to Sandy/Ivy Bridge.

Sigh - if only we had better IPC gains than the 10% that Haswell brings over Ivy.

One huge advantage of using Intel's integrated GPU is that the system can make use of QuickSync for certain applications. Currently, one would have to choose between QuickSync and mediocre graphics. With Iris, this might just make the choice a bit easier.

I'm interested in the Dell XPS 18, the 18.4" tablet with a stand and keyboard/mouse. The only thing holding me back is that it is not really a great replacement for my desktop as a gaming machine. Hopefully, the second or third iteration of the tablet will add an i7, 16GB of RAM, and the latest Haswell graphics. Maybe early next year I'll have a tablet that is powerful enough to replace my desktop and still afford me a fair bit of tablet mobility goodness. It's barely bigger than a 17" laptop and about half the weight of one. I just need it to be a desktop monitor that I can pick up and take with me when I want to.

2013 is looking like the year for reusing old trademarks, previously attached to obsolete tech, for new but related tech. First up was AMD/Sony reusing the name "Jaguar" for game console hardware. Now, Intel is reusing the name "Iris" for computer graphics.

I read this and think (no pun intended) this could be the game changer for tablets. The argument against tablets has largely been that a mobile device has no real way to compete with a laptop/desktop on GPU-heavy apps.

A CPU/GPU combo makes more sense in such a small platform and would allow the tablet form factor to run all those uber-awesome x86 programs that currently "crawl" more than they "run".

Quote:

One huge advantage of using Intel's integrated GPU is that the system can make use of QuickSync for certain applications. Currently, one would have to choose between QuickSync and mediocre graphics. With Iris, this might just make the choice a bit easier.

What applications are using QuickSync these days? I know OS X uses it for AirPlay video mirroring, but at some point I remember reading that even Handbrake didn't take advantage of it.

If Microsoft can upgrade the next Surface Pro to use one of these, instead of the current i5 with HD 4000 graphics, how many more people would be interested in them?

Okay, maybe that's a bad example... [grin]

But I am curious to see just how many more games are playable on these devices, because I'm tired of being tied down to a desktop computer or high-end laptop (which needs to be plugged in to keep the discrete GPU running full-tilt) just to play most graphics-intensive games.

It makes me wonder if those Ultrabooks will double as an ironing appliance when you try to run games like Civ 5 or Metro: Last Light on them. I just can't imagine the amount (and lack) of heat dissipation if they're so intent on making them as slim as they possibly can.

Plus, didn't they (Intel) announce a while back that the CPUs with the best IGPs will come in BGA form? That would also make sense with the R moniker, as in Restricted?

Quote:

What applications are using QuickSync these days? I know OS X uses it for AirPlay video mirroring, but at some point I remember reading that even Handbrake didn't take advantage of it.

Not many apps use QuickSync yet. More will if it's put into a platform that lots of people buy. I'm envisioning a day when I drop my 7-inch tablet into my pocket, take it to work, pop it into a dock with dual 24-inch monitors and work through lunch, take it out, grab a burger and check my personal emails, go back to the office, take it over to my boss and review on the fly everything I worked on in the morning while he gives me a rundown of the next thing to add or tweak. Then it's home to lounge on the couch for a Netflix vid while dinner warms up, swap to iTunes for some music during dinner, then pop it into my home dock for a few hours of my favorite MMO. Then take her over to my bed and set her on charge as my alarm clock.

And it's the only PC I used the whole day. If cell phone capability is integrated--or heck, just 4G and I'll use Skype 24/7 as my contact number--it's the only device with a CPU I touch.

Power users have been mentioned. And there's simply no reason--I'll agree--for a desktop or high-end laptop to have a GPU on the die. Those both have plenty of room for the guy who wants 60 frames per second on three monitors with SLI and all the goodies without melting it or putting in a fan that sounds more like a wind-tunnel turbine. But I'm looking at this and thinking: compact might compete.

If JQ Consumer can do all the stuff he wants easily on a tablet and a dock, with monitors and keyboard and mouse sitting where he needs them...why does he need 3 computers in all those places? Why not just leave the dock there, and have 1 tablet PC?

I'm sure this will get downvoted as well, but for the record, I personally think integrating the graphics into the CPU is one of Intel's worse technical decisions - right up there with the Pentium 4 series. The only place it makes sense (debatable) is in the tablet or other small form factor device market. Desktops are generally served better by a discrete GPU, even though the current generation could stand a redesign from the ground up (imho).

You do realize laptop sales eclipsed desktop sales many years ago? Most people don't want their personal computer to be a desktop.

Only a very few people out there care about GPUs faster than Intel's iGPUs, as well. Since Sandy Bridge, Intel's GPUs have actually been respectable on the hardware front (held back by the drivers, but shitty drivers are the norm in the GPU world... nVidia being the only one that employs quasi-competent driver developers), making gaming more accessible to the majority of users out there as well.

Quote:

Most servers only need graphics capabilities during the initial setup and/or during hands-on maintenance. An independent piece of hardware that can be shut down entirely or nearly so, thus reducing power draw and heat generation, is more helpful.

Intel does offer server CPUs without an iGPU. The iGPU is power gated as well... it being on/off die really doesn't make any difference.

Quote:

What applications are using QuickSync these days? I know OS X uses it for AirPlay video mirroring, but at some point I remember reading that even Handbrake didn't take advantage of it.

Handbrake just announced that it will support QuickSync, though not on OS X. According to AnandTech, Intel's Media SDK isn't available on OS X, so I think you are right; there is little support for QuickSync in OS X.

Quote:

Not many apps use QuickSync yet. More will if it's put into a platform that lots of people buy. I'm envisioning a day when I drop my 7-inch tablet into my pocket, take it to work, pop it into a dock with dual 24-inch monitors and work through lunch... But I'm looking at this and thinking: compact might compete.

That would be a great idea, and for better or for worse Apple is very likely to be the one to achieve something like that first. That is, if they deem it worthy to go back to the enterprise environment.

Why am I saying Apple? Because they have the habit of standardizing connection/docking ports (one port to rule them all) instead of committing the infamy of tying a specific, custom, high-speed connection/docking port to a specific model, and/or tweaking the location of said port on the next model, making it incompatible with last year's docking station/lapdock, and then complaining that nobody bought it even though it was a great idea *coughsonycough* *coughmotorolacough* *coughasuscough*

Quote:

If JQ Consumer can do all the stuff he wants easily on a tablet and a dock, with monitors and keyboard and mouse sitting where he needs them...why does he need 3 computers in all those places? Why not just leave the dock there, and have 1 tablet PC?

The aforementioned XPS 18.4 comes oh so close. It needs slightly better parts, and I remember the tech support guy telling me that a docking station won't work, so there's no good way to drive a second monitor (a third would be even better). That said, what you describe is what I want to see from a hardware standpoint. How that thing interacts with the different networks from work to home to lunch is an entirely different issue that Windows 8 doesn't perfectly handle at this moment.

Quote:

This performance comes with a fairly heavy power cost—the H-series quad-core processors that use the Iris Pro 5200 have a 47W TDP compared to the dual-core Iris 5100-equipped U-series CPUs' 28W—but this is to be expected, given that Intel is effectively putting a dedicated GPU on the same die as the CPU.

The bolded part of this paragraph seems a bit out of place even though it uses the term 'effectively'. Intel has a GPU on-die in both the 5100 and 5200 parts; as mentioned elsewhere in the article, the 5200 merely adds eDRAM. Also, a large chunk of that 28W-to-47W difference comes from the move from dual core to quad core. This sentence needs a bit of reworking.

Quote:

But I am curious to see just how many more games are playable on these devices, because I'm tired of being tied down to a desktop computer or high-end laptop... just to play most graphics-intensive games.

There is a very simple solution to your problem: play older games. They will be less demanding and work perfectly fine on the anemic hardware. If you think that built-in GPUs will catch up to dedicated GPUs and the games made for dedicated gaming rigs, then you are in for a wait.

Quote:

That being said, it doesn't really concern us tech enthusiasts, for whom it will never be as good as a discrete GPU. Compounding the problem, the integrated GPU wastes valuable die space (which could be used for something enthusiasts find useful, such as more cache)...

Enthusiasts will most likely get the higher-end unlocked Intel CPUs, which won't have these integrated graphics shenanigans.

Edit: nm, they do, but they'll have the lower-end integrated graphics, which take less die space. *shakes head* What are you doing, Intel?

More good news: nVidia will be sure to quickly outpace these chips. Yay for competition.

nVidia already does. The catch is that nVidia's GPUs all sit in a separate package that consumes additional power, requires more board area, and adds cost. (Also remember that nVidia's GPUs come with extra GDDR5 memory nowadays, which increases the impact of these factors even more.) Those factors make discrete GPUs more of a premium item in laptops today and incredibly rare in Ultrabooks.

Of all these issues, nVidia does have a potential plan to address board space: they'll be incorporating in-package memory with their Volta generation of GPUs, due in 2015.

Quote:

You do realize laptop sales eclipsed desktop sales many years ago? Most people don't want their personal computer to be a desktop.

Another poster already expressed this, but most people want one device, and they want that device to be portable. Intel, ironically, is addressing this possibility with Thunderbolt, which could in the future be used to drive a more powerful graphics solution located in the display (hopefully as an option card). The proverbial "most people" probably don't care about a "discrete GPU," I agree, but they do care about the experience.

Quote:

Intel does offer server CPUs without an iGPU. The iGPU is power gated as well... it being on/off die really doesn't make any difference.

The "iGPU" is still present, albeit snipped and partially still active. The engineering effort, and die space, is wasted on a GPU that you will rarely or never use. There are far more beneficial things you can do with that die space, and better streamlining of the overall architecture in a server or "consoleless" environment. What the on-die GPU really boils down to is trying to keep the marketshare and and the price level. IMHO, Intel will lose that war in the server area over the next decade unless they dramatically change their strategy.

One thing I like about Intel's integrated graphics is that their drivers are open, and I don't have to futz around with driver issues on my budget Linux box. I don't really care about playing the newest and most graphically intensive games, but I like to have the option to play some games at lower settings. I'm also not an expert at Linux by any stretch, so it's nice to keep things simple. As gaming on Linux becomes easier (thanks, Steam!), having integrated graphics removes a lot of the hassle of having a cheap system that does what I want.

I'm not likely to pick one of these up when it is brand new and still expensive, but it'll be nice in a few years when the generation after this comes out.
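For readers who want to verify which driver and GPU a Linux box is actually using, here's a minimal sketch; it assumes Python 3.7+ and the glxinfo utility from the mesa-utils package (an assumption on our part; adjust for your distribution):

    import subprocess

    # Ask glxinfo (from the mesa-utils package) which OpenGL renderer is active.
    # On a Haswell system with the open Intel driver, this should name the Intel GPU.
    output = subprocess.check_output(["glxinfo"], text=True)
    for line in output.splitlines():
        if "OpenGL renderer string" in line or "OpenGL version string" in line:
            print(line.strip())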

Quote:

There is a very simple solution to your problem: play older games. They will be less demanding and work perfectly fine on the anemic hardware. If you think that built-in GPUs will catch up to dedicated GPUs and the games made for dedicated gaming rigs, then you are in for a wait.

I don't mind waiting. I've been seeing this idea since I bought my first smartphone. Before then, actually.

How much longer will the massive desktop--or even the high-end laptop--last if small portable devices continue to be the darling of the PC purchaser?

Here's a quick use case: I used to make a good bit of side money custom-building desktops for gamers and upgrading their equipment. But the local market dried up. I asked one of my previous customers why, and his answer was simple: "I bought a PS3." I knew exactly what that statement meant. The graphics were just fine for him--beautiful, in fact--and he could play online with his friends. And he didn't have to upgrade to maintain any "edge" over the other people he played with or against. Last I heard, he has an EagleEye, so he's still a bit on the performance kick, looking for that edge. But I betcha he's still on the same PS3 all the way here in 2013.

If people don't buy them, companies stop selling them. Then they stop building them... and suddenly developers start coding for what people are actually going to run their software on, and that's not necessarily the best beast out there.

Honestly, even SB was overkill for the average person. If you're talking about someone who wants to play WoW or CoD and doesn't mind turning down the graphics settings, Llano and Trinity are just fine, and significantly cheaper.

Where Intel gets stupid is when they put the high-end iGPU with QuickSync support in the high-end processors... where it's going to get supplanted by a discrete card anyway. It looks like this will be more of the same: a confusing array of artificially limited parts and a naming scheme that seems intended to be impossible to unravel.

And it's hilarious that people are already bitching about the Haswell-E parts when IB-E just dropped a couple weeks ago. I seriously doubt even Intel is stupid enough to try cramming Iris graphics into workstation/server-class processors.

Quote:

Not many apps use QuickSync yet. More will if it's put into a platform that lots of people buy. I'm envisioning a day when I drop my 7-inch tablet into my pocket, take it to work, pop it into a dock with dual 24-inch monitors and work through lunch... But I'm looking at this and thinking: compact might compete.

You started talking about QuickSync and then went on a total tangent ...

I have seen nothing in actual Handbrake development about using QuickSync. All I see is something about Intel open-sourcing their encoding engine. H.264 is handled by libx264, the shared-library version of x264, which doesn't use QuickSync due to the lack of low-level controls. x264 is already as fast as QuickSync at the same quality level. QuickSync isn't really a CPU "feature" as much as a library that Intel's developers extra-optimized for Intel processors; the x264 devs are already able to use AVX2 just like the Intel devs.
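For context, here's a hypothetical sketch of what the two encode paths look like when driven from Python via ffmpeg. It assumes an ffmpeg binary on the PATH; the h264_qsv encoder only exists in builds compiled with Intel's Media SDK (QuickSync) support, while libx264 is the standard CPU encoder:

    import subprocess

    def encode(src: str, dst: str, encoder: str) -> None:
        """Transcode src to H.264 with the given ffmpeg encoder."""
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-c:v", encoder,   # "libx264" = CPU x264; "h264_qsv" = QuickSync
             "-b:v", "5M",      # target bitrate; tune to taste
             dst],
            check=True,
        )

    # CPU path: available in any ffmpeg build that includes libx264.
    encode("input.mkv", "out_x264.mp4", "libx264")
    # QuickSync path: only works on builds and hardware with QSV support.
    encode("input.mkv", "out_qsv.mp4", "h264_qsv")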

Quote:

I'm sure this will get downvoted as well, but for the record, I personally think integrating the graphics into the CPU is one of Intel's worse technical decisions - right up there with the Pentium 4 series. The only place it makes sense (debatable) is in the tablet or other small form factor device market. Desktops are generally served better by a discrete GPU, even though the current generation could stand a redesign from the ground up (imho).

I've built computers for people's kids, parents, grandparents, aunts, uncles, and cousins who don't need a discrete GPU. Lots of people using desktops only do very basic stuff on them and don't need/require anything more (yes, a cheap laptop would probably be better for them, but when you're being paid to build what they asked for, you don't say no).

So no, I don't see it as a mistake because it fits a need that lots of people have.

Quote:

And it's hilarious that people are already bitching about the Haswell-E parts when IB-E just dropped a couple weeks ago. I seriously doubt even Intel is stupid enough to try cramming Iris graphics into workstation/server-class processors.

They might if the ideas behind AMD's hUMA and Fusion take root in the marketplace.

"...called "three screen collage display," which appears to allow you to stretch one "logical" monitor across up to three physical monitors..."Maybe I'm stating the obvious, but would a "...Similar to Nvidia's Surround or AMD's Eyefinity solutions"* added here would be clearer?

Quote:

I've built computers for people's kids, parents, grandparents, aunts, uncles, and cousins who don't need a discrete GPU. Lots of people using desktops only do very basic stuff on them and don't need/require anything more.

One of the biggest markets right under people's noses is the business desktop.