
55 Comments

I would kill for some solid data on how the entire Haswell lineup does at Starcraft 2 (SC2). It is one of the only games I play and I would love to know what FPS I could expect at a variety of resolutions and quality settings.

More notebooks are sold with single-channel memory configurations than you might think. Unfortunately, some reviewers don't run tools like CPU-Z to check whether a machine is running in dual-channel or single-channel mode. Furthermore, Anand's 3DMark 2013 Fire Strike result is 15% slower than that of another HD 5000 MacBook tested by Notebookcheck. With such tests it is also important to run in high-performance mode; there's no info here on whether he did.

Graphics are very bandwidth heavy. The bandwidth difference: single-channel DDR3-1600 is 12.8 gigabytes per second, while dual-channel is 25.6 gigabytes per second. When the graphics are using system memory, every gigabyte counts. Moving from single to dual channel with Ivy Bridge pushed framerates over 60% higher on the same laptop, so the difference with the more powerful Haswell chip will be even more noticeable. And the Yoga 13 is not put through graphically intensive work, hence the reason nobody reviews it as slow.

For reference, the relatively old and low-end AMD HD 6650M has 25.6 gigabytes per second of bandwidth all to itself, and is considered bandwidth starved. It is also similar to the Intel HD 4600 in terms of game performance. And Intel HD Graphics has to share bandwidth with the system, so single channel is dog slow with anything intensive.
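For what it's worth, the 12.8 and 25.6 GB/s figures quoted in these comments fall straight out of the DDR3 arithmetic (transfer rate times a 64-bit channel width); a quick sketch, using decimal gigabytes the way marketing materials do:

```python
def ddr3_bandwidth_gbs(mt_per_s, channels=1, bus_bytes=8):
    """Peak theoretical bandwidth in GB/s:
    megatransfers/sec * 8-byte (64-bit) channel width * number of channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

single = ddr3_bandwidth_gbs(1600, channels=1)  # DDR3-1600, one channel  -> 12.8
dual = ddr3_bandwidth_gbs(1600, channels=2)    # DDR3-1600, two channels -> 25.6
print(single, dual)
```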

ASUS is cramming one of those into a relatively slim 13.3" laptop. I'm excited to see how it does.

The rMBP13'13 will probably have something similar, but its chassis can handle 35W CPUs. So I'm hesitant to think that Apple will give up 7W (and whatever the PCH puts out). Does anyone else smell a custom 35W GT3 chip?

This. Notice that less CPU-intensive games see a much higher performance boost, since the CPU cores don't have to boost as high, while more demanding games don't speed up much at all. The high temperatures also impacted the boost clocks, something else the MacBook Pro should remedy.

So in other words there is very little reason for OEMs to pay the $50 extra for the HD 5000 instead of the HD 4400, unless they use cTDP-up? cTDP-up would raise the maximum TDP, giving the graphics more headroom to hit the higher turbo clocks.

Then again, if the OEMs were okay with the better cooling needed for the higher-cTDP chips, why wouldn't they just go for the chips with a base TDP of 28 watts instead of 15? You can even get Iris 5100 in the 28W chips, and use the Iris branding.

Very cool, definitely good for comparisons to Ivy Bridge. I'd really like to see those same game benchmarks for Haswell GT2 graphics as well. I'd like a Haswell convertible but I want to see how much an upgrade from HD4400 to HD5000 nets me. Thanks!

The HD5000's underwhelming performance boost really is interesting because that higher price tag seems to be doing very very little, and Apple isn't the type to cut into its profit margins just for the hell of it.

Anand, do you know if compute/OpenCL benchmarks perform any differently?

Again, the main problem with Intel graphics is drivers. Intel tends to stop development of previous-generation graphics drivers once a new graphics architecture is out, which is scheduled to appear with the Broadwell SoC. This wouldn't be much of a problem for Apple, since they develop their own graphics drivers. Sometimes I wonder if this means Apple gets a much heftier discount from Intel, since Intel spends fewer resources on the Mac platform.

Given the performance of the HD 5000, I understand why Apple doesn't want to make a Retina Air.

In the article you often refer to these chips having "less thermal headroom". I'd rather say they are power constrained: attach a better cooler (not easy in an Ultrabook, but possible) and these chips won't perform any better, because they're already using their full 15 W under load. If the chips were thermally limited you should hear the fan screaming, which you didn't, as I understand from the article.

BTW: it would be really nice to see measured power draw while running these benchmarks as well. This would make Haswell look even better compared to Ivy. And average clock speeds could also reveal some more. Maybe the HD 5000 has to clock so low in that 15 W config that the voltage already hits the absolute minimum, and further scaling down couldn't improve efficiency over the HD 4400. For this one would need to read out the voltages, or at least know the frequency-voltage curves of these GPUs. Would be nice if you could do either of these :)

It's primarily for the lower power required. With more EUs, they can run at lower clock speeds and voltages to perform just as well with less power used. In the 28W versions (presumably headed for the 13" MBP) we'll see how the GT3 can really perform when power is less of a consideration.
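The "wide and slow" argument above can be illustrated with the textbook dynamic-power relation P ∝ n·C·V²·f (throughput scales with n·f, but power scales with the square of voltage, which drops as clocks drop). The unit counts and voltages below are made-up round numbers for illustration, not real Haswell values:

```python
# Illustrative only: why doubling EUs and halving clocks can save power.
def dynamic_power(n_units, voltage, freq_ghz, c=1.0):
    """Dynamic power ~ n_units * C * V^2 * f (arbitrary units)."""
    return n_units * c * voltage**2 * freq_ghz

def throughput(n_units, freq_ghz):
    """Peak throughput ~ n_units * f (arbitrary units)."""
    return n_units * freq_ghz

# GT2-like: 20 EUs at 1.0 GHz, needing a hypothetical 1.0 V.
# GT3-like: 40 EUs at 0.5 GHz; the lower clock permits a lower voltage, say 0.8 V.
p_narrow = dynamic_power(20, 1.0, 1.0)
p_wide = dynamic_power(40, 0.8, 0.5)
assert throughput(20, 1.0) == throughput(40, 0.5)  # same peak throughput
print(p_narrow, p_wide)  # the wide/slow config draws noticeably less power
```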

The MacBook Pros, at least the 15-inch, will use quad-core processors that aren't single-chip parts; the 13.3-inch version also uses 35W chips today. They could bump that one to the 47W GT3e part if they wanted performance, as that would put it roughly slightly under the old 15.6-inch Pros in graphics performance. You just have to wait and see what the refreshes and new models bring when it comes to Haswell, Apple or not. Lots of PCs simply use the dual-core GT2 part, for example. The single-chip ULT parts don't have any external PCIe links for a GPU. I don't think HD 5000 and Iris 5100 graphics really matter that much either; the 28W parts are mostly about CPU performance. All depends on where they want to take it.

Honest question, not trying to troll: is there a purpose to better graphics outside of gaming or professional applications? Have we already reached a baseline of UI acceleration for common office/browsing/content-consumption tasks? Basically, if I'm not running Crysis or Photoshop, should I care? Will I notice anything?

You will notice it when playing high-res videos on the web and also when scrolling through heavy websites (if the browser makes good use of the GPU). By far the biggest purpose is for higher-resolution "retina-like" screens. And all this is just for the "regular" office/web user. Like you said, gamers and professionals will also like having more graphics performance in a more portable form factor. It's a great win/win for everyone.

I think we have reached that baseline of UI acceleration. Intel's baseline integrated graphics is now the HD; it suffices for everything aside from gaming. I'm using it in a Celeron 847 Windows box connected to my 1080p TV. CPU usage can be a bit high when streaming HD content from services like Netflix, but I very rarely see a skipped frame, and with an SSD, performance is snappy.

"increasing processor graphics performance in thermally limited conditions is very tough, particularly without a process shrink. The fact that Intel even spent as many transistors as it did just to improve GPU performance tells us a lot about Intel's thinking these days. "

As always on the internet, the game fanatics completely miss the point when they think this is all about them. Intel doesn't give a damn about game players (except to the extent that it can sell them insanely overpriced K-series devices which they will then destroy in overclocking experiments --- a great business model, but with a very small pool of suckers who are buying).

What Intel cares about is following Apple's lead. (Not just because it sells a lot of chips to Apple but because Apple has established over the past few years that it has a better overall view of where computing is going than anyone else, or to put it differently, where it goes everyone else will follow a year or two later.)

So what does Apple want? It's been pretty obvious, since at least the first iPhone, how Apple sees the future. It was obvious in the way the iPhone compositing system works, with the "layer" (i.e. a piece of backing store representing a view, some *fragment* of a window) rather than the window as the basic unit. The whole point of layers is that they allow the graphics heavy lifting to move beyond JUST compositing (i.e. the CPU draws each window, which the GPU then composites together) to all drawing.

We've seen this grow over the years. Apple has moved more and more of OSX (subject to the usual backward compatibility constraints and slowdowns) to the same layering model, for example they've given us a new scrolling model for 10.9 which allows for smooth ("as butter????") scrolling which is not constrained by the CPU.

So step 1 was move as much of the basic graphics (blitting, compositing, scaling) to the GPU.

But there is a step 2, which became obvious a year or so later: moving as much computation as makes sense to the GPU, namely OpenCL. Apple has pushed OpenCL more and more over the years, and they're not just talking the talk. Again, part of what happens in 10.9 is that large parts of CIFilter (Apple's generic image manipulation toolbox, very cleverly engineered) move to run on the GPU rather than the CPU. Likewise, Apple is building more "game physics" into both the OS (with UI elements that behave more like real-world matter) and as optimized routines available for game developers (and presumably ingenious developers who are not developing games, but who can see a way in which things like collision detection could improve their UIs). I assume most of these physics engines are running on the GPU.

Point is: the Haswell GPU may well not be twice as large in order to run GRAPHICS better; it's twice as large in order to be a substantially better OpenCL target. Along the same lines, it's possible that the A7 will come with a GPU which does not seem like a massive improvement over its predecessor and the competition insofar as traditional graphics tasks go, but is again a substantially better OpenCL target.

(I also suspect that, if they haven't done so already, it will contain a HW cell dedicated to converting to or from sRGB and/or ICC color spaces. Apple seems to be pushing really hard for people to perform their image manipulation in one of these two spaces, so that linear operations have linear effects. This is the kind of subtle thing that Apple tends to do, which won't have an obviously dramatic effect, not enough to be a headline feature of future HW or SW, but will, in the background, make all manipulated imagery on Apple devices just look that much better going forward. And once they have a dedicated HW cell doing the work, they can do this everywhere, without much concern for the time or energy costs.)

I'm not sure I follow. GT3 ULV is twice as big so that it can be clocked lower; the two cancel parts of each other out. The OpenCL performance, then, won't increase any more than game performance, as far as I can see.

A better OpenCL target doesn't just mean faster. It means better handling of conditionals for example. And there are plenty of things in OpenCL 1.2 which could be done inefficiently with older HW, but much more efficiently with appropriately targeted hardware.

This is my point. Anand et al are assuming all that extra HW exists purely to speed up OpenGL, and that it does basically a lousy job of it. I'm suggesting that most of the HW is there to speed up OpenCL, and it won't appear in benchmarks which don't test OpenCL appropriately.

I see. I suspected that was the route Apple was taking years ago, but so far the GPU acceleration is nothing exotic. Perhaps they've been laying a multi-year foundation for some big upgrade; you never know. I think they broke the chain of OpenCL-compatible GPUs one time though, going from the 320M to the HD 3000, if I'm not mistaken. It would be a bummer if the newer HD 3000 MacBooks couldn't get the upgrade while the older 320M ones were fine.

Really? That's what I was doing? I thought I was explaining why a doubling in the area of the GPU didn't appear to result in a commensurate improvement in OpenGL performance; along with some business background as to WHY Intel is ramping up the OpenCL rather than the OpenGL performance.

If you have an alternative explanation for the performance characteristics we're seeing, I think we'd all like to hear it.

Well, I believe name99 makes some great points. OpenCL is gaining traction not just because it accelerates massively-parallel exotic scientific algorithms which the general user would never use, but also because Apple is leveraging it to accelerate everyday operations of the OS which the general user would use constantly.

Sales data shows that people are opting to purchase more and more mobile devices, and they want better battery life and decent performance. A discrete GPU, although powerful, cannot deliver the "better battery life" part of that equation, so Intel has a big win if they can improve their IGP to the point where it can deliver, say, 80% of the performance of a mid-range discrete GPU at 20% of the power cost. That makes sense to me.

Gamers will still only be satisfied with their desktop machines and discrete GPUs, no change there, but that is not the target Intel is intending to go after with their IGP efforts.

So if I do not game and mainly use a laptop for editing RAW pics and watching 1080p videos, is the HD 5000 overkill? Other uses: web browsing ADD (I usually have about 10 tabs open, 5 articles and 5 loading YouTube). I'm asking because I'm considering the VAIO Duo 13 but not sure if the HD 5000 is better for my needs or not.

Wow - Incredibly helpful. I was actually thinking about changing my 4400 (s7) for a 5200 (gs30). Seems like that wouldn't do much.

I think we are taking this thin-and-light stuff too far. I don't want a 10 lb gaming laptop, but I also don't need a 2 lb, 11-inch, half-inch-thick laptop that almost fits in my pants pocket. Why can't we get a high-end, 15-inch, 5 lb brick with a 100 Wh battery and some decent thermals? Put a 5557U in it for "good enough" CPU performance and the ability to play a few games with its Iris 6100. Probably could get 10+ hours off a charge.