As much as I like the idea of decent Skyrim framerates on every laptop, and even though I find the HD4000 graphics an interesting read, I couldn't care less about it in my desktop. Gamers will not put up with integrated graphics - even this good - unless they're on a tight budget, in which case they'll just get Llano anyway, or wait for Trinity. As for IVB, why can't we have a Pentium III sized option without IGP, or get 6 cores and no IGP?

Strategy: they're using their lead in CPUs to bundle it with a GPU whether you want it or not. When you take your gamer card out of your gamer machine it'll still have an Intel IGP for all your other uses (or for your family or the second-hand market or whatever); that's one sale they "stole" from AMD/nVidia's low end. Having a separate graphics card is becoming a niche market for gamers. That's better for Intel than lowering the expectation that a "premium" CPU costs $300; if you bring the price down it's always much harder to raise it again...

There is actually a good reason for both AMD and Intel to keep a GPU on their CPUs no matter what. That reason is OpenCV. This move makes the assumption that OpenCV or programming languages like it will eventually become mainstream. With a GPU coupled to every CPU, it saves developers from writing two sets of code to deal with different platforms.

OpenCV is Open Source Computer Vision and runs either way. I think you're talking about OpenCL (Open Computing Language), and even that runs fine without a GPU. OpenCL can use all cores, CPU + GPU, and does not require separate code bases.

Maybe we could actually see some hard numbers before heaping so much praise on Trinity??

I will be convinced about the claims of 50% IGP improvements when I see them, and also they need to make a lot of improvements to Bulldozer, especially in power consumption, before it is a competitive CPU. I hope it turns out to be all the AMD fans are claiming, but we will see.

These are the frames per second for each CPU, but you can't really judge gaming from them since it's mostly determined by whatever video card they used in this article, so I would have to guess that they might have used a different GPU in each system.

Obviously they can't use the same motherboard for AMD vs. Intel.

Also, I find it weird that other reviews have rated the Phenom II X6 lower in performance than the FX chip; it makes it weird that this review claims the Phenom II, which is the lower grade CPU, is more powerful than AMD's top of the line product.

So the HD4000 IGP is weaker than last-gen Brazos?? Based on the leaked Trinity benchmarks, Trinity blows any Intel IGP into the weeds, never mind the (already 1.5-year-old) Brazos, which is 'only' 5% faster.

For anyone like me who already has a Sandy Bridge quad core (mine's a 2600K) it wouldn't make a lot of sense to "upgrade" to an Ivy Bridge. But for those with older systems looking to upgrade, these actually seem like pretty good deals. At $313 the 3770K is cheaper than the 2700K and cheaper than the typical price on a 2600K (unless like me you are lucky enough to live near a Micro Center).

As to those complaining about graphics, come on. Will anyone who really cares a lot about graphics, particularly gaming, be using the onboard graphics anyway?

...but really, they already have large quantities of benchmarks of the 2600K and the difference between that and 2700K is going to be relatively meaningless...my guess, not worth the trouble of running the whole suite of benchmarks on it. To my knowledge, the only difference is 100 MHz of clock speed, right?

I see them just killing off the 13" MacBook Pro entirely, and upgrading the SSD base size to 256GB. There's little reason for the Pro to live on anymore when the Air is far superior in everything except for CPU.

That would be a shame. Even without the optical drive they could differentiate the 13" Pro from the Air with the mentioned 35W quad core CPU, or a discrete GPU in all the space they saved from ditching the ODD, and have more space for battery to offset it.

I've seen so many negative comments like this about Ivy Bridge all over the web... so I'll respond to them all here:

I'd rather have higher energy efficiency and stability (and less noise), which come with running at stock voltage/clock speeds. I am not alone there... Plus, with turbo boost, why bother overclocking? When it has the thermal headroom, it boosts the clock...

If you think that you can find a significantly better sweet spot of performance, power consumption, and reliability/lifespan than the average high-end (K) CPU from Intel, then I have to call BS on that. If you agree that Intel knows their CPUs better than you, but are claiming they purposely under-clock them (the highest clocked models), then I would ask why.

I will grant that, given significantly more exotic/larger/louder coolers, maybe you can dissipate more heat than the processors were designed to dissipate, and you have some headroom to raise the voltage, which may yield higher stable frequencies. But keep in mind dynamic power is roughly proportional to frequency times voltage^2, so energy efficiency goes out the window real fast...
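To put rough numbers on that scaling argument: a minimal sketch of the classic dynamic-power model P ≈ C·f·V². The capacitance constant and the clock/voltage pairs below are made up purely for illustration, not measured Ivy/Sandy Bridge values:

```python
# Rough dynamic-power model: P ~ C * f * V^2.
# C and the clock/voltage figures are illustrative, not real chip data.

def dynamic_power(freq_ghz, volts, c=10.0):
    """Relative dynamic power for a given frequency (GHz) and voltage (V)."""
    return c * freq_ghz * volts ** 2

stock = dynamic_power(3.5, 1.05)   # hypothetical stock operating point
oc = dynamic_power(4.5, 1.30)      # hypothetical over-volted overclock

print(f"stock: {stock:.1f} (relative W)")
print(f"OC:    {oc:.1f} (relative W)")
print(f"+{(4.5 / 3.5 - 1) * 100:.0f}% clock costs "
      f"+{(oc / stock - 1) * 100:.0f}% power")
```

Under these assumed numbers, a ~29% clock bump paid for with a voltage increase roughly doubles the dynamic power, which is the "efficiency goes out the window" effect in the comment above.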

I'm not saying no one should play around with their CPUs; have fun... but to say Ivy Bridge is pointless just because you can't over-volt/over-clock it to the same extent as Sandy Bridge is foolish when it's clearly a significant step forward, and the best solution for the vast majority of x86 PCs.

For those of us who want an energy efficient, high performance computer, some variant of Ivy Bridge will be the best option, probably until Haswell in 2013.

But it's really not that much more efficient. In fact at idle, the difference is negligible by all benchmarks I've seen, which for most users is the most important area. At full load, a difference of 25 watts is practically nothing given how little time most CPUs spend at that load. It really shouldn't affect power supply sizing either. I'm just talking about desktop usage here; we don't really have a good look at the notebook chips yet.

I'm not sure what Intel thinks it's pulling by calling this "tick+" instead of a regular "tick". I don't see anything here that's really that appealing. If it helps Intel "debug the process node for 22nm", that's great for Intel, but it doesn't sway me as a consumer to buy one.

Idle power quickly hits a limit based on everything else in the system. It's hardly surprising that the highly efficient Sandy Bridge does just as well at idle as Ivy Bridge. Power gating allows Intel to basically shut down everything that's not being used, and the result is that low loads have pretty much hit their limits. What's impressive is the big drop in power use under heavier loads.

As for the "tick+", it's all in the GPU. They went from DX10 and 12 EUs in SNB to DX11 and 16 EUs in IVB. As a percentage of total die size, the GPU in IVB is much larger than the one in SNB. But as Anand notes, we still would have liked more (e.g. a 24 EU GT3 would have been awesome for IGP performance).

Marginal improvement over Sandy Bridge that can be compensated for by SB's significantly better overclocking ability. Why so much praise for the IGP when 95% of desktop users don't care AND it's still vastly inferior to Llano? With so much focus on mobile, I'm not even sure why they bothered releasing a high-end desktop SKU.

Isn't the net OC performance roughly a wash? You're losing ~10% off the top in clock speed, but getting it back by the CPU doing ~10% more per clock.

I'm curious what the power gap for the OCed IB is vs SB. For a system kept running at full load, the stock power gap would give a decent amount of yearly savings on the utility bills. If the gap opens even more under OC it'd be a decent upgrade for anyone running CPU farms.
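Back-of-the-envelope numbers for both points in the comment above: the "rough wash" from trading ~10% clock for ~10% per-clock gains, and what a stock power gap could save on a machine at full load year-round. The 25W gap matches the figure mentioned earlier in the thread; the electricity rate is an assumption for illustration:

```python
# Net OC performance: ~10% lower peak clock, ~10% more work per clock.
net = 0.90 * 1.10
print(f"net performance vs. an OCed SB: {net:.2f}x")  # roughly a wash

# Yearly savings for a box pinned at full load 24/7, assuming a 25 W
# full-load gap and an illustrative $0.12/kWh electricity rate.
watts_saved = 25
hours_per_year = 24 * 365
kwh_saved = watts_saved * hours_per_year / 1000
print(f"~{kwh_saved:.0f} kWh/year, ~${kwh_saved * 0.12:.0f}/year per machine")
```

Per machine the dollar figure is small, but multiplied across a CPU farm (plus the matching reduction in cooling load) it starts to matter.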

I don't have a comment on this Ivy Bridge review itself, since it's thorough as always from AnandTech and Ivy Bridge seems pretty much what was expected. I do want to suggest a new benchmark for the eventual OpenCL followup when Intel releases new GPU drivers. As AMD mentioned as part of their HD7000 series launch, WinZip 16.5 has finally been released with OpenCL acceleration built in collaboration with AMD. Since fluid simulations won't be a common use case for most consumers, and video encoding seems better suited to fixed-function hardware like QuickSync, this OpenCL-accelerated file compression/decompression will probably be the first and most popular consumer use of GPGPU. It'll be interesting to see how much of a benefit GPU acceleration brings, and whether AMD's collaboration results in better performance from AMD GPUs compared to Intel and nVidia GPUs than raw hardware specs would suggest.

Other interesting tests: Is the acceleration more pronounced with a single 1GB compressed file versus many compressed files adding up to 1GB? How well does acceleration scale across GPU performance classes, and is it bottlenecked by PCIe bandwidth, CPU setup time, system memory transfers, or (more likely) HDD I/O? Do tightly coupled CPU/GPUs like Llano and Ivy Bridge give a performance advantage compared to otherwise similarly specced discrete GPUs? Is GPU acceleration worthwhile on older GPU generations like the AMD HD5000/6000 and nVidia 8000/9000/100/200 series, which aren't as compute-optimized as the latest AMD HD7000 and nVidia 400/500 series? Does WinZip 16.5 support the HD4000, which is OpenCL 1.0, or does it require OpenCL 1.1? And does WinZip 16.5 use OpenCL to help improve performance scaling on high core count CPUs (8 or more cores)?

If GPU accelerated file compression/decompression is effective, hopefully Microsoft and Apple will consider adding it to their native OS .zip handler.
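There's no WinZip/OpenCL path to show here, but a CPU-side sketch with Python's standard zlib module illustrates the kind of compress/decompress throughput measurement such a benchmark would report. The test data is synthetic and highly compressible; a real benchmark would use a mixed file corpus:

```python
import time
import zlib

# Synthetic, highly compressible test data (~17 MB).
data = (b"AnandTech Ivy Bridge review comments " * 1024) * 450

start = time.perf_counter()
packed = zlib.compress(data, level=6)   # DEFLATE, default-ish level
elapsed = time.perf_counter() - start

assert zlib.decompress(packed) == data  # round-trip sanity check

mb = len(data) / 1e6
print(f"{mb:.0f} MB -> {len(packed) / 1e6:.2f} MB "
      f"in {elapsed:.3f}s ({mb / elapsed:.0f} MB/s compress)")
```

Running the same measurement on a GPU-accelerated path and dividing the two throughputs would give exactly the "how much benefit" number the comment above asks for, and timing the disk reads separately would reveal whether HDD I/O is the real bottleneck.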

The graphics engine still cannot support 10-bit per color IPS displays, as can be found on quality modern laptops from Dell and HP. That means that one is forced to get an overpriced mobile video card from ATI or NVidia to compensate, lowering the laptop's battery runtime by requiring an external card to be used with these displays. On non-IPS displays, one can choose to use the Intel built-in graphics engine to save battery life. No such choice on high quality IPS displays, since they are incompatible with the graphics engine of even Ivy Bridge.

The workstation class laptops you are referring to are only offered with discrete graphics cards. No other machine has a 10-bit IPS panel. There is zero sense in Dell or HP offering a machine aimed at professionals doing 3D modeling/CAD/video editing/etc. without also putting the graphics horsepower in the laptop to support it.

While I personally would love the option of getting a machine with the awesome panels that those notebooks use, without also paying for the $$$$ Quadro cards that pros need, neither Dell nor HP offer anything like that.

Maybe because people who prefer the IPS screen would also like to have support for graphics switching, to get nice battery life while not doing anything GPU-intensive. This was the one thing I expected from the Ivy Bridge upgrade, and NADA.

I didn't notice that issue. 23.976*1000 = 23976 frames, 24 * 1000 = 24000 frames, in 16 mins 40 secs. So that's about one second of mismatch for every 1000 seconds. I could not notice this discrepancy while playing a Blu Ray on my PC. Could you?
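The arithmetic in the comment above, spelled out: if "23.976 fps" content (really 24000/1001 fps) is played back at a flat 24 fps, the drift works out to one second per 1000 seconds, which the player hides by repeating a frame roughly every 42 seconds. Pure arithmetic, no assumptions beyond the two frame rates:

```python
content_fps = 24000 / 1001   # "23.976" is exactly 24000/1001
display_fps = 24.0

# Drift: the display consumes frames slightly faster than the content
# supplies them, about one second of mismatch per 1000 seconds played.
drift_per_1000s = (display_fps - content_fps) * 1000 / content_fps
print(f"drift: {drift_per_1000s:.3f} s per 1000 s")

# To stay in sync, one frame must be repeated every 1/(24 - 23.976...) s,
# each repeat being a 1/24 s (~42 ms) hitch.
repeat_interval = 1 / (display_fps - content_fps)
print(f"duplicated frame every ~{repeat_interval:.1f} s")
```

So the later comments' "every 40 seconds or so" and "0.04 second hitch" figures both fall straight out of the 24/1001 fps difference.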

Okay, well, I'm pretty sure that you would notice two seconds of discrepancy between audio and video after half an hour of viewing, or four seconds after an hour, or eight seconds by the end of a two-hour movie.

However, the issue is actually more like having a duplicated frame every 40 seconds or so, causing a visible stutter, which seems like it would be really obnoxious if you started seeing it. I don't use the on-board SB video, so I can't speak to it, but clearly it is an issue for many people.

I watch Hulu and Netflix streams on a regular basis. They do far more than "stutter" one frame out of every 960. And yet, I'm fine with their quality and so are millions of other viewers. I think the crowd that really gets irritated by the 23.976 FPS problems is diminishingly small. Losing A/V sync would be a horrible problem, but AFAIK that's not what happens, so really it's just a little 0.04 second "hitch" every 40 seconds.

Well, I can certainly appreciate that argument; I don't really use either of those services, but I know from experience they can be glitchy. On the other hand, if I'm watching a DVD (or <ahem> some other video file <ahem>) and it skips even a little bit, I know that I will notice it and usually it drives me nuts.

I'm not saying that it's a good (or, for that matter, bad) thing that I react that way, and I know that most people would think that I was being overly sensitive (which is cool, I guess, but people ARE different from one another). The point is, if the movie stutters every 40 seconds, there are definitely people who will notice. They will especially notice if everything else about the viewing experience is great. And I think it's understandable if they are disappointed at a not insignificant flaw in what is otherwise a good product.

Now, if my math is right, it sounds like they've really got the problem down to once every six-and-a-half minutes, rather than every 40 seconds. You know, for me, I could probably live with that in an HTPC. But I certainly wouldn't presume to speak for everyone.

Looks like my concerns a few years ago with Intel's decision to go on-package and eventually on-die GPU were well warranted.

It seems as if Intel will be focusing much of the benefits from smaller process nodes toward improving GPU performance rather than CPU performance with that additional transistor budget and power saving.

I guess we will have to wait for IVB-E before we get a real significant jump in performance in the CPU segment, but I'm really not that optimistic at this point.

It's interesting to me that this article doesn't include any temperature measurements. I have been hearing that Ivy Bridge has temperature issues. Could you update the article with those numbers? I'm aware that the article on undervolting and overclocking has some numbers, but none at stock voltage and clocks as far as I know.

The issue I see is in the future, people will look for this review, and not know to look in another article for temp readings.

And I know the other article did have a temp graph, but it did not have the bar graph comparing it to other CPUs like we are used to. I actually have no clue how it compares to a SNB chip after reading this article in terms of temperatures. It would be great to have that information, as it aids in building a new system.

OK... so the A8-3870K beats it in almost every gaming benchmark and they are marketing the HD4000... pretty bad for Intel. Trinity will completely destroy Ivy Bridge then, it seems. Every generation, one company is slacking off behind the other... it's always like that. Next year, Intel will take the crown... then the year after it will be AMD... and so on.

Not so sure about that actually. I think they're going to fork in two different directions, with Intel being your high compute power desktop friendly option, and AMD being the go-to for laptop, notebook, and ultrabook-esque form factors. Unless Trinity mucks up big time, AMD will have the IGP thing down for a while.

I think you're wrong on the "AMD being the go-to for laptop..." part. AMD will be the go-to option for people that want an inexpensive laptop with better IGP performance. As I note in the mobile IVB article, mobile Llano GPU performance isn't nearly as impressive relative to IVB as on the desktop. Anyway, AMD will continue to lead on IGP performance with Trinity I'm sure, but there are very large numbers of laptop users that don't even play games. Of course, the highest selling laptops are still going to be the least expensive laptops.

"Note that max TDP for Ivy Bridge on the desktop has been reduced from 95W down to 77W thanks to Intel's 22nm process. The power savings do roughly follow that 18W decrease in TDP. Despite the power reduction, you may see 95W labels on boxes and OEMs are still asked to design for 95W as Ivy Bridge platforms can accept both 77W IVB and 95W Sandy Bridge parts."

I would like to start using quicksync, but 2 mbps for a tablet is way too much for me. I just want to quickly take a video and transcode it. There is nothing quick about copying a 1+ gigabyte file onto a tablet or phone. It does no good to be able to transcode faster than you can even copy it LOL. Can quicksync go lower? I want no more than 800 kbps,400-600 ideally.

Also, is it possible to transcode and copy at the same time? Is anyone doing that?
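The file-size math behind the complaint above: at a constant bitrate, output size is just bitrate × duration ÷ 8, so dropping from 2 Mbps to the 400-800 kbps range the commenter wants shrinks a long video from well over a gigabyte to a few hundred megabytes. The two-hour duration below is an assumed example:

```python
def file_size_mb(bitrate_kbps, minutes):
    """Approximate video size in MB: bits/sec * seconds / 8 bits-per-byte."""
    return bitrate_kbps * 1000 * minutes * 60 / 8 / 1e6

# Illustrative: a 120-minute video at the bitrates from the comment.
for kbps in (2000, 800, 600, 400):
    print(f"{kbps:>4} kbps, 120 min -> {file_size_mb(kbps, 120):.0f} MB")
```

At 2000 kbps that's about 1.8 GB versus roughly 360-720 MB at 400-800 kbps, which is why the copy to the tablet, not the transcode, dominates the total time at the higher setting.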

When you mention "2 mbps," I think you are referring to the bitrate, which is generally synonymous with the quality of the encoding.

"It does no good to be able to transcode faster than you can even copy" <---I think this is completely false. The transcoding is a separate file conversion step that creates the final version which you will move to your device. Your machine won't even start copying until transcoding is complete, which means that every little bit of speed you can add to the transcoding process will directly reduce the amount of time it takes to get your file on your device.

I have a feeling that the real reason is that, if business users could get those features on a K-series processor, it would largely obviate the need/demand for SB-E. A 2600K/2700K overclocked up to, say, 4.5 GHz--which seems consistently achievable, even conservative--would compare very favorably to the 3930K, given the prices of both.

Yes, I know you can overclock the 3930K, and yes, I know it has six cores and four memory controllers and more cache. But I bet that overclocked SB or IB with VT-d, &c., would make a lot of sense for a lot of applications, given price/performance considerations.

I'd be very interested in seeing overclocked 2500K and 2600K benchmarks tossed in, because let's be honest, one of those is the most popular CPU at the high end right now, and anyone with one has bumped it to at least 4.3GHz, often about 4.4-4.5.

I think it would be nice to have a visual aid to see how that fares, but I understand the impracticality of doing so.

Thank you for including this section, it is great. I think it would be more relevant for people though if it were a much smaller test. I think pretty much anyone is going to know that a project of that size is going to be faster with more cores and speed. What isn't so obvious though are smaller projects, where you are compiling only a few files and debugging. A typical cycle for almost all developers is: making changes, compiling, debugging to test them out. Even though you are only talking times of a few seconds, add this up to 100s-1000s of iterations per day and it makes a difference. I base my entire computer hardware selection around this workflow. For now I use the single threaded benchmarks you post as a guide.

The features table has put me in a great dilemma. I'm very much interested in running multiple virtual machines on my desktop, for debugging and testing purposes. Although I won't be running these virtual boxes 24x7, it would be great to have processor support for any kind of hardware acceleration that I can get, whenever I fire up these for testing. On the other hand, ability to overclock the K series processor is really tempting, and yes, a decent/modest overclock of say, 4.2-4.5GHz sounds lovely for 24x7 use.

Anyone using SNB/Intel processors with VT-d care to share whether it's worth going for a non-K processor to get better virtualization performance? To be more clear, my primary job involves web-application development with UX development, for which I require varied testing under different browsers. Currently I've set up 4 different virtual machines on my desktop with different browsers installed on different Windows OS versions. These machines will never run 24x7 and never all at once (max 2 at once when testing). Apart from that, I also do a lot of photo editing (RAW files, Lightroom and the works) and a bit of video editing/encoding on my desktop, mostly personal projects, rarely commercial work. Is it better to opt for the 3770 for better virtual machine performance, or the 3770K for the chance to boost overall performance by overclocking?

At the moment, VT-d will not give you any additional performance on your VMs using desktop virtualization programs like VMware Workstation or VirtualBox. Neither supports VT-d right now. Based on progress this year, I expect VT-d support is still a year away in VirtualBox, which is what I use.

VT-d doesn't help performance in general; instead, VT-d allows VMs to directly access computer hardware. This is essential for high performance networking on servers or for accessing certain hardware like sound cards where low latency is crucial. For your workload, the only advantage will be slightly higher network speeds using native drivers versus a bridged connection. It may facilitate testing GPU accelerated browsers in the future as well.

Really annoying how Intel decides seemingly at random which parts get VT-d and which don't. Why do you get it with the $174 i5 3450, but not with the "one CPU to rule them all", everything-but-the-kitchen-sink, $313 i7 3770K? It's also a stupid way to segment your product line, since 99% of the people buying systems with these CPUs won't even know what it does.

This means AMD also gets some of my money when I upgrade - I'll just build a cheap Bulldozer system for my virtualization needs. I can't really use my Phenom II X4 for that after upgrading - it uses too much power and it's dependent on DDR2 RAM, which is hard to find and expensive.

VT-d is required to support Intel's Trusted Execution Technology, which is used by many OEMs to provide business management tools. That's why the low-end CPUs have support and the enthusiast SKUs do not. VT-d provides no benefit to desktop users right now because desktop virtualization packages do not support it.

I agree that it is frustrating having to sacrifice future-proofing for overclocking, but Intel's logic kind of makes sense. Remember, any features that can be disabled will increase yields which means lower prices (or higher margins).

VirtualBox, which is one of the most popular desktop virtualization packages, does require hardware virtualization for 64-bit guests and for guests with more than one virtual CPU - but that requirement is VT-x (CPU virtualization), not VT-d (directed I/O), which is the feature Intel is fusing off here.

Does VT-d really use so many transistors that disabling it increases yields? AMD keeps their hardware virtualization features enabled even in their lowest-end CPUs (even those where entire cores have been disabled to increase yields).

They can say they're just kidding and used it as an example, because they would "never" actually do that. I think pirate cops would need more than talk to go to court. Imagine how bad this site would rip into them if they said anything, lol.

Sure, nobody loses any money, but the entertainment industry pushed DMCA through, and they will use it if they think they could get any profit out of it. It's one law, out of many, that isn't there to protect anyone. It's there so the MPAA and RIAA can screw people over.

It would be neat to see older CPUs in these benchmarks. It's always a pet peeve of mine that these reviews only compare new CPUs against the previous generation and not 2-3 generations back.

Most people do NOT upgrade with every single CPU release; most people upgrade their rigs every 2-3 years, I'm guessing. For example, I'm running a Core i7 930 and it's very fast already. I want to upgrade to Ivy and will either way, but I'd love to see how much faster I can expect the Ivy to be compared to the ol' 930/920 which tons of people have.

In my opinion, going back 2-3 generations is the ideal thing that people want to compare to. No one will upgrade from Sandy Bridge (unless rich and a little stupid), but a lot of people will upgrade from the original 920 era, which is a few years old now.

"One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a lower class citizen."

Well, as long as Intel treats their igp as the bastard red-headed step child then I am sure that developers will too.

If they would actually put the HD3000/4000 into the mainstream parts, developers might pay attention to it. If I were a game developer, why would I pay attention to the HD2000/2500, which isn't really capable of playing crap and is the mainstream Intel IGP? If I were a game developer, I would know that anyone buying a 'K' series part is also going to be buying a discrete video card.

Intel's IGP performance has improved by about 500% since the days of GMA 4500. Is that not enough of an improvement for you? By comparison, Llano is only about 300% faster than the HD 4200 IGP. What's more, Haswell is set to go from 16 EUs in IVB GT2 to 40 EUs in GT3. Along with other architectural tweaks, I expect Haswell's GT3 IGP to be about three times as fast as Ivy Bridge. You'll notice that in the gaming tests, 3X HD 4000 is going to put low-end discrete GPUs in a tough situation.

Yes, but the majority of users will not have an HD3000/4000 since they will have an OEM-built computer. Conversely, gamers will more than likely have an HD3000/4000 included with the 'K' series. BUT, these same gamers will more than likely also have a discrete video card and never use the HD3000/4000.

Again, if I was a game developer why would I put resources into optimizing for an igp that gamers aren't going to use?

I give props to Intel for the huge jump in improvement in the 'K' series IGP - it went from really crappy to just sort of crappy. If Intel would stop doing the stupid IGP segmentation and include the HD3000/HD4000 in ALL of their *Bridge CPUs, then game developers might see there is a big market there to optimize for. Until Intel stops shooting themselves in the marketing foot, game developers won't pay any attention to their IGP. But based on IB, it looks like Haswell will probably do the same brain-damaged thing and include the "good" graphics in CPUs that less than 10% of the people buy, and less than 10% of that 10% don't use a discrete graphics card.

Oh, and your 500%/300% comparison is pretty crappy, since the HD 4200 was way faster than the GMA 4500 to begin with, so in absolute terms the 4200->Llano jump was bigger than 4500->3000. I.e., the 4500 starts out at 2; a 500% improvement would put it at 10, for an absolute improvement of 8. The 4200 starts out at 6; a 300% improvement would put it at 18, for an absolute improvement of 12. So AMD is still pulling away from Intel on the IGP front. And AMD doesn't play the IGP segmentation game, so their whole market has a pretty good IGP.
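The distinction that comment is drawing, in numbers: a larger percentage gain from a smaller base can still be a smaller absolute gain. The starting scores of 2 and 6 are the comment's own illustrative units (not benchmark data), and "N% improvement" is read the way the comment reads it, as N/100 times the base:

```python
# Illustrative units from the comment above, not real benchmark scores.
intel_before, intel_after = 2, 2 * 5   # GMA 4500 -> "500%" (5x)
amd_before, amd_after = 6, 6 * 3       # HD 4200 -> Llano, "300%" (3x)

print(f"Intel: +{intel_after - intel_before} absolute "
      f"({intel_after / intel_before:.0f}x relative)")
print(f"AMD:   +{amd_after - amd_before} absolute "
      f"({amd_after / amd_before:.0f}x relative)")
# Intel's relative multiplier is larger (5x vs 3x), yet AMD's absolute
# gain (12 vs 8) is bigger because it started from a faster IGP.
```

Which framing matters depends on the question: relative gains show who is improving faster, absolute gains show who delivers more playable performance today. The reply that follows disputes the input multipliers themselves.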

It's an estimate, and it's pretty clear that AMD did not make the bigger jump. They were much faster than GMA 4500, but not the 3x improvement you suggest. In fact, I tested this several years back: http://www.anandtech.com/show/2818/8

Even if we count the "failed to run" games as a 0 on Intel, AMD's HD 4200 was only 2.4x faster, and if we only look at games where the drivers didn't fail to work, they were more like 2X faster. So here's the detailed history that you're forgetting:

So by those figures, what we've actually seen is that since GMA 4500MHD and HD 4200, Intel has improved their integrated graphics performance 280% and AMD has improved their performance by around 90%. So my initial estimates were off (badly, apparently). If we bring Trinity into the equation and it gets 50% more performance, then yes AMD is still ahead: Intel 3.8, AMD 5.7. That will give Intel a 280% improvement over three years and AMD a similar 280% improvement.

Of course, if we look at the CPU side, Intel CPU multithreaded performance (just looking at Cinebench 10 SMP score) has gone up 340% from the Core 2 P8600 to the i7-3720QM. AMD's performance in the same test has gone up 80%. For single-threaded performance, Intel has gone up 115% and AMD has improved about 5-10%. So for all the talk of Intel IGP being bad, at least in terms of relative performance Intel has kept pace or even surpassed AMD. For CPU performance on the other hand, AMD has only improved marginally since the days of Athlon X2.

Your discussion of the Intel's market segmentation is apparently missing the whole point of running a business. You do it to make a profit. Core i3 exists because not everyone is willing to pay Core i5 prices, and Core i5 exists because even fewer people are willing to pay Core i7 prices. The people that buy Core i3 and are willing to compromise on performance are happy, the people that buy i5 are happy, and the people that buy i7 are happy...and they all give money to Intel.

If you look at the mobile side of the equation, your arguments become even less meaningful. Intel put HD 3000 into all of the Core i3/i5/i7 mobile parts because that's where IGP performance is the most important. They're doing the exact same thing on the mobile side. People who care about graphics performance on desktops are already going to buy a dGPU, but you can't just add a dGPU to a notebook if you want more performance.

And finally, "AMD doesn't play IGP segmentation" is just completely false. Take off your blinders. A8 APUs have 400 cores clocked at 444MHz. A6 APUs have 320 cores clocked at 400MHz, and A4 APUs have 240 cores clocked at 444MHz. AMD is every bit as bad as Intel when it comes to market segmentation by IGP performance!

I guess you are correct about AMD - I haven't really paid much attention to them since, as you said, they can't keep up on the cpu side.

But, TH lists the 6410 (A4 igp) as being 3 levels above the HD3000 in their Graphics Hierarchy Chart. They also have the HD2000 2 levels below the HD3000. So, Intel's mainstream igp is 5 levels below AMDs lowest igp.

That is why game developers treat Intel's igp as a lower class citizen.

The quote that I was addressing (as stated in my first post) is: "One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a lower class citizen."

The article acts like it is a total mystery why game developers don't give the Intel IGP any respect. As I have repeatedly said in my comments: until Intel starts putting the HD3000/HD4000 into their mainstream parts and not just the 'K' series, game developers know that the Intel IGP is a lower class citizen. And yes, I know that you can get a xxx5 variant w/HD3000 if you look around enough, but I doubt any OEM is using them, and they didn't appear until 6+ months after the launch. It is just easier to slap a 5-6 year old discrete video card into a computer. Game developers can't target the HD3000/HD4000 since those are the minority of SB/IB chips. They would have to target the HD2000/HD2500. Since they don't, the conclusion is that it isn't worth putting the resources into such a low-end graphics solution.

I don't think it's a mystery. It's straight fact: "One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a lower class citizen."

It IS a problem, and it's one INTEL has to deal with. They need more advocates with game developers, they need to make better drivers, and they need to make faster hardware. We know exactly why this has happened: Intel IGP failed to run for so long that a lot of developers gave up and just blacklisted Intel. Now, Intel is actually capable of running most games, and so long as they aren't explicitly blacklisted things should be okay.

In truth, the only title I can think of from recent history where Intel could theoretically work but was blacklisted by the game developer is Fallout 3. Even today, if you want to run FO3 on Intel IGP (HD 2000/3000/4000), you need to download a hacked DLL that will identify your Intel GPU as an NVIDIA GT 9800 or something.

And really, there's no need to blacklist by game developers, because you can't predict the future. FO3 is the perfect example: it runs okay on HD 3000 and plenty fast on HD 4000, but the shortsighted developers locked out Intel for all time. It's better to pop up a warning like some games do: "Warning: we don't recognize your driver and the game may not run properly." Blacklisting is almost more of a political statement IMO.Reply

First, a diversion: "I was able to transcode a complete 130 minute 1080p video to an iPad friendly format..." Just kill me. Somebody please. Why do consumers put up with this crap? Even my ancient Galaxy S has better media playback support.

It's the same story with my HP TouchPad: MP4 container or GTFO. Who can stand to re-encode their media libraries or has the patience to deal with DLNA slingers when the hardware is perfectly capable of curb-stomping any container / codec you could even conceive? Just get an Android tablet if this is the crap they force on you. Or, in the TouchPad case, wipe it and install ICS.

As for the article... did I totally misunderstand the page about power consumption? I got the impression that idle power is relatively unchanged. I must be misreading that. Or maybe the lower-end chips will show a stark improvement. Otherwise I totally miss the point of IVB.

I'm beginning to lose confidence in Intel, at least in terms of innovation. These tick-tock improvements are basically minor pushes in the same boring direction. From an enthusiasts' perspective, the stuff going into ARM SoCs is so much more interesting. Intel makes great high-end CPUs but it seems that these are becoming less important when looking at the consumer market as a whole.Reply

I disagree with the testing methodology for the World of Warcraft test. Firstly, no gamer of any game buys hardware so they can go to the most isolated areas in a game. Also, the percentage of people who can pay for one of these CPUs who would be playing at 1680x1050 would be pretty small.

I've been playing WoW for a number of years and I don't care about 60fps+ because my monitor won't display it anyway. I care about minimum fps and average fps. nVidia's new adaptive vsync is a great innovation, but I am sure there are other tests that, while not as controlled and repeatable, are much more indicative of real world performance (the actual reason behind purchasing decisions).

One possible testing methodology you could look into is to take a character into one of the top-end 25-man raids. There are 10 classes in WoW, and my experience is that a 25-man raid will show off every single spell/ability and effect that the game has to offer in fairly repeatable patterns.

I agree that it is not the most scientific approach, but I put more stock in a friend saying "go buy this cpu/gpu, you can do all the raids and video capture and you get no lag" than in you telling me that this cpu will give me 100+ fps in the middle of nowhere. There is a fine line between efficient and effective. I am just hoping that you can dial down the efficiency and come up with a testing methodology that actually produces a metric I can use in my purchasing decisions. After all, that is one of the core reasons most people read reviews at all.Reply

Oh boy. Another delusional red label fangirl. Maybe when AMD gets their s**t together Anandtech will have something positive to review in comparison to the Intel offerings of the moment. Bulldozer bulldozed right off a cliff. And don't get me wrong: I WANT AMD to whip out some butt-kicking CPUs to keep the competition strong. But right now, Intel is not getting complacent and keeps stepping up its game when the competition isn't even on the same playing court. That's just for now, though. If AMD continues to falter, Intel may not be as motivated to stay ahead and spend so much on R&D in the future. After all, why put the latest F1 car on the track when the competition can only bring a NASCAR car to every race?Reply

An upgrade makes sense for me, as my current CPU is an Intel Core 2 Quad and the new i7-3770K will be a pretty significant upgrade... 2.34GHz to 3.5GHz, plus heaps of additional tech to go with it. I could see a fair number of Sandy Bridge owners holding off for Haswell, but for me this jump is pretty big, and I'm looking forward to seeing what the i7-3770K can do with the Z77 motherboards and a shiny new PCIe 3.0 GPU. Reply

I feel so ambivalent about Intel's current strategy. I admit I really don't understand why laptops are moving ahead so significantly. Why on earth would someone want to buy a laptop to replace a home desktop system? And if they aren't doing that, then how can Intel justify a move away from the high performance end of the spectrum? If you really need to do business work away from home AND away from work, then I can see the need for the mobility that a laptop provides, but this focus on a gaming-centric mobile platform, so that people can one day soon play Crysis on their laptop at traffic stops on the way to work, is utterly baffling to me. On the other hand, the process technology and architectural improvements we are seeing are amazing. It just burns me that performance for what I use a desktop for is essentially at a standstill now so that Intel can save this unfathomable laptop crowd a few watts during the occasional vacation. As it stands, when I see a review like this, at first I'm like "Oh wow!" and then I feel cheated for having waited for Intel's latest and greatest. Ivy Bridge-E and Haswell-E had better be mind-blowing or I'm going to be completely crestfallen at the direction Intel is taking.Reply

Intel is following a strategy that will in the end provide compute power for small and powerful devices like cell phones, laptops, tablets, RFIDs, food and apparel tags, etc. They are looking at the big picture, and the money is in the small designs. Desktops are a means to an end. Get used to the push toward small and simple.Reply

Thanks for being an AMD apologist and reading with blinders. Trinity will probably maintain the GPU performance gap. Until Trinity arrives, however, IVB HD 4000 on laptops on average across 15 games is equal to Llano A8 on laptops.

And what other things are people doing with graphics besides games? Video transcoding? Okay, there's one task. Name another that computer users do on a regular basis. [Crickets...]

So why is Intel putting so much effort into graphics? Because their business is to sell you a new microprocessor every 2-3 years if they can. With the ability to put 1.4 billion transistors into a 130mm^2 die, what else are they supposed to do? More cache? More CPU cores? You try to make the worst aspect of your product better, and then you market it to the masses. That's all that's really happening here.Reply

Most Llano laptops I have seen are not even the A8 model, but mostly the A6, with somewhat lower graphics capabilities. So I think you are being very fair to AMD in comparing their top of the line graphics to HD4000, instead of the lower performance A6 graphics. Reply

"Unfortunately at the time only Apple was interested in a hypothetical Ivy Bridge GT3 and rumor has it that Otellini wasn't willing to make a part that only one OEM would buy in large quantities. We will eventually get the GPU that Apple wanted, but it'll be next year, with Haswell GT3. And the GPU that Apple really really wanted? That'll be GT4, with Broadwell in 2014."

"How many enthusiasts who overclock their computers are going to be running them *at full load* 24/7? None. "

I do. The 30W difference at stock would be ~$40/year directly (13c/kWh), and probably twice that indirectly, since my landlord pays for heat during the winter but the AC bill is all mine. Versus my current X58 boxes, probably another factor of 1.5-2x; although unless OC levels creep up over the next few months, I'm tempted to just wait and hope Haswell doesn't burn all its improvements on the IGP and a lower TDP for mobile.Reply

CPU performance has somewhat exceeded most people's daily needs (and mine too), and mobile CPUs are more exciting today, especially if they can help the system consume less battery and deliver better graphics. I think it's now safe enough for a manufacturer like Apple to go with Trinity instead of Ivy Bridge for the MacBook Air, provided Trinity can be made to consume minimal power.Reply

According to the Asus review just out by Anand, the Intel HD4000 and AMD HD6620 are essentially even in the mobile space, where it really matters. I don't know where you are getting the "soundly trounces" description, unless you are talking about the desktop. I don't really care about integrated graphics on the desktop; it is just too easy to add a discrete card that soundly trounces either Intel or AMD integrated. I have no doubt that AMD will regain the lead in the mobile space when Trinity comes out. I just question whether they will make the kind of improvements that are being speculated about.

I also find it ironic that so many people are criticizing IVB for lack of cpu improvement while in the same breath saying bulldozer is OK because it is "good enough" already.Reply

Me: "As I note in the mobile IVB article, mobile Llano GPU performance isn't nearly as impressive relative to IVB as on the desktop."

You: "The mobile variant of the part that launched last year isn't as dominant over the part that just launched today as the desktop variant is?"

In other words, you want us to compare to a product that's not out because the current product doesn't look good. I mention Trinity already, but you act as though I miss it. Then you throw out stuff like, "Thanks for resorting to namecalling" when you've already been insulting with your comments since the get go. "Sad to see this kind of crap coming from Anandtech." "I guess Anandtech's standards have drastically lowered." Put another way, you're already calling me an idiot but doing it indirectly. But let's continue....

How much faster can you do Flash video when it's already accelerated and working properly in Sandy Bridge? Web browsers are basically in the same boat, unless you can name major web sites that a lot of users visit where HD 3000/4000 is significantly worse than the competition.

Does Photoshop benefit from GPUs? Sure, and lots of people use that, including me, but the same people who use Photoshop are also the people who need more than Llano CPU performance, and more than HD 4000 or Llano or Trinity GPU performance. I'm running Bloomfield with a GTX 580, which is more than what 95% of users out there have. Most serious Photoshop users that I know use quad-core Intel with some form of NVIDIA graphics for a reason. But even running on straight Sandy Bridge with HD 3000, Photoshop runs faster than on Llano with HD 6620G.

Vegas, naturally, is in the same category as video transcoding. I suppose I could have said "video editing/transcoding" just to be broader. There are tons of people that don't do video editing/transcoding. Even for those that do, NVIDIA GPUs are doing far better than AMD GPUs, and NVIDIA + Intel CPU is still the platform to beat. If you want quality, though, encoding is still done in software running on the CPU; Premiere for instance really just leverages the GPU to help with the "quick preview" videos, not for final rendering (unless something has changed since the last time I played with it).

So let's try again: what exactly are the areas where Intel's Ivy Bridge and HD 4000 fall short, where AMD's Llano (or the upcoming Trinity) are going to be substantially better? All without adding a discrete GPU. Llano is equal to HD 4000 for gaming, and seriously behind in the CPU department. There are still areas where AMD's drivers are much better than Intel's, and there are certain tasks (shader and geometry) where AMD is better. Really, though, the only area where Intel doesn't compete is in strictly budget laptops.Reply

Yes I have heard of a "tick", and IVB has manifested itself as a tick+ as indicated in the article which means we are basically on the 3rd generation of the same architecture introduced with Nehalem in late 2008 with some minor bumps in clockspeed/Turbo modes and overclocking headroom.

Both Conroe and Nehalem were pretty huge jumps in performance only 2.5 years apart on one of Intel's Tick Tock cadence cycles and since then, nothing remotely as interesting.

Maybe you should be asking yourself why you aren't expecting bigger performance gains? Or maybe you're still reveling and ogling over Tahiti's terrible price:performance gains in the GPU space? :DReply

There is one laptop for sale at Newegg with an A8 APU faster than the A8-3520M, and it has an A8-3510MX. AMD's own list isn't much better (http://shop.amd.com/us/All/Search?NamedQuery=visio... there's one more notebook there with an A8-3530MX. So that's why we looked at the A8-3520M, but if I had an MX chip I would certainly run the same tests -- no one has been willing to send us such a laptop, unfortunately.

But even if we got an MX chip, their GPUs are still clocked the same as the A8-3500M/A8-3520M. We might be CPU limited in a couple games, but while there are Llano parts with 20% higher CPU clocks, that just means Intel is "only" ahead by 60-70% instead of 100% faster on CPU performance.Reply

You people over-analyzed my comment. All I wanted to say is that they are bragging about HD 4000 when it doesn't come close to the current competition. A couple of years down the road, people won't want dedicated graphics cards in their laptops anymore... they're too bulky and consume too much power. We will all have integrated GPUs; the AMD APU is the way to go. To be honest, CPU power is already way more than enough for most of the things people use their laptops for (browsing the web, writing documents, playing web-based games, a.k.a. Angry Birds on Chrome). The extra GPU is for people who either want to do some graphics processing or play more graphics-intensive games. So yes, it is important for the future to have a good, strong integrated GPU and a good CPU. Therefore, I think AMD will win this round. I hope they continue to compete at each other's throats so we see better and cheaper products from both sides. So as I understand it right now: go for AMD if you want a better GPU, go for Intel if the CPU is more important to you. Trinity might narrow the CPU gap, however, and greatly increase the GPU one. Only time will tell. Reply

"I don't understand this. We're talking about power consumption, not TDP. Heat-wise, Ivy Bridge is hotter, so if you're paying for the AC, it should be a negative impact."

Power consumption is TDP. 100W of power is 100 joules/second of heat to be dissipated; it doesn't matter if the heat's coming off a large warm die or a small hot one. 100W is 100W.

My current i7-9xx boxes are 130W chips, so just looking at TDP that's somewhere between 60 and 90W less power at stock (~50W just from the CPU TDP; for the higher number, the chipset is a theoretical 18W more, probably a lot less in practice, and then whatever cut of IVB's TDP goes to the GPU). Probably a wider gap when OCed, but I don't have any stock vs. OC power numbers to look at. With AC costs added, the savings would probably be between $100 and $200/year per box.

Up front costs would be ~$400-550 for CPU + mobo pairs depending on how high up the feature chain I went; probably fairly high for my main box and more bang for the buck on the 2nd.

Looking at successful auctions on eBay, it looks like I could get ~$250 for my existing CPU/mobo pairs, less eBay's fee. The very rough guess is a 2-year-ish payback time, which is somewhat better than I thought (closer to 3 years).
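The payback arithmetic above can be sketched in a few lines. This is only a sanity check: the 13c/kWh rate and the 60-90W delta come from the posts, while the 75W midpoint, the 1.75x AC multiplier, and the $475/$250 midpoints are assumptions of mine.

```python
# Rough payback sketch for swapping an i7-9xx box for Ivy Bridge.
# Assumed inputs (not the poster's exact figures): 75 W saved (midpoint
# of 60-90 W), boxes at full load 24/7, AC adds ~75% on top, upfront
# cost is the $400-550 midpoint minus ~$250 resale.

def annual_kwh(watts: float) -> float:
    """Energy used per year at a constant draw, in kWh."""
    return watts * 24 * 365 / 1000

def annual_cost(watts: float, cents_per_kwh: float = 13) -> float:
    """Yearly electricity cost in dollars for a constant draw."""
    return annual_kwh(watts) * cents_per_kwh / 100

savings_direct = annual_cost(75)       # ~$85/year from the wall
savings_total = savings_direct * 1.75  # with the hypothetical AC multiplier
upfront = 475 - 250                    # CPU+mobo midpoint minus resale
payback_years = upfront / savings_total
print(round(savings_direct), round(payback_years, 1))
```

With these midpoints the payback lands around a year and a half, in the same ballpark as the "2 year-ish" guess in the post; the spread of the input ranges easily covers both.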

Not sure I'll do it, since I have a few other PC-related purchases on the wishlist too: replacing my creaky Core Duo laptop with a light/medium gaming model, or swapping out my netbook for a new ultraportable after Win8 launches, might give better returns for my dollar. The latter's battery isn't really lasting as long as I'd like anymore. Also, my WHSv1 box is scheduled for retirement this winter.

I am going to have to give it some serious thought, though. Part of me still wants to wait for Haswell, even though preliminary indications are that it won't be a huge step up; the much bigger GPU and staying at dual-channel memory make a mainstream hex-core part unlikely.Reply

On the desktop, you are correct, especially if one overclocks. On the mobile front, IVB is a definite step up on the graphics front. My main reason for the responses to this thread was that it seemed premature for the original poster to imply that this site is being unfair to AMD/Trinity before we even know how much the improvement will be or read a review. Reply

I read other press describing the 22nm 3D transistor as 11 years in the making. 11 years! Does anyone remember an article Anandtech posted a long time ago? It was about 3D transistors and die stacking. I did a Google and site search but could not find it. I can't recall when the article was written, but it was a long time ago. We have been waiting forever on this tech. We thought we wouldn't see it for another 5 years... and this is 11 years since then!

A bit about Haswell's monster graphics: Charlie also pointed towards Crystalwell, a piece of L4 SRAM cache built for graphics. Could die stacking be it, a piece of SRAM cache on top or underneath? I hope we do get more than a 300% increase in performance. That way Ultrabooks could really get away without discrete graphics.

Well, Ivy Bridge QuickSync wasn't as fast as we first thought. 7 minutes to transcode for the iPad is fast, but what we want is sub-3 minutes, i.e. the time to transcode 1080p to a portable format should be the same as the time to transfer a 2.5GB file over USB 2.0 to the iPad. Both processes should happen at the same time, so when you "transfer" you are literally transcoding on the fly.Reply
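For scale, here is the real-time-multiplier arithmetic behind the "sub 3 min" wish, a quick sketch using the 130-minute source file from the review:

```python
def realtime_multiplier(source_minutes: float, transcode_minutes: float) -> float:
    """How many minutes of video get transcoded per minute of wall time."""
    return source_minutes / transcode_minutes

today = realtime_multiplier(130, 7)   # the QuickSync result from the review
target = realtime_multiplier(130, 3)  # the "sub 3 min" wish above
print(round(today, 1), round(target, 1))
```

The review's "over 15x real time" is consistent with the ~18.6x this gives for 7 minutes; the sub-3-minute target would require roughly 43x real time, i.e. more than doubling QuickSync's throughput.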

I'd say most of the same things to you. If you think the 15% clock speed increase of the CPU in Llano MX chips will somehow magically translate into significantly faster GPU performance, you're dreaming. Best-case it would improve some titles 15%, but of the 15 games I tested I can already tell you that CPU speed won't matter in over half of them--the HD 6620G isn't fast enough to use a more powerful CPU. The 10W TDP difference only matters for CPU performance, not the GPU performance, as the CPU clocks change but the GPU clocks don't.Reply

No, I think they're equal because these are the parts that are being sold, and they perform roughly the same. Actually, I think that the laptops most people buy with Llano are actually WORSE than Ivy Bridge's HD 4000, because what most people are buying with Llano is the cheap A6 chips, but that's not what we compared.

But let's just say that we add DDR3-1600 memory to Llano, and we test with 8GB RAM. (Again, if you think 8GB actually helps gaming performance, you don't understand the technology.) Let's also say, for kicks, that every single game is CPU limited on Llano. With an MX chip in our hypothetical laptop, the best Llano could do would be to average 15% faster than HD 4000.

That's meaningless. It's the difference between 35FPS and 40FPS in a game, or 26FPS and 30FPS. Congratulations: your GPU might be 15% faster on average, but your CPU is half the speed. That's not a "win" for AMD.

Here are the facts: what was a gap of 50% with mobile Sandy Bridge vs. mobile Llano is now less than 5% on average. AMD has better drivers, but Intel is closing the gap. Trinity will improve GPU performance, and likely do very little for CPU performance. The end.Reply

Currently there are pages saying "the A8 is xx% faster than IVB" and there are pages saying "IVB trails A8 performance by ..." or something similar. My assumption is (since English is not my native language): trailing by 55% means the A8 is 122% faster, or vice versa (i.e. IVB is 55% slower than the A8). Achieving 55% of the A8 means the A8 is 81% faster (i.e. IVB has 55% of the A8's score: if the A8 scores 100, IVB scores 55).

It would be great if the reader knew which convention you use and you stuck with it, instead of having to recalculate after reading every sentence twice (and hope the understanding of the sentence is correct). I believe the general usage would be "part A is x% faster than part B", or to use the 2600K as a baseline and report all others as faster or slower compared to it.Reply
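The two conventions the commenter describes work out like this; a minimal sketch, with the A8 score of 100 being purely illustrative:

```python
def pct_faster(a: float, b: float) -> float:
    """How much faster part a is than part b, in percent."""
    return (a / b - 1) * 100

a8 = 100.0                    # illustrative baseline score
ivb_trails_55 = a8 * 0.45     # "IVB trails the A8 by 55%" -> scores 45
ivb_achieves_55 = a8 * 0.55   # "IVB achieves 55% of the A8" -> scores 55

print(round(pct_faster(a8, ivb_trails_55)))    # A8 is ~122% faster
print(round(pct_faster(a8, ivb_achieves_55)))  # A8 is ~82% faster
```

So the two phrasings differ by a factor that grows with the gap, which is exactly why the commenter asks for one convention to be used consistently.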

I'll bet you $100 I can put 8GB RAM in the Llano laptop and it won't change any of the benchmark results by more than 2%. If I swap out the RAM for DDR3-1600, it will potentially increase gaming performance in a few titles by 5-10%, but that's about it.

Anand's testing on the desktop showed that DDR3-1600 improved performance on the A8-3850 by around 12-14%, but the A8-3850 also has the 400 cores clocked 35% higher and can thus make better use of additional memory bandwidth. It's similar to DDR3-1866 vs. DDR3-1600 on desktop; the 17% increase in RAM speed only delivers an additional 6%, because the 600MHz HD 6550D cores are the bottleneck at that point. For laptops, the bottleneck is the cores a lot earlier; why do you think so many Llano laptops ship with DDR3-1333 still?

If you'd like to see someone's extensive testing (with a faster A8-3510MX chip even), here's a post that basically confirms everything I've said:

If I have Ivy Bridge on the desktop and have my monitor plugged into a dedicated GPU, can I still use Quick Sync? Or do I still have to plug the monitor into the motherboard and use integrated graphics?

Frankly quick sync is useless on the desktop if it doesn't work with a GTX560.Reply

Why the hell can't you see that comparing the i7-3770K at 3.5GHz to the i7-2600K, which runs at 3.4GHz, is POINTLESS?! Pretty much every other site got that point and used the 2700K. Sure, the 3770K will be faster than the 2600K, duh...Reply

32nm -> 22nm: transistor dimension reduced by 31%, 75% of the die size, 20% increase in transistor count. This means for the same die size there would be an increase in transistor count of 26%.

Projection: 22nm -> 14nm, transistor dimension reduced by 36%. Applying a similar pattern we may get roughly a 30% gain in transistor count. However, the gain may be less, since part of the gain in IVB could have been due to the 3D transistor tech. So at best 30%, and at worst around 24%, just from the decrease in transistor dimension. This is by no means a precise calculation taking all factors into consideration.

A staggering 18 million working dies per month with 1.8B transistors at 160 mm2, given a plant capacity of 50,000 wafers/month, plant efficiency of 75% and yield of 50%, from a single plant.

Let's not forget that the partly defective dies will be fused off to become some low-end part, which means the effective yield could touch 60%, taking the working dies to 22 million per month!! This means Intel is going to make really cheap processors. 450mm wafers + 14nm = game changer. Of course the fab is super expensive, but from what came out of Intel, those first few batches of chips are paying for the ramp to 22nm. For an ultra-mobile processor like Atom, in 2014, even a massive redesign of the chip would still keep it well under 100 mm2. At 100 mm2, an Atom in 2014 will have ~1B transistors!!! Take that, ARM. My faith in Intel is rekindled. :-) AMD needs to be around to shove Intel whenever it gets too lazy. ARM is now helping AMD too in shoving Intel. Reply
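The 18-million figure is easy to sanity-check with a gross dies-per-wafer estimate. This is a sketch: the simple area division below ignores edge loss and scribe lines, so it is an optimistic upper bound, and the 450mm wafer, 160 mm2 die, capacity, efficiency and yield numbers are taken straight from the comment.

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    # Simple area division: wafer area / die area. Ignores edge loss
    # and scribe lines, so treat the result as an upper bound.
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

dies = gross_dies_per_wafer(450, 160)   # ~994 candidate dies per 450mm wafer
good = dies * 50_000 * 0.75 * 0.50      # capacity * plant efficiency * yield
print(dies, round(good / 1e6, 1))       # ~18.6M working dies per month
```

Bumping the yield term to 0.60, as the comment suggests for salvaged dies, pushes the same calculation to roughly 22 million per month.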

You have a unified shader GPU with similar performance to the X1900 series but more flexible, and something like a GeForce 7800 with some parts cut down like the 7600 (lower ROPs and memory bandwidth). 7 years later, if even an IGP didn't beat those, it would be pretty sad. Those were ~200 GFLOPS cards; today's top end is over 3000. A lower-mid-range chip like this I would expect to be in the upper hundreds. Reply

Better yet, why not just put the graphics on its own chip, like the CPU? That way the "upgrade path" is a lot clearer, not to mention "possible". It would also offer the possibility of having those chips in different flavors, for example a good video transcoder or a good gamer chip. There is room for that on the motherboard now that the north bridge is gone. Or they could review the IBM boards from the 286 days and learn from their clean, very efficient design.

Unfortunately, the financial model of the IT industry, viewed collectively, entails throwing a lot of good hardware away for a small advantage. Just the way many will throw away their HD3000 IGPs without ever having used them. The comparison is cruel, but that should not be what distinguishes them from the toilet paper industry.

As of late, Anand has taken to reminding us that technology has taken leaps ahead of our wishes and that we need time to absorb it. That is not the case. No wish is ever materialized. We only have to take whatever is offered and marvel at the only parameter that can be measured: speed. Less energy consumption is fine, but I suppose that comes with the territory (i.e. can Intel or AMD produce 22 nm chips that consume the same watts as 65 nm chips with the same number of transistors?).Reply

Cost, size, power draw. All reduced by putting everything on one chip. I'm not sure AMD wants to get rid of discrete graphics cards, considering that's their one profitable division, but Intel sure does :)Reply

I've tried every setup possible and have never got quicksync to work at all. It won't even work with discrete graphics enabled and my monitor hooked up to the intel chip on my Z68 board.

I have tried MediaCoder (error 14), Media Converter 7, MediaEspresso. I have downloaded the Media SDK; I have tried the new FFm-thingy from the Intel engineer. Nothing, nada. It has never, ever worked. AMD Media Converter will convert a few limited formats that went out of fashion 5 years ago (of all my 1.5TB of video, the only thing it would touch was old episodes of Becker).

All in all I have got nothing whatsoever from any video accelerated encoding and I have always had to go back to my tried and trusted handbrake.

I don't think it actually works - I've never heard of anyone getting it working and the forums on mediacoder are full of people who have given up.Reply

What input format are you using? I've only tried it on laptops, and I've done MOV input from my Nikon D3100 camera with no issues whatsoever. I've also done a WMV input file (the sample Wildlife.WMV file from Windows 7) and it didn't have any trouble. If you're trying to do a larger video, that might be an issue, or it might just be a problem with the codec used on the original video.Reply

Because here is how I see it: "If you're constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact to CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format—that's over 15x real time."

The majority of transcoding is going the other way: from an iPad-friendly format to some other device-friendly format. Cell phone to PC, iPad to YouTube, etc. I like the concept of hardware transcode/encode, but these formats are, as they stand, bandwidth hungry, and this is the first mention I have noticed of dedicated hardware to facilitate H.264-type encoding. The screens on all of those high-resolution cell phones still require bandwidth to drive, and so on.

I find this review enormously confusing. You mix IB and SB parts/SKUs throughout without clearly highlighting which is which. On page 2, at first I thought the chart was of IB CPUs, but then realized it was a mishmash of both. Likewise on all the benchmark charts; I couldn't even tell the various SB/IB/core-count configurations apart.

Was this article written solely for people who have a PhD in Intel SKUs???Reply

According to the article linked below from today, it seems that Intel may be using TIM paste instead of fluxless solder in the IB chips, and if so, does this seem like a reasonable explanation for the high temperatures that people are seeing?

I'm interested in your thoughts on this... and if it's true, and if the article is correct, then Intel deliberately handicapping their chips (to possibly preserve SB-E sales?) is quite troubling to me.

Hey guys! Good review overall, thanks for that! But the QuickSync section was kinda strange. There's no way an x86 software solution generates worse IQ than any accelerated solution, unless you were comparing IQ with settings that make the x86 recode the file in the same time it takes the other solutions. Anyway, I don't understand what you did in that section. :-)Reply

I would like to point out that they do not seem to say what GPU they are running. Also, I own the FX chip in this review, and with the Asus DirectCU II 7970 video card I get way over those FPS; in Dawn of War with maxed-out graphics I am getting 134 average FPS.

I also own Dragon Age, and when I am running Fraps at max resolution with max AA, I am getting 139 average FPS.

In Crysis I get 139 FPS, and in Civ 5 I get 225 FPS.

To be honest, the CPU has little to do with FPS; it's mostly the video card anyway. I am running an Eyefinity setup on top of this and getting these FPS in game according to Fraps.Reply

I have the AMD FX-8150 and I also own Crysis: Warhead, Civilization V, Dawn of War II, and Dragon Age: Origins, and I get way better FPS than they claimed to have gotten. I mean, my FPS are almost double theirs, sometimes triple, and I am using Fraps.

For example, in Civ 5 I get an average of 190 FPS in game, at maxed-out resolution with Eyefinity, using Fraps.

Double post; I almost did the same thing, lol, because I noticed my post was taken down or did not show up right away.

I have to agree with you; I own your CPU also. For one, they don't claim to be using the on-board GPU, meaning one built into the CPU, and as far as I know the AMD FX doesn't have a built-in GPU anyway, so that would suck; the chips that do are actually called APUs, I believe?

Anyway, I was also wondering what GPU they were using for the test, because when I read this page it doesn't say anywhere, and on the page before, the two buttons just say review back and review http://www.anandtech.com/show/5771/the-intel-ivy-b... It doesn't say anything about what GPU they're using. But I do own an AMD FX and a 7970, and I will say that an AMD FX combined with a 7970 gets amazing FPS in game; it blows the FPS they claim out of the water. I used to get 45 to 50 FPS with my 5k-series AMD card and with my 6k-series Eyefinity setup; this 7970 gets, like the user above says, double or triple the performance of what's claimed in this review, per Fraps.Reply