209 Comments

Disappointing, to say the least. He's even comparing it to chips two or three generations older, just to be able to write some non-embarrassing numbers in the review, considering Haswell is only about 5-10% faster than IVB.

The only big improvement seems to be in idle power consumption, of about 30%, which seems to "impress" Anand, but it just means that if your laptop had a 12h idle time, now it gets 16h.

It won't do much for ACTIVE power, which is what really matters. So much for all the "Haswell will totally dominate tablets in the near future" hype from Anand. Yes, this is not the mobile version, but if Haswell really were an impressive design for power consumption, you'd see it here too. It actually consumes 5-10% more than a same-clocked IVB chip, which means its extra performance is almost completely negated.

This means that my prediction that Intel will try to "trick" us into thinking Haswell is ready for tablets will soon come true. Because if Haswell is not efficient enough to warrant being used in "normal" tablets, then they'll have to dramatically lower clock speed and performance to even achieve a 10W TDP (still too high for a tablet).

I'll still upgrade from my i5 750, and we won't get anything interesting on the desktop until Skylake (apparently, it won't be BGA), as I'm expecting motherboard OEMs to force us into buying their high-end motherboards with any of the high-end Broadwell i7s.

I'm using an i7-950 (over 4 years old) and it's funny seeing how competitive it still is with Intel's newest chips. It seems Sandy Bridge brought the bang and it's just been trickle-down performance since...

Ahh, I had one of those until this time last year. The E8400 found a happy home with a friend and I went to Ivy Bridge. Even small improvements like Sandy -> Ivy -> Haswell are useful, so don't feel too bad for having waited so long.

haha. I'm still running an OC'd E6600 (amongst others) for my desktop, which I hardly touch anymore, and am unsure whether to upgrade. I'm curious to see how Haswell's performance vs. power consumption vs. price works out for NAS systems.

Aside from natural degradation in one of the components (CPU, memory, PSU, or GPU), which brings on an error every once in a while, the E6600 still does 98% of what I want it to and 100% of what I need it to do. So I suppose my need is to get my electric bill down.

I need to read more, because I was hoping this chip would bring me to buy a Surface Pro.

You will get benefits upgrading from the E6600, no doubt about it. For you it's worth it: you're going to get a lot lower power consumption and a cooler-running, quieter chip, with a significant performance increase, especially in multithreaded applications. But for those thinking of upgrading from Sandy or Ivy Bridge to Haswell, it's worthless.

I mean, upgrading from the 2500K to the new 4770K is useless. At best you are going to see a 40% performance improvement, which translates into 5 seconds faster decoding or 5 seconds faster unzipping, but at worst you are going to get a 2% performance increase, which amounts to milliseconds of faster decoding and the like.

You are not going to get lower power consumption, and it seems you may even get worse power consumption under load. The new chips don't overclock as well either, so it's a waste of money to upgrade. If you have a chip that's 3-4 generations old it's worth upgrading, but otherwise your money is better spent elsewhere.

These are some very good processors. What is misplaced, though, is the graphics part. It is OK for mobile processors to have such a part, but for high-energy-consuming desktop processors it is irrelevant and more of a burden. In the desktop world you expect a CPU with the highest possible processing power. It is 2013 and quad core is getting old; come on Intel, don't be shy, bring out 8- and 10-core processors, you have the room to accommodate them. If I want graphics I'll buy a monster graphics chip and won't be bothered by "optimal" power consumption. It is sad to see the energy efficiency and the computing efficiency improving only to accommodate a miserable graphics card. If you want us to buy graphics together with the CPU (an APU), make a chip with a TDP of 300-400 W, because this is a mainstream energy budget for the desktop. If that is impractical (which is probably the case), build your own discrete graphics. Don't vandalize your CPUs this way. If you want the mobile market, go for it, just don't contaminate your desktop products.

Same with me! I'm currently running an E8400 @ 4.5GHz since 2008, and now seems a good time to upgrade. I hope the Haswell chip will overclock just as well. From what I've read, that means I may have to exchange the thermal paste under the IHS for some Coollaboratory stuff... Not decided on i5 or i7 yet, though...

I built a new-from-the-case-up Z77 rig with a 3570K running 24/7 @ 4.4GHz last fall. I seriously considered waiting for Haswell to hit the market before upgrading my machine (circa 2007). I'm glad I didn't wait any longer, since I have been running a damned fast machine for 8 months now. I think I may have felt a little underwhelmed if I had waited for Haswell. Just my 2 cents' worth.

While you are right that load power consumption is higher, the system is not loaded all the time. Not for common users, anyway. In that case, idle serves as a better power-performance indicator than load consumption.

Pulling up this website and browsing, my CPU utilization is under 20%, and under 5-10% the majority of the time. And I'm 5 years back on a Core2Duo. I'd imagine that if I were browsing this page on a Haswell it'd be even lower, considering processors are much faster as well. So on the mobile side I'm guessing average battery life increased quite a bit. We'll see with the review. Using LOAD power consumption to judge mobile, though, is just plain ignorant.

This is why some people need to stick to reading reviews, and wait for full reviews to come out, rather than jumping to early conclusions and misinterpreting information.

Haswell isn't just about the CPU, it's about the entire platform of chips, even the tiny ones on the motherboard no one really cares about. And I'm not sure what you were expecting in terms of performance gains, as there isn't any competition in the desktop arena for it to be worth pushing out 20-25% gains (and certainly not on a yearly basis).

As far as active power goes, the entire point of a modern CPU architecture is to "hurry up and go to sleep" (HUGS). The faster you get to idle and the better your idle power is, the better battery life you get, because let's face it, unless you're encoding a video or playing a game, your CPU spends most of its time idling, doing nothing.
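The race-to-idle trade-off the comment describes can be sketched with some back-of-the-envelope math. All the numbers below are made up for illustration; the point is only the shape of the trade-off, where a chip that is hungrier under load can still win on total energy if it finishes sooner and idles cheaper.

```python
# Back-of-the-envelope "hurry up and go to sleep" (race-to-idle) math.
# Illustrative numbers only, not measurements of any real chip.

def task_energy(active_w, idle_w, work_units, speed, window_s):
    """Joules to finish a fixed workload inside a time window, idling the rest."""
    active_s = work_units / speed                      # seconds spent working
    return active_w * active_s + idle_w * (window_s - active_s)

# "Slow" chip: 10 W active, 2 W idle, 100 units/s -> 10 s work + 50 s idle
slow = task_energy(10, 2, 1000, 100, 60)
# "Fast" chip: 15 W active (50% hungrier!), 0.5 W idle, 200 units/s
fast = task_energy(15, 0.5, 1000, 200, 60)
```

Despite drawing 50% more power under load, the faster chip uses roughly half the total energy over the minute, because it spends 55 of the 60 seconds in its much cheaper idle state.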

You're right. If anyone is to be blamed for the slight performance increase of Haswell, it's AMD. If AMD could actually compete on performance we would be seeing bigger gains with Haswell, and maybe FINALLY more than 4 cores on mainstream Intel platforms.

And I wonder how successful AMD would have been if Intel hadn't stolen their revenue (when AMD chips were faster than Intel's, around 2006) by making under-the-table deals with OEMs to use only Intel CPUs.

I disagree somewhat, especially with the idea that idle power consumption is unimportant. Keep in mind that at least half of the time, when you are working or browsing the net, your CPU is in fact idle. So an improvement in idle power consumption is a big deal, IMO.

You've got it all wrong. Haswell won't be going up against ARM in tablets. Don't look for it in smartphones either! LOL. A Haswell i7 is serious overkill for those markets. ARM won't be getting into the serious laptop & PC market soon either --- they just can't compete well with Intel there. Intel is targeting its smaller & less power-hungry Atoms at the phone & tablet markets (surely you know this?) that are driven by low power requirements. The press just leaked that Samsung's new Galaxy Tab 3 will have dual-core Clover Trail+ Atoms.

It's not disappointing at all; it's gains, and any gains matter on an annual schedule. As long as it beats Ivy by any percentage, it's progress. You know Intel's goals are not as much IPC-related as mobile-related, so don't rant when all the facts are in front of you.

The point of Haswell is not to drastically improve performance. Haswell is designed to move x86 into the tablet and mobile market with drastically improved idle and low power performance. Skylake, in roughly 2015, will likely be the next big performance boost.

Where this will probably shine is in mixed workloads, not overclocking for gaming or production. It will be easier to put Haswell into (G)HTPC builds at mini-ITX and µATX form factors and keep noise down while still having great burst performance. The 4770K doesn't seem worth it for overclockers who already have good Sandy/Ivy chips.

I think I may upgrade my parents' living room PC to something like a mini-ITX build with an i3-42xxT, and just transfer the SSD (Force GT 120GB) and RAM (8GB 1600 SO-DIMM). It should be a substantial upgrade from the E-350 and almost fit the same power envelope for their use cases.

I'm looking forward to more info on Ivy-E. I'm happy with my 3930K with a decent OC, but if Ivy-E can improve the power/performance ratio without bringing performance down or adding heat issues, I might upgrade :)

I agree. It's annoying that Intel is designing its desktop CPUs to also compete with ARM in mobile. Why can't Intel develop two different versions or architectures? Is power efficiency the limiting factor, and if TDP and power were no concern, how much better could Intel do?

I personally don't care about power on my desktop as long as the performance justifies it. Desktop CPUs should be about performance, not saving power.

How can you be disappointed when Haswell gives you exactly what was promised? It seems like you should have adjusted your expectations. Or you just like to be disappointed.

I'm not surprised by these numbers, they are what was expected. I'm still running an i7-860 @3.8GHz and when I get enough money I'll upgrade to an i7-4770k and hopefully be able to run it at 4.5GHz, give or take some (water cooling setup here). Maybe IVB-E if it tests well and money is not too tight.

Anand really needs a Lynnfield for comparisons, because the i7-9xx was geared towards people running the enthusiast platform, whereas all the other CPUs tested here are geared towards the mainstream high end.

Of course they are comparing to older CPUs, because the article made a point of saying that it does not make a whole lot of sense to upgrade from Ivy Bridge. But still, 5-10% faster than Ivy Bridge is pretty good, I would say, for the extra ~$10-20 difference.

Exactly. As someone who still has my P4 Northwood 3.06GHz (with HT) as a general-use PC, I loved it. It served as my main gaming and photo/video editing PC back in the day, and was only replaced with a C2D E8400 overclock build four and a half years ago (which was replaced two years ago with a SB 2500K build). Anyone who says the P4 was terrible is either an AMD fanboy trolling or never had one at the time.

By any reasonable metric, P4s were pretty bad. Glad you like yours but that's mostly because even back in the P4 days CPUs were already "fast enough" most of the time for most tasks and you probably would have liked a Pentium M or Athlon just as well. P4s started out with very weak performance and were improved a decent amount during the lifetime of the architecture, but they were never spectacular performers vs. the competition and they were always extremely hot and power hungry. Also Rambus memory was a joke.

More on topic, I'm not surprised that Haswell isn't significantly faster than Ivy Bridge. I said when Sandy Bridge came out that the x86 architecture would never get 50% faster per core than Sandy Bridge. With the combination of nearing the end of the road for process shrinking, the architecture itself already having been optimized to such a degree that any additional significant gains come at an extremely high transistor and R&D cost, the declining importance of the x86 market as mobile devices become more prominent, and the "already much more than fast enough" aspect of modern CPUs for the vast majority of what they're used for, it's pretty clear that we'll never see significant increases in x86 speed again. There just isn't enough money available in the market to fund the extremely high costs necessary to significantly increase speed in a market where fast enough was achieved years ago.

I'll stand by my statement of ~2 years ago: x86 will top out at 50% faster than Sandy Bridge per core.

@bji: Totally agree. We are in the halcyon days, and I can't see the likes of the 4770K getting significantly more powerful any time soon. I believe it will take a huge technology breakthrough in terms of fab materials, along the lines of optical or biological chips. At least 10 years away.

The corollary to this is that we don't actually really need any more power. We already have the level of "good enough" for the GPU (in gaming terms). In terms of compute power, that is definitely continuing in the concurrency paradigm - which is where it should be, it makes sense. Programmers (like myself) are proceeding along these lines to get more power.

I think we are at either a pivotal point or a point of divergence again in computer technology. It's very exciting and interesting for me :)

Wait what... I must be an AMD fanboy then (although I love Intel and never owned an AMD >.<, lol)...

Honestly, the P4 platform was terrible in many aspects, and yes, I did own one, several actually (2.266, 2.4, 2.8)... But having a dual Pentium III 1GHz at the time as well made it pretty obvious to me how bad the P4 really was... Granted, all those P4s were at lower clocks than yours...

But nothing is so bad that it isn't good for something; after all, Intel's generations after the P4 have all been pretty amazing...

More on topic, though, I am a bit dismayed and disappointed that the power consumption goes up compared to the last generation under load... Great that the idle power goes down that much, but I would rather see the exact same performance as 3rd gen and a huge power reduction... After all, performance-wise I am still more than satisfied with my i7-970... I don't feel like I need more juice, so I would rather save some bucks on the electric bill... Obviously there will be different minds about that part... Just saying what I feel...

Weird how you keep saying how "bad" it was in its time, yet you present no actual facts to back that up. About the only bad thing I ever saw with the P4 was high temps, which any decent HSF fixed.

It was so bad that Intel had to pay vendors not to buy the competitor's chips, an action that they were later sued for and settled to the tune of $1.25 billion.

The P4 started out very badly; it was very power hungry and had weak performance compared to the competition. Intel was also the only company able to make chip sets for it (can't remember if there were technical or legal reasons behind this or both), and they refused to support any memory but Rambus (for a long time), further hurting their cause by propping up a company that is pretty much the dregs of submarine patent lawsuit filth.

I can't think of any way in which the P4 was better than its competition of the day except that it had Intel's sleazy business practices behind it, if you consider that "better". It certainly played better in the marketplace, ethics notwithstanding.

You may have been happy with your P4 because it did what you needed it to do. Awesome. Nobody is saying that the P4 didn't work or that it couldn't actually fulfill the duties of a CPU, we're just saying that compared to its contemporaries, it kinda blew chunks.

I had two P4 chips (2.4 Northwood and 3.0 Prescott) along with many Athlon XP systems (Palomino, Thoroughbred and Barton), and the Athlons beat the P4s in nearly every metric. Then came the Athlon 64 to solidify AMD's crown. It wasn't until the original Core (Conroe) chips that Intel came screaming back, and they've held the lead since.

"Anyone who says the P4 was terrible is either an AMD fanboy trolling or never had one at the time. "

+5

My Northwood 3GHz was as fast, stable and solid as any CPU I have ever owned. It performed slightly slower than an equivalent A64, but nothing noticeable to the human eye. Maybe these people who bag on it have bionic eyes.

I read with some concern that the TSX instructions aren't going to be available on all SKUs. This is the main thing that I've been looking forward to on Haswell! Not providing the capability across the family is reminiscent of the 486SX/DX debacle. TSX could be huge for game physics as it would allow for far more consistent scaling. I know it is supposed to be backwards compatible, but what's the point of coding to it if it isn't always there?

Agreed, TSX is one of the most interesting parts of Haswell, so I'm sorry not to see it get more discussion. And as you say (and as with VT-d and other tech), I think Intel is being stupid and self-defeating by trying to make it an artificial differentiator. Unlike the general basics of a chip, such as clock rate, cache, Hyper-Threading or raw execution resources, these sorts of features are only as valuable as the software that's coded for them, and nothing kills adoption amongst developers like "well, maybe it'll be there, but maybe not." If they can't depend on it, then it's not worth spending much extra time with, and that tremendously limits what it can be used for. That principle shows up over and over; it's why consoles can typically hold their own for so long. Even though on paper they get creamed, in reality developers are able to aim for 100% usage of all resources because there will never be any question about what is available.

For features like this Intel should aim for as broad adoption as possible, or what's the point? They can differentiate just fine with pure performance, power, and physical properties. Disappointing as always.

Definitely, I was a bit puzzled reading the review to find barely a mention of TSX when I thought it was meant to be one of the ground breaking new features on Haswell. Even if there was only a synthetic benchmark for now it would be extremely interesting to see if it works anything like as well as promised.

TSX is so esoteric in its applicability that I think you'd be very hard pressed to a) find a benchmark that could actually exercise it in a meaningful way and b) have any expectation that this benchmark would translate into any actual perceived performance gain in any application run by 99.999% of users.

In other words - TSX is only going to help performance in some very rare and obscure types of software that "normal" users will never even come close to using, let alone caring about the performance of.

However, I am intrigued by your speculation that TSX will be beneficial for physics simulation, which I guess could translate to perceivable performance increases for software end users actually use, in the form of game physics. I found a paper that described techniques for using transactional memory to improve performance in physics simulation, but it only found a 27% performance increase, which is not exactly earth-shattering (I wouldn't call it "huge for game physics" personally).

One of the main (and already implemented) uses of TSX is hardware lock elision. I'd guess the hypothesis is that physics code takes locks defensively but rarely actually has contention, because the threads are working on different parts of the world. In this scenario, more fine-grained locks on sections of the world would let you scale better, but that is a lot of work, and HLE gives you the same benefit for free.

No. HLE (XACQUIRE and XRELEASE) do nothing by themselves. They reuse REPNE/REPE prefixes and on CPUs that do not support TSX are ignored on instructions that would be valid for XACQUIRE/XRELEASE if TSX were available. It is a backward compatibility method. Since all of those instructions may have a LOCK prefix, without TSX capability, a normal lock is used, NOT the optimistic locking provided by TSX that allows other threads to see the lock as already free.

Without TSX the code is still (software) lock-free, but there is no possibility of multiple threads accessing the same memory simultaneously (as there is with TSX), so one or more threads will see a pipeline stall due to the LOCK prefix.
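The elision idea discussed in this thread can be sketched in a few lines. This is a toy model in pure Python with invented names, not real TSX or HLE: two write sets that arrive "concurrently" commit on an optimistic path when they touch disjoint data, and fall back to serializing under the real lock when they conflict, which is the abort-and-retry behavior the hardware provides.

```python
# Toy model of lock elision -- pure Python, invented names, NOT real TSX/HLE.
# Transactions on disjoint data commit optimistically (the lock is "elided");
# transactions with overlapping write sets "abort" and are serialized under
# the real lock, which is the fallback path HLE always keeps available.

def run_pair(store, tx_a, tx_b):
    """Apply two {key: value} write sets that arrive concurrently.

    Returns the path taken: 'elided' (no conflict, lock never held)
    or 'locked' (conflict detected, both serialized under the real lock)."""
    if set(tx_a) & set(tx_b):        # overlapping write sets -> abort + fallback
        store.update(tx_a)
        store.update(tx_b)           # second tx wins, as with a real lock
        return "locked"
    store.update(tx_a)               # disjoint "parts of the world":
    store.update(tx_b)               # both commit on the optimistic path
    return "elided"

world = {"left": 0, "right": 0}
path1 = run_pair(world, {"left": 1}, {"right": 2})   # disjoint keys
path2 = run_pair(world, {"left": 3}, {"left": 4})    # same key -> conflict
```

The physics-engine hypothesis above maps onto this directly: most frames, threads update disjoint regions of the world, so the coarse lock is elided and they scale as if fine-grained locking had been written by hand.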

I can't imagine that lock elision is that beneficial to very many applications. Lock contention is almost never a significant performance bottleneck; yeah there are poorly designed applications where lock contention can have a more significant effect, but proper multithreaded coding has the contended sections of code reduced to the smallest number of instructions possible, at which point the effects of lock contention are minimized.

In order to take advantage of transactional memory and get the full benefits of TSX you have to write such radically different algorithms that I doubt that it's worth it except in the most unusual and specific cases. OK so you can use TSX instructions to make a hashtable or other container class suffer slightly less from lock contention, but that is oh so very rarely a significant aspect to the performance of any program.

As a programmer, I am pretty sure that the benefits of TSX are limited to a very unusual and uncommon set of problems the performance increase of which will mean very very little to 99.99% of users 99.99% of the time. Also fully transactional memory algorithms require significant rework from their non-transactional counterparts meaning that taking full advantage of TSX takes developer effort which will not be worth it except in very rare circumstances.

The HLE instructions may have some very minor benefit because they can be used with algorithms that don't need to be reworked at all (you just get a little more parallelism for free), but even then you're only avoiding some lock contention; and even if you completely eliminated lock contention from most algorithms, they would only be fractionally faster in real-world usage. Lock contention just isn't that big of a deal in normal circumstances.

Exactly. It is the ubiquity of these features that would make them useful; splitting them across segments defeats the adoption and use of said features. Intel are pushing segmentation too hard (too greedily?).

loved the q6600 though, since i still have one. and the 8350, since i have my eye on it.

be interesting if this pushed 8350 prices down enough to be more attractive (it's currently only 180 on newegg). if not i'll probably go with i5 4670 (even though i'm getting tired of these faux msrp's, bet money that chip will be 229 on newegg forget 213)

Similar boat, but I told myself I wouldn't wait for Intel's E platform anymore. X58 may be the last great E platform, mainly because it actually preceded the rest of the mainstream performance parts. Intel seems to sit on these E platforms for so long now that they almost become irrelevant by the time they launch.

Because for 99.9% of the population what's out there today is more than fast enough. Hell, the Core2Duo Conroe/Penryn processors are fast enough for most people today. I'm still using one in fact.

On the mobile side, however, we have tons of applications that could use more power. My Galaxy S3 takes a little while to load up some games, and while the data may have been downloaded to the phone through wifi, it still isn't on my screen yet.

I think it's pretty obvious why mobile land has to progress so fast while desktop processors focus on power consumption: the AVERAGE consumer (not techies) would prefer smaller PCs, and pushing more power-efficient processors into smaller and smaller things like the Intel NUC is what the consumer desires.

Mobile is having the same performance renaissance that desktop chips had from 2004-2006, when we went from a hot, bloated Pentium 4 to a cool, efficient Core 2 Duo. And certainly we've had performance gains since then, but eventually the gains won't come so easily. You can start to see that a bit now with how the Exynos 5250 in the Nexus 10 is thermally throttled to 4W, such that the CPU and GPU can't both run full tilt at the same time.

You're disappointed because your understanding of physics and Moore's Law is poorly developed. The scenario you've provided is a blatant false equivalency.

According to your desperate desires, the roughly 4GHz processors that launched with Sandy Bridge should be running at twice the clock speed today.

When you understand that leakage power grows exponentially as transistor geometries shrink, and that power consumption raises exponentially as clock speed rises, you will realize that even the 10% gains that Haswell makes here are a big deal.

Homeles, I really appreciate your well-put comment. I'm taking a business degree with an accounting major, but I've always loved building PCs as a hobby. When some of my computer science/engineering friends try to show me the stuff they are learning, I am baffled, as it's not my area of expertise. I can only imagine how challenging it is to combat the shrinking processes and make performance gains, as you said. I have deep respect for Intel and AMD, always trying to utilize their research and engineers to make any gains for society. These forum people are just so ignorant sometimes, and it baffles me.

Hey, similar path as me. :) Don't worry about lack of understanding now, stick to it, keep reading great technical sites like AT, keep an open mind, and you'll get a really good grip on the industry, especially if you are an actual user/enthusiast of the products.

The other big problem in the CPU space, besides power consumption and frequency, is the fact that Intel long ago stopped spending the extra transistor budget from a new process node on the actual CPU portion of the die. Most of the increased transistor budget afforded by a new process goes straight to the GPU. We will probably not see a stop to this until Intel reaches discrete-performance equivalency.

Not per core, these parts are still 4C 8MB, same as my Nehalem-based i7. Some of the SB-E boards have more cache per core, 4C 10MB on the 3820, 6C 15MB on the 3960/3970, but the extra bit results in a negligible difference over the 2MB per core on the 3930K.

I'm merely pointing out that, in the past 2½ years we've barely seen any performance improvements in the 250-300$ market from Intel. And that is in stark contrast to the developments in mobileland. They too, are bound by the constraints you mention.

And please, stop the pompous know-it-all attitude. For the record, power consumption actually rises *linearly* with clock speed and *quadratically* with voltage. If your understanding of Joule's law and Ohm's law were better developed, you would know that.
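A few lines make this point concrete. This is just a sketch of the standard dynamic-power relation P = C·V²·f in normalized units with made-up constants; the practical catch is that higher clocks usually demand higher voltage too, which is what compounds the cost of frequency in real chips.

```python
# Dynamic CMOS power: P = C * V^2 * f -- linear in frequency, quadratic in
# voltage.  Normalized units, illustrative constants; real chips also add
# leakage on top of this dynamic term.

def dynamic_power(c, v, f):
    return c * v ** 2 * f

base       = dynamic_power(1.0, 1.0, 1.0)
freq_only  = dynamic_power(1.0, 1.0, 1.1)   # +10% clock at the same voltage
freq_and_v = dynamic_power(1.0, 1.1, 1.1)   # +10% clock needing +10% voltage
```

A 10% clock bump at constant voltage costs 10% more dynamic power, but the same bump with a matching 10% voltage increase costs about 33% more, which is why overclocking headroom evaporates so quickly once voltage has to rise.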

Exactly. And it won't change until we see optical/biological chips or some other such future-tech breakthrough. As it is, electrons are starting to behave in light/waveform fashion at higher frequencies, if I remember correctly from my semiconductor classes (of some years ago, I might add).

Yes, but we will first see hybrid approaches. Intel, IBM, and others have been working on them and are getting close. Sure, optical interconnects have been available for some time, but not as an integrated on-chip feature which is now being called "silicon photonics". Many of the components are already there; micro-scale lenses, waveguides, and other optical components, avalanche photodiode detectors able to detect a very tiny photon flux, etc. All of those can be crafted with existing CMOS processes. The missing link is a cheaply made micro-scale laser.

Think about it. An on-chip optical transceiver at THz frequencies allows optical chip-to-chip data transfer at on-chip electronic bus speeds, or faster. There is no need for L2 or L3 cache. Multiple small dies can be linked together to form a larger virtual die, increasing productivity and reducing cost. What if you could replace a 256 trace memory bus on a GPU with a single optical signal? There are huge implications both for performance and power use, even long before there are photonic transistors. Don't know about biological, but optical integration could make a difference in the not-so-far-off future.

Me too. I have a 2500k @ 4.3Ghz @ 1.28v and I am starting to wonder if even the next tick/tock will tempt me to upgrade.

Maybe if they start doing a K chip with no onboard GPU and use the extra silicon for extra cores? Even then the cores aren't currently used well @ 4. But maybe concurrency adoption will increase as time goes by.

I'd consider it but the IPC gains alone since Nehalem make the upgrade worthwhile, and the rest of the X58 specs are already sagging badly. It still does OK with PCIE bandwidth due to the 40 total PCIE 2.0 lanes, but 20 PCIE 3.0 lanes is equivalent with PCIE 3.0 cards. The main benefit however is running SATA 6G for my SSDs and gaining some USB 3.0 ports for enclosures, etc. They have been bottlenecked for too long on SATA 3G and USB 2.0.

I would consider waiting for IVB-E or Haswell-E but Intel always drags their feet and ultimately, these solutions still end up feeling like a half step back from the leading edge mainstream performance parts.

Intel's artificial segmentation of some features is more irritating than that of others, but not including VT-d everywhere really, really sucks. An IOMMU isn't just helpful for virtualization (and virtualization isn't just a "business" feature either); it's critical from a security standpoint if an OS is to prevent DMA attacks via connected devices (and it helps increase stability too). It should be standard, not a segmentation feature.

I'm curious about something. If you say Intel got rid of legacy PCI support in the 8 series chipsets, why am I still seeing PCI slots on these new motherboards being released? Are they using third party controllers for PCI?

Haswell is moving voltage regulators that were previously on the motherboard onto the die, so power consumption hasn't really changed; it's just that the CPU cooling system has to deal with that extra heat now. Remember that those power ratings are NOT about how much power the chip uses, but how much cooling is needed.

System power consumption with Haswell is, in fact, higher. Take a look at page 2.

Still, when you're running at these kind of frequencies, 10% more performance for 10% more power is a big deal. If you were to hold back the performance gains to 0%, power savings would be greater than 25%.

The only reason Piledriver was able to avoid this was because it was improving on something that was already so broken. AMD's not immune to the laws of physics -- when they catch up to Intel, they will hit the same wall.

Great review, but I have a question about this rather cryptic comment for bclk overclocking:

"All CPUs are frequency locked, however K-series parts ship fully unlocked. A new addition is the ability to adjust BCLK to one of three pre-defined straps (100/125/167MHz). The BCLK adjustment gives you a little more flexibility when overclocking, but you still need a K-SKU to take advantage of the options."

Does that mean you cannot do bclk overclocking on the non-K series parts? For example, are you saying that a 4770 (non-K) part cannot be used with a bclk overclock? Or are you just saying that the K-series parts give you all the options including unlocked multipliers? Can you clarify this?

Now that AMD has mostly fallen back to the mid-range and low-end, this is a similar situation to where the new Geforces landed.

You get a bit more performance for about the same money. For the GPU side, the benefit was mostly in superior cooling solutions all (supposedly) having to be equivalent to the excellent Titan Blower. For the CPU side, the benefit is that we have lower idles. These chips stay in idle a lot, so it's a gain, but this isn't a chip that's going to light the hobbyist world on fire.

Just like with the GF770, you get more performance and a few fringe benefits (that should have been there all along, i.e., 6 SATA3 connections) for the same price as you would have paid for the equivalent part last week.

I don't see much here to make me want to upgrade from my IVB 3570K, though. I'm leaning toward picking up a used GF670 and SLI'ing now, given all the givens.

The truly disappointing part of all this is if this really is the last new desktop release for two years. Imagine me going 3+ years before I even FEEL an itch to upgrade my processor. I sincerely pray that AMD gets its act together and puts some competitive pressure on Intel at the mid-to-high end (i.e., 2500K, 3570K) with a truly great CPU. I live in hope that the 8350's successor (based on Steamroller?) will be that part, but AMD needs to update their chipsets big time.

Until then, I think all we can expect from Intel and nVidia is more of the same, which is the worst part of both the 700 series and Haswell. Neither felt compelled to do more than offer minor improvements in performance because neither is feeling any competitive pressure of any kind.

That's why Intel IS pushing the power argument and fighting that fight hard. Because ARM *is* applying competitive pressure.Reply

Even without competition, Intel is still driven by economics to keep pushing transistor sizes and die sizes smaller and smaller -- it still lowers their costs and they make more money. This also means chips keep getting faster and requiring less and less power. What competition does, besides lowering prices, is drive architectural changes that add more die area (like an integrated GPU and the FIVR).Reply

Keep in mind that an increasing percentage of desktop/laptop PCs are now in the business world (since light-use consumers have often moved towards tablets and smartphones). If you're doing office work, then lower power use on idle/light load is a big deal. Office PCs almost never run balls-to-the-wall. In fact, usually the only time the CPU even comes close to being completely pegged is when the mandatory virus scan runs (and even then, it's often HDD-bound).Reply

I'm still on a Lynnfield 750 (as are a few other commenters, I note) and this system is now 3.5 years old without me having had the itch to upgrade or even overclock the CPU. I have been eyeing Haswell because I know I will be making a fresh build at the end of the year, but that's due to circumstance and not need. 30% clock increase in four years is nothing like the old days... but frankly it's nice to be able to keep up in everything just by swapping out video cards.Reply

I doubt we will see large increases in the future. We need new algorithms (current ones are at their limit). Why? Because major performance increases would require a significant increase in complexity, and GPUs showed what that causes. And AMD won't, and cannot, change that.Reply

Haswell met expectations in terms of IPC increases and power reductions. Both of those are good things overall. However, I feel disappointed, and that comes down to how Intel has segmented their product lineup: GT3e and TSX are only available on select parts. Ideally, on the high end I'd like to get a socketed chip with an unlocked multiplier, GT3e, TSX, and Hyper-Threading. Of those five criteria, at best I can get three. I suspect that this is due to Intel keeping several possible configurations reserved for their Xeon lineup, but those chips won't have an unlocked multiplier.

I'm currently an owner of a Sandy Bridge i7-2600K, and the performance of the Haswell parts isn't tempting enough to make the jump given the configurations Intel is selling. So I'm left waiting another year for a future desktop refresh. Oh wait, Broadwell is going to be strictly a mobile refresh (and possibly a desktop BGA refresh). So the best upgrade path for me for the next couple of years is to wait for a cheap i7-3770K on clearance; otherwise the price is radically out of proportion to the performance gains (I would also need a new motherboard for socket 1150). I guess I'm left waiting for Skylake, hoping Intel adds several SKUs that I want, or seeing what AMD can produce for the desktop.Reply

Intel is likely keeping TSX away from any desktop part with eDRAM. I suspect that having a massive L4 cache plus TSX may make these quad-core chips very competitive with some of their socket 2011 parts based upon Sandy Bridge/Ivy Bridge cores. That would happen in heavily memory-bound applications, as some database operations tend to be. Intel hasn't updated ARK with all of the Haswell chips yet, so I'll be curious to see if there will be a mobile part with GT3e + TSX. I'd love it if some enterprise DBs supporting TSX were tested on this platform to see if this idea pans out.Reply

The i7-4950HQ was likely seeing a strong benefit from an open-air test bed (it is a mobile part), so I suspect it can reach its 3.4 GHz four-core turbo relatively often. I would still expect slightly better performance if the 2.4 GHz base clock were raised. The i7-4950HQ is further handicapped in that its L3 cache was cut down to 6 MB and it was using SO-DIMM memory with loose timings. The kicker is that this is with legacy code, without any AVX2, FMA, or TSX benefits.

I really, really want an unlocked, non-crippled socket 1150 part with GT3e.Reply

Well, you're already on an i7, and tbh I'd say the 2600K is better than the 3770K. I purchased a 2700K even though the 3770K was out. Why? Lower heat, and only minuscule gains in the 3xxx line. Plus, when I decide to OC, I'll likely get more out of my chip (as will you) than they get out of the Ivy Bridge processors.Reply

I currently have my i7 2600K running at a conservative 4.2 GHz. I've gotten it to boot at 4.6 GHz with ease and probably could go further if I felt like increasing voltages and got better cooling (though it is in a 4U server case, so cooling options are a bit more limited).

Price drops are happening for the i7 3770K. Probably need to wait a bit more and might pick one up if they drop further. Microcenter has dropped them to $230 already.Reply

Nice review Anand, it's pretty much what I expected from Haswell. 5-15% over IVB with all the bells and whistles of Lynx Point Z87 (6xSATA6G, more USB 3.0 etc.) This will make a nice upgrade for me coming from an OC'd i7-920 and X58 platform, now to see what deals MicroCenter has on the 4770K.

I would have liked to have seen normalized clockspeed comparisons in the 5-gen Intel round-up but understand this does not reflect real-world results, given SB and above have much better turbo boost and base clocks. I think it would've given a better idea of IPC however, for those who have been overclocking their older platforms to similar max OC levels.

I also would have liked to have seen more gaming and OC'ing tests but understand this first review needed to cover most of the bases for a general audience, look forward to more testing in the future along with some looks at the Z87 chipset nuances.Reply

You're right in that particular test. +12% power for +13% performance. Still disappointing. Most of the other benchmarks are showing less than 10% improvement, but we don't know the power story. Overall disappointing. With all the talk about power efficiency, I was hoping for +5-10% performance at the same or lower power consumption. All the power benefits seem to be at idle.Reply

I'm glad I went ahead and built my 3770 system a few months ago instead of holding out for Haswell. Nothing about Haswell was worth waiting for (for my needs). Damn...based on this and Intel's roadmaps I may be on IVB for a looooong time.Reply

I'm just wondering what the rationalisation for using a Core 2 Duo for comparison benching is? Surely a Core 2 Quad (eg Q6600) would be a more accurate representation seeing as all the other parts in the benchmark are quad core.Reply

You'd think Anand would have covered something as important as this. I did not see this in _any_ of the reviews. Also, the wording on the BCLK overclocking is a little odd. So, bottom line: can we OC the 4770 using BCLK or not?Reply

The actual BCLK changes will be pretty much in line with what you'd be able to do on Z68 or Z77, about 110 MHz max.

Socket 1150 and Z87 add another bus multiplier feeding the CPU, like the socket 2011 parts have. So you can have a 100 MHz clock feeding the PCIe controller while, with a 1.25x strap, a 125 MHz clock feeds the CPU cores before the CPU multiplier. Increasing the BCLK to 108 MHz on the 1.25x strap would equate to a 135 MHz clock before the CPU multiplier is applied.Reply
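The strap arithmetic described above is straightforward to sketch. The strap values come from the review's quoted 100/125/167 MHz options; the 35x multiplier in the usage example is just an illustrative figure:

```python
# How the Haswell BCLK straps compose with the base clock and CPU multiplier.
# The PCIe/DMI domain stays at the raw BCLK; only the CPU-facing clock is
# multiplied by the strap before the per-core multiplier applies.

STRAPS = (1.0, 1.25, 1.67)  # yields the 100/125/167 MHz pre-defined options

def cpu_base_clock(bclk_mhz: float, strap: float) -> float:
    """Clock fed to the CPU cores before the CPU multiplier is applied."""
    return bclk_mhz * strap

def core_frequency(bclk_mhz: float, strap: float, multiplier: int) -> float:
    """Final core frequency in MHz."""
    return cpu_base_clock(bclk_mhz, strap) * multiplier

# The 108 MHz example from the comment, on the 1.25x strap:
print(cpu_base_clock(108, 1.25))      # 135.0 MHz before the CPU multiplier
print(core_frequency(108, 1.25, 35))  # 4725.0 MHz with a hypothetical 35x multi
```

This also shows why the straps matter: a ~10% BCLK push alone tops out around 110 MHz, but moving to a higher strap shifts the whole CPU clock domain up without overclocking the PCIe bus.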

I notice the gains over Nehalem are pretty much the same that you can *easily* get (just increase VCore and BCLK) out of overclocking Nehalem. So now they're neck and neck if you don't overclock the Haswell part. Maybe my system will be good for another year now! (i7-920 @ 4.0).Reply

I've been sitting on a first-gen Core i7 and this is still a little disappointing. It is significantly quicker than mine, but I'm not sure the upgrade is quite worth it for me yet, as everything I use runs reasonably well. I am interested in the new socket type, though. I wonder if upgrading the motherboard with a decent Haswell would benefit me later on with Skylake. It's hard to say; I haven't seen any firm specs on the socket type for the Skylake chips.Reply

At this point I think there are more interesting places to put your tech dollars. A high-DPI display or triple monitors if you're into gaming, more SSDs if you need the space, a NAS or a nice tablet, etc. I'm in your camp with an i7-870 and nothing I do pushes it enough to test my patience.Reply

Agreed -- unless you're loaded, there are far more interesting ways to spend your money in the PC arena than on the overhyped and underwhelming Haswell SKUs. Unless you're still stuck on Phenom II or Core2Duo.Reply

Hey - I'm still "stuck" on a Phenom II (in my desktop; laptop is 15 inch rMBP with Ivy Bridge) and it's not that bad. It's significantly slower than the Bridges in single threaded apps but even so it's fast enough that I never notice, much like jwcalla never notices with his i7-870. And the Phenom II x6 has 6 real cores that do decently well in heavily multithreaded apps, really extensive compiles being the only thing that I ever do that would benefit at all from extra speed. 6 real cores holding their own reasonably well there. Even if an Ivy Bridge were 50% faster per core on the compiles (and it probably is), the x6 has 50% more cores so it kind of evens out ...Reply

Anyone know if x264 is properly optimized for AVX2? Naively, one might expect up to a 2x speedup for SIMD-integer-dominated code over the AVX1-only 2700K, provided the rest of the system is able to shuffle the data around. http://com3.tv/wp-content/uploads/2012/09/Haswell-...

I am very curious as to the real-world performance gains of integer (and floating-point FMA) SIMD code in Haswell compared to previous generations.Reply
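The "naive 2x" expectation above rarely materializes, because only part of an encoder's runtime sits in the widened SIMD kernels. A quick Amdahl's-law sketch makes this concrete; the 60% vectorized fraction below is a made-up illustrative number, not an x264 measurement:

```python
# Back-of-the-envelope Amdahl's-law estimate of what doubling integer SIMD
# width (128-bit SSE/AVX -> 256-bit AVX2) could buy an encoder overall.
# The parallel_fraction used below is illustrative, not profiled x264 data.

def amdahl_speedup(parallel_fraction: float, section_speedup: float) -> float:
    """Overall speedup when only a fraction of the runtime gets faster."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / section_speedup)

# Even if 60% of runtime is in integer SIMD kernels that get a full 2x:
print(round(amdahl_speedup(0.6, 2.0), 2))  # 1.43 -- well short of the naive 2x
```

So real-world AVX2 gains depend as much on how much of the code path is vectorized (and memory-bound behavior) as on the raw register width.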

I am actually more interested in the new 8 series chipsets than in Haswell. I have a Sandy Bridge work desktop and an Ivy Bridge gaming desktop, and would swap my Sandy Bridge motherboard if the chipset were backwards compatible with SB CPUs.Reply

"As we’ve seen in the past, the K-series parts (and now the R-series as well) omit support for vPro, TXT, VT-d, and SIPP from the list."

As this is the official Haswell review, and since TSX is not included in the K-series parts, I believe this is a huge omission from the review, especially since transactional memory is a revolutionary technology with a lot of potential. I find the lack of mention misleading, and it should be corrected as soon as possible.Reply

Thanks for letting us know this. Checking the specifications on ark.intel.com, I find:

i5-4430: no TSX
i5-4570: has TSX
i5-4670: has TSX
i5-4670K: no TSX
i7-4770: has TSX
i7-4770K: no TSX

So TSX is missing from half of the current Haswell models (ignoring the low power S and T chips). So far we only have quad core chips; I expect that TSX will be missing from most of the dual core chips. It doesn't seem that TSX is intended to be used by developers writing general purpose code.Reply

Buying a 4770K means I'm wasting 33% of the die on a GPU I don't want, and because it's a "K" chip it will also lose VT-d capability. Six-core "K" chips retain VT-d support, but buying SB-E seems silly given Haswell offers much better IPC and single-threaded performance, and later even against IB-E it would feel like paying more for less.

Intel has taken a perpetual 2-year break between updating its high end, so if IB-E launches Q4'13 then it seems Q4'15 would be the earliest Haswell-E will show up?Reply

Haswell GT3e graphics come at what price? Certainly a lot more than Haswell GT2 or Richland A10 graphics. And Intel is right: Haswell GT3e graphics beat old, entry-level discrete graphics cards. BUT Haswell does not do DUAL GRAPHICS! Pair a Haswell with a dGPU and you get only the graphics horsepower of the dGPU; in short, you waste all that die area Intel devoted to GT3e and eDRAM. On the other hand, pair an AMD Richland A10 with a dGPU and both GPUs work in tandem to give performance scores well beyond Haswell GT3e, and dare I say for a lot less money. That's true even with the cheapest entry-level Radeon GPU. So Intel has created a monster graphics engine and put memory on the die, but to claim it's free or the best solution is just being a fanboy for Intel.Reply

This, my friends, is a joke: the mighty Haswell K models will not support VT-x. LOL, this is going to be the first nail in Intel's coffin in favor of AMD, and I bet AMD is going to take advantage of it (they already take advantage of Intel's "K" marketing strategy -- why buy an incredibly expensive Intel "K" model without VT-x support when you can buy an AMD chip that is both unlocked and has VT-x support?). The joke is that a 5-year-old Core 2 Duo has more to offer than the latest and greatest Intel parts (it supports VT-x). I am running a 4300 MHz Wolfdale (Penryn) and another 3000 MHz Conroe, and nothing I see here from Haswell (in part due to the lack of VT-x support) is of significant enough value over my current hardware. I would have considered it if I had a Pentium 4, but even then I would rather go to the AMD side, since they present no obstacles. And I still have many functional PCI cards that I am not about to throw away just because Intel decided not to support them (yet another nail in Intel's coffin); in my opinion, hardware has to comply with the user's wishes, not the other way around.Reply

Your comment doesn't make sense. AMD chips obviously don't have VT-x support; they have a competing standard, AMD-V. The first generation of virtualization extensions didn't make much of a dent in perf. The larger gains came as a result of SLAT which both AMD and Intel added around the Nehalem timeframe.Reply

Based on the published Haswell 4C GT2 die shot I believe that your estimates for the graphics area are quite high. It's relatively simple to derive the graphics area on 4C GT2 now that we know the total die size - should be somewhere in the vicinity of 58mm^2. Double that and you get 116mm^2 for GT3.

As for the remaining 29mm^2 delta between 4C GT2 and 4C GT3 die sizes... I'd chalk that up to both inefficiencies due to going with a more square die instead of the long and skinny that's been with us since SNB and the extra logic/IOs necessary for the eDRAM L4.

Regardless, there's no question that the 174mm^2 figure for GT3 is incorrect as the 4 cores, associated L3, and system agent on the 4C GT2 die take up approximately 119mm^2, and adding 174mm^2 to that would yield a 293mm^2 die size.Reply
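The die-area argument above is just arithmetic, so it's easy to check. All figures are the comment's own estimates in mm², not official Intel numbers:

```python
# Reproducing the die-area sanity check from the comment above.
# Every figure here is the commenter's estimate (mm^2), not Intel data.

GT2_GRAPHICS = 58                  # estimated GT2 graphics slice on the 4C die
CORES_L3_SA  = 119                 # 4 cores + L3 + system agent
GT3_GRAPHICS = 2 * GT2_GRAPHICS    # GT3 doubles the GT2 graphics -> 116
EXTRA_DELTA  = 29                  # squarer floorplan + eDRAM L4 logic/IOs

die_4c_gt2 = CORES_L3_SA + GT2_GRAPHICS                 # ~177 mm^2
die_4c_gt3 = CORES_L3_SA + GT3_GRAPHICS + EXTRA_DELTA   # ~264 mm^2

# The disputed 174 mm^2 GT3 figure would imply an implausibly large die:
implied_die = CORES_L3_SA + 174                          # 293 mm^2
print(die_4c_gt2, die_4c_gt3, implied_die)
```

The 293 mm² result is the reductio ad absurdum: accepting 174 mm² for GT3 graphics alone forces a total die size far beyond what the published die shots support.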

It doesn't take a genius to figure out what's going on here. Competition spurs innovation. There is no competition for mid-to-high-end desktop processors, or really even laptop processors. AMD only keeps up at the mid range because they have to set very high clocks on their inferior architecture.

Where the competition is, is mobile: smartphones and tablets. By moving Haswell down this generation, Intel is getting closer to being a true competitor in that space. This year we should see a power and battery life combination that ARM cannot reach.

There are also huge gains for gaming and gaming laptops. A 14-15" mid-ranger costing $500 with Iris graphics should be able to run most games decently now without all-minimum settings. Especially given that the PS4 and XBO are running x86 with low-to-mid-range graphics, I can easily see Haswell notebooks keeping up, especially Iris Pro laptops, which I would assume can come in under $1000 and in relatively slim form factors.Reply

I think they should benchmark this new CPU with 2-3 GPUs. I don't think one video card is enough to really test the strength of new CPUs, which is why the difference between Ivy Bridge and Haswell is so small. Give them 2-3 GPUs to work with and see if they can really step up.Reply

YAWN... with no competition from AMD, Haswell offers none to just a few % improvement over the previous generation (depending on who's benchmarks you believe), higher power consumption, and fewer features (ie. no virtualization extensions on the higher end models) all at a higher price that requires one to purchase a new motherboard.... No Thanks!Reply

That was covered in a previous article; it was established that the 3770K was the go-to CPU for multiple-video-card systems (although I'd say this one would do in a pinch, as would the previous 2600K/2700K).Reply


So, can someone catch me up with why Haswell isn't being compared to the older Core i7-39xx Sandy Bridge chips with 6 cores (Sandy Bridge E series)? Is it because they are based on the Xeon architecture and thus are not directly comparable? Will we see an Haswell-E (or Ivy Bridge E) series-based Core i7 with more than 4 cores as a follow-up to the Core i7-39xx?Reply

Probably because it'd be embarrassed in some cases. For lightly threaded workloads, the i7-4770K would come out on top; the six-core i7-39xx chips need heavily threaded applications to really shine. Also of note is that GT3e versions of Haswell have 128 MB of L4 cache, which further improves effective IPC. A hypothetical 3.5 GHz fully enabled Haswell with GT3e and recompiled software will likely outrun an 8-core socket 2011 Xeon.Reply

Dangit Anand! Why are you still using 2-pass encoding? Everyone and their dog have switched to the faster and more effective CRF. It may have some obscure use as a synthetic benchmark, but it's certainly not a real world one! Reply

Other reviewers are getting the more standard 10-15% difference on average across the board typically seen with Intel and a new gen. introduction over their previous gen. If it's not being seen here, maybe there are benches not included that should be.Reply

The CPU does not control the FPS you're talking about. Beyond a certain point of CPU power (long ago passed) FPS depends on the external graphics card. A Haswell i7-4770K CPU will, according to the Anandtech benchmarks, be about twice as powerful as your QX9650. At 1/3 the cost, and far less idle power. That's not "pathetic".

You may very well not want to upgrade, because you won't see a difference in your games - but that isn't Intel's fault. In fact it's a problem for them and the desktop market in general. Gains in CPU power are becoming irrelevant. As are PC games, BTW, which is why graphics cards are improving so slowly now. In two more years... it will be the same story, except more so.Reply

What is Intel's response to the overheating/throttling issues with the stock cooler on Haswell (not OC'd)? Can you investigate this issue more (as you usually do)? It seems there are some manufacturing quality issues with the heat spreader and thermal paste.Reply

I have just assembled an i5-4670K on an Asus Gryphon Z87 mATX mobo, with 8GB of Corsair Vengeance RAM (1333MHz) and a GTX 650 Ti graphics board. This is with a 250GB Samsung SSD and an old 500GB mechanical drive. The PSU is a Corsair CX600 80-Plus.

==> At stock clocks, idling on the Windows desktop, this PC pulls only 39W from the wall!

I am amazed. Idle power is important to me since the box is in a poorly-ventilated room. Upping the AC to improve that area would freeze the rest of the house AND cost a bundle. The time periods spent at full load are negligible in comparison. But the performance is there when needed... this CPU is nearly twice as powerful as my Yorkfield build, which idles at 170W, with HD 5770 gfx and an SSD.

So I'm very pleased with Haswell on the desktop. And yes, I could have bought a lower level chip and mobo for this use, saving money and a few more Watts, but I wanted the good stuff and the PC may someday be moved to where an OC would be practical.Reply

Well, I can certainly wait for AMD to become truly competitive again on the back of Sony's and Microsoft's budgets. Let's face it, multi-year contracts for console APUs should boost them economically in a massive way.Reply

VERY disappointing CPU. If you disregard ALL of the fake synthetic benchmarks and use only the real-world ones, the gains over the 3770K are 0.03% at minimum, 8.8% at max, and about 3% on average. 3%? Not worth it. Motherboards? Z87 was supposed to drop PCI, but all the motherboards still have it, and there's nothing outrageous otherwise. So basically: lower power consumption, better onboard video performance, and no discernible improvement over the 3770K.Reply

Face it: this is a consumer's market, not a true image of technology evolution. Low power with high temps? It just doesn't ring true, unless the chip has been purposely 'restricted' with only one cause in mind: sales, and protection of future sales. In layman's terms, greedy bastards more concerned with markets and money than with the desktop guy who's impressed by performance. Blame Mac, Windows 8. And your tiny fucking phone.Reply

This test is a little biased against the Core 2 Duo, I think. What they have done is essentially pit the very top end of Intel's last four generations against a mid-to-low-end Core 2 Duo. It would be fairer if that generation were represented by a QX9770 or even a Q9650.Reply

I have been involved in computer hardware, software, and programming since 1986. There has never been a more reliable processor than Intel's, at any time. The current ones still have to be tested over time, but I would not buy a non-Intel one. If you put enough case fans with the inferior processors, they might last 2 or 3 years. With an Intel processor, even overclocked with no case fans, provided you used the Intel chipset, nearly 100% of them kept running for 5 or more years. Without overclocking, I still don't know when they will die, since none have to my knowledge. I have built at least 100 computers and have only built a couple of non-Intel ones. It was obvious right away how inferior they were to Intel. Also, I love my Haswell i7 on an Asus Z87-Pro motherboard. There is no need to overclock: the processor and memory score 7.8 out of 7.9 on the Windows scale, and the onboard graphics scores 6.8. Not bad ratings. Also, the CPU only uses 7 to 8 watts of power most of the time.Reply

I'm quite happy with my undervolted Athlon 620 and Radeon 4200 graphics. More computing power than I need 99% of the time, estimated tdp of about 55-60 watts (1% of use) and system idle of about 40w (80+% of use). And if I ever play games it takes a few seconds to bung in a suitable graphics card. I doubt it will go belly-up in the foreseeable future, but if it does it only cost £50 second-hand about 3 years ago and would be even less to buy now.Reply

I am planning to get a new i5-4690 system for 1080p transcoding. I am not interested in GPU-assisted transcoding; my focus is on QuickSync-assisted transcoding. It is said that the latest QuickSync is almost comparable to CPU-based transcoding.

It is annoying to see all sites (including AnandTech) compare only obsolete systems with vague specs. Then they omit any screenshots. Some are so weird they put the video on YouTube! How can one compare quality after YouTube has transcoded it again?

So far I am looking for a scientific, specific comparison with:

1) An i5-4690, or at least a 4570, tested CPU-only and with QuickSync

2) Screenshots in PNG

3) The exact settings used: what CRF? What speed preset -- 'slow' or 'medium'? What profile -- high or main? The 'faster' or 'veryfast' speed presets are worthless.

4) The source format: 1080i or 1080p? How long? What was the target size?

5) The QuickSync performance settings used. I heard that Haswell supports 7 performance-vs-quality settings; I also never found out whether QS supports any other parameters.

Merely mentioning FPS or time won't help.

I request that you please provide us with such a comprehensive comparison, which will help many users like me and settle all doubts for good. Thanks so much in advance.Reply

They dropped the i5 750 and i7 920 from the gaming benchmarks because both processors would put up decent FPS, which would take away most of the limelight from Haswell. A 4.0 GHz-clocked i5 750 or i7 920 is still capable of keeping modern GPUs running at 100%. I would be more than happy if someone could prove me wrong.Reply