All the AMD fanboys will be shocked! Chuckula is calling Haswell a disappointment! Well, don't get too excited... I am going to be ordering one as my new main system and I think it is money well spent. I'm not saying that Haswell is slower than Ivy Bridge, and while I'm not expecting overclocking miracles, I think it should do better than Ivy did (which isn't saying too much). At the same time, anybody looking for the "glory days" of jumping from a Pentium 4 to a Core 2 or a Phenom, well, those days are long gone. Also, when I say "disappointment" I mean it from the context of the TR readership, who really like to build their own desktop systems and often shoot for big overclocks and high performance. In other areas Haswell is actually very impressive. So without further ado, my reasons:

Intel has abandoned desktop CPU design! Yes, I said that even though I'm eyeing desktop Z87 chipset boards right now. It's not that Intel has stopped making CPUs that fit into desktop motherboards, it's that the desktop is secondary and has been secondary for quite some time. For example, I could technically take the Samsung Exynos chips, add in extra chips to implement all the peripheral I/O I need like PCIe, and slap it onto a desktop motherboard too, but nobody would claim that those chips are designed as desktop chips either.

In the old days, Intel designed chips for the desktop, then neutered them until they could sort of be used in notebooks. That began to change with the Core & Core 2, which were actually derived from the older Pentium M Banias/Dothan cores due to the train wreck that was the Pentium 4. It continued with Sandy & Ivy and has been taken to an extreme with Haswell. Sandy & Ivy took power consumption seriously, but targeted "fat" notebooks at about 35 watts while being cut down for Ultrabooks as needed. Haswell appears to *really* be targeted at the 15-25 watt power range with more cut-down models going for tablets... and expect Broadwell to be retargeted at even lower TDPs, so that high-end tablets will run quite nicely with Broadwell chips at sub-10 watt TDPs and average active power usage under 5 watts.

You'll note that I skipped over Nehalem in the discussion above. I would go out on a limb and say that the last real desktop platform and chips that Intel designed were the original Nehalems with the X58 chipset. They were clearly designed for high-end performance and features, and while they obviously weren't designed to be power hogs, power efficiency was measured on a performance-per-watt basis rather than an absolute power consumption basis. Oh, and as for Westmere and Sandy-E, those chips come from the opposite direction: instead of being scaled-up desktop chips, they are cut-down server chips. Unfortunately, while they have excellent multi-threaded performance, their single-thread performance is no better than the overclocked mobile parts, their prices are high, and their platforms lag behind because servers don't really care about a lot of the features that performance desktop users want.

So what does that really mean? It means that when you buy a Haswell desktop part, you are basically buying a factory overclocked mobile chip. Don't get me wrong, you can get some very nice performance out of that chip, but at the end of the day it is still a mobile chip that is being re-purposed for a different platform. You will lose something in the translation and that is why there is so much disappointment over Haswell. Is it faster than Ivy? Sure, but not by enough to excite the desktop enthusiasts.

EDIT: And some more points

Intel is playing catch-up in areas other than CPU power. Part 1: The ARM threat. Intel faces threats on two fronts. The first (and by far the biggest) threat is from the ARM licensees who are dominating mobile and are looking to jump out of mobile. Obviously Atom is Intel's primary competitor against ARM, but Haswell plays a very important role too. The lowest-power Haswell parts will have active power envelopes that are just a tad higher than Cortex-A15 parts (and likely idle power envelopes that are just as good as the ARM parts). Haswell handles higher-end tablets and convertible notebooks and raises an effective barrier to ARM creeping up any higher. Of course, the upshot of obsessing over power consumption is that Haswell isn't designed with extremely high clockspeeds in mind, and it ain't getting more than 4 cores.

Intel is playing catch-up in areas other than CPU power. Part 2: The IGP problem. Intel's own marketing for Haswell has placed a lot more emphasis on the IGP. The highest-end derivatives of Haswell will have some pretty nice IGPs, but then again, on the desktop we really just don't care. AMD owns this space right now, especially on the desktop, where Trinity can run its IGP at full speed. The one piece of good news is that Intel's desktop parts pretty much all get the same IGP, and that it is not the super-highest-end IGP. First, it means no more limiting the good IGP to the high-end parts where it makes even less sense, and second, it means that the higher-end desktop parts are not devoting huge resources to a vestigial IGP. Of course, Trinity (and especially Kaveri) will have much better desktop IGPs than what Intel can offer. However, Intel's IGPs in mobile products will be quite strong, and since Intel gets to devote more resources to the CPU, its desktop CPU has nothing to fear from AMD (short of an upgraded AM3+ platform with server-based Steamroller chips... we'll see if that threat materializes).

The downside is this: a real dedicated desktop CPU would ditch the IGP entirely and slap in 2 extra cores. Imagine a somewhat watered-down version of the 3930K with Haswell cores, a somewhat smaller L3 cache, dual-channel memory, and fewer PCIe lanes. Probably not quite as scalable in extremely multithreaded loads, but cheaper to make and still packing more punch than a quad-core Haswell. Enthusiasts would lap up these chips, BUT... who else would? Remember that Sandy-E and Ivy-E can afford to be niche products because Intel is making a killing selling the exact same chips for servers. Where would the economies of scale be for these chips?

Basically: IGPs have lots & lots of repeated functional units. Intel is devoting more & more chip real estate and other resources to the IGP and fortunately IGP performance is improving. However, that means you aren't devoting those resources to the CPU. AMD has already voted *heavily* in favor of the IGP (mostly because that is where it has an advantage), but Intel is gradually moving towards more beefy IGPs too. Something needs to give, and it means that the CPU cores don't get crazy increases in available resources, and you don't get to add more cores either.

Tick-Tock has messed with our expectations. Think back to the good old days when CPUs were improving by leaps and bounds. One thing we tend to forget is the gap between releases of major CPU generations since they all tend to mush together in our memories. Think about comparing a 386 to a 486. The 386 was launched in 1985 and the 486 was launched four years later in 1989. Continuing on that line, the original Pentiums launched in 1993 and the Pentium II launched in 1997. If you waited 4 years between generations, you definitely got a big performance boost in the process.

Here's the trick though: How many people on this site are eager to compare Haswell to Intel's (or AMD's) top-of-the-line chips from mid-2009? To refresh your memory, we'd be comparing Haswell to first-generation X58-platform Nehalems (the i7-950 launched on May 31, 2009, which is almost exactly 4 years before Haswell's launch date). Now think about how many people are calling Haswell a failure because it isn't destroying the 3770K, which launched in April of last year. The tick-tock pace has given people false memories of the past, making them think we got major new CPUs every single year for the last 30 years, when that has not been the case. [EDIT: Another more recent example is the jump from P4 to Core 2. That was another very artificial gain, since the P4's performance and heat output were so terrible to begin with. People just assumed that should be the standard instead of the exception.]

In fairness to the critics, I will agree that we aren't seeing the same torrid pace of CPU speed development that we saw back during the 80's and 90's, but at the same time, there are some very strong improvements being made in CPU performance. Unfortunately, most of the low-hanging fruit has been picked. Multi-core CPUs are great; new vector instructions are awesome. Unfortunately, both of those improvements need properly written software to show noticeable gains, and the software has lagged badly. AMD and Intel would desperately love for every application on your computer to scale beautifully to 16/32/64/etc. cores, but it ain't happening in a pervasive way. That's not to say that there are no applications using the resources well, but there are a huge number of everyday applications that do need the performance yet aren't written to take advantage of the computing resources very well (the web browser I'm using right now is one such application).
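The point about software having to do the work can be sketched in a few lines of Python. This is a hypothetical toy workload of my own invention, not code from any real application: the parallel version is only faster because the code was explicitly written to divide the work across processes, which is exactly the effort most everyday software never received.

```python
# Sketch: the same CPU-bound workload run serially, then split across
# worker processes. The speedup only appears because the code was
# explicitly written to divide the work.
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    # Deliberately naive CPU-bound loop (hypothetical workload).
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_serial(chunks):
    return sum(busy_work(n) for n in chunks)

def run_parallel(chunks):
    # Each chunk goes to a separate worker process.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(busy_work, chunks))

if __name__ == "__main__":
    chunks = [2_000_000] * 8
    t0 = time.perf_counter()
    serial = run_serial(chunks)
    t1 = time.perf_counter()
    parallel = run_parallel(chunks)
    t2 = time.perf_counter()
    assert serial == parallel  # same answer either way
    print(f"serial: {t1 - t0:.2f}s, parallel: {t2 - t1:.2f}s")
```

On a quad core the parallel run should finish several times faster, but only because the split was coded by hand; the single-threaded path gains nothing from the extra cores.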

Last edited by chuckula on Sun May 19, 2013 2:49 pm, edited 6 times in total.

Intel has been moving away from the desktop scene since the end of the Netburst dynasty. The last *desktop* CPU that Intel developed was Lynnfield. Everything since then has been geared towards the enterprise and portable markets. SB, IB, and Haswell desktop chips are just silicon that failed voltage testing for the laptop and ultra-portable markets.

At stock, the i7-2600 gets 25 GB/s from dual-channel memory while the FX-8350 gets 11 GB/s. Overclocked, the i7 scales so well that it hits 32 GB/s, while the FX-8350 can barely do 13 GB/s. (...) If AMD can fix whatever bug they have in their memory controller/cache, they can be on parity with Intel for gaming benchmarks.

Ohh? ( =①ω①=) This is veeerrrry interesting! I had heard that AMD's memory controller/interface was subpar, but I didn't know it was because of a caching issue. That's enlightening! Do you have any sources for further reading on this? (Sorry to hijack your thread, chuckie!) (=｀ェ´=)

Yes. Ironically, while AMD integrated its memory controller onto the die first, Intel's memory controllers have had superior performance for a very long time. When TR did its first review of Bulldozer, they reviewed it using DDR3-1866 memory as recommended by AMD. In benchmarks against Sandy Bridge which was only using DDR3-1333, Sandy won pretty much across the board.
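For anyone wondering where bandwidth figures like those come from: they are typically produced by a streaming copy microbenchmark. Here's a minimal sketch in Python/NumPy (the function name and buffer sizes are my own; interpreted overhead means it understates what tuned native tools like STREAM report, but it measures the same thing):

```python
# Rough memory-bandwidth sketch: time a large array copy and report
# GB/s. This is a lower bound on the hardware's capability; real
# benchmarks do the same measurement in optimized native code.
import time
import numpy as np

def copy_bandwidth_gbs(size_mb=256, repeats=5):
    src = np.ones(size_mb * 1024 * 1024 // 8, dtype=np.float64)
    dst = np.empty_like(src)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.copyto(dst, src)  # streaming copy through the memory bus
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 2 * src.nbytes  # one read + one write per copy
    return bytes_moved / best / 1e9

if __name__ == "__main__":
    print(f"~{copy_bandwidth_gbs():.1f} GB/s copy bandwidth")
```

The buffer has to be much larger than the caches, otherwise you end up measuring cache bandwidth instead of the memory controller.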

Well, I knew that; I've read that also in the last few years. I was just wondering specifically what about their memory controller was bad; why they perform so much worse. I guess if anyone knew that, AMD could fix it, huh? Hehe. ╰( ´◔ ω ◔ `)╯

You know, it's kind of interesting; a huge number of the problems that Radeons have had over the years have also been related to memory, very frequently due to texture caching (which is what gave Rage such issues, and likewise with some other OpenGL apps that STILL don't work properly on Radeons years later.) I started thinking about this wondering if perhaps some of the expertise from ATI could help AMD out with this, but they actually have the same problem. Very interesting! (;´・`)>

It could very well be that the on-die memory controller made up for the fact that AMD had mediocre cache performance, back when the Athlon64 was kicking P4's butt. Now that everyone is using on-die memory controllers, they've lost that advantage.

(Aside: Why am I still seeing NCIX Mother's Day ads in the forums?)

The years just pass like trains. I wave, but they don't slow down.-- Steven Wilson

It could very well be that the on-die memory controller made up for the fact that AMD had mediocre cache performance, back when the Athlon64 was kicking P4's butt.

That is spot-on. The integration of the memory controller was a major improvement, but it was also papering over a multitude of weaknesses in other areas of the chip. While the Core 2 certainly lagged in synthetic memory benchmarks due to still being saddled with the FSB, its cache was so much better that it could still win in many real-world cases that weren't primarily bound by the memory bus. Nehalem then finished the transition.

It's not exactly that Intel isn't focusing on the desktop anymore; it's that they realize that adding more raw performance to a CPU is meaningless to all but a minuscule number of hardcore system-building enthusiasts, because there just isn't anything out there that realistically requires more than a "good enough" quad core. Hence the focus by both Intel and AMD on features like power efficiency and passable on-chip graphics processing, and future releases by both chipmakers will bring small performance improvements and much more substantial feature improvements.

I'd say the focus on power consumption is driven by HPC more than anything else.

Intel has been using a 256-bit cache bus since the Coppermine days. K7 only had a 64-bit cache bus, and K8's cache was 128 bits wide. I haven't really been following AMD since the Socket AM2 days, so I'm not sure how wide the K10 and Bulldozer caches are.

Interestingly enough, memtest86 reports that my 1.8GHz (overclocked) PIII-S has a faster L2 cache than my K8-based Opteron X2. Like all older Intel CPUs, the P3 is held way back by its very slow FSB.
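Those bus widths translate directly into theoretical peak cache bandwidth: width in bytes times clock speed. A quick illustrative calculation, using a made-up 1 GHz clock for both rather than any specific chip's real frequency:

```python
# Theoretical peak cache bandwidth = bus width (in bytes) x clock.
# Illustrative numbers only, not measurements of specific CPUs.
def peak_gbs(bus_bits, clock_ghz):
    # bytes per cycle * billions of cycles per second = GB/s
    return bus_bits / 8 * clock_ghz

# 256-bit bus (Coppermine onward) vs. 64-bit (K7), both at 1 GHz:
print(peak_gbs(256, 1.0), "GB/s vs", peak_gbs(64, 1.0), "GB/s")  # 32.0 GB/s vs 8.0 GB/s
```

That 4x gap in per-clock cache throughput is a big part of why a wider cache bus can paper over a slower path to main memory.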

Maybe Intel will feed more of their desktop market from the server CPU designs in the future. It would make sense if they want to continue to lower TDP and focus on mobile applications for their mainstream designs.

If it's true that an architecture can scale across a single order of magnitude in power, then a range of roughly 8 to 80 W covers what Haswell offers nicely. So I don't see it as a compromised design for desktop usage, just one that fits into the TDP limit that Intel has been using for mainstream desktops & laptops for a while, even before they were looking at extending the lower range to below 10 W. I think this was a very sensible decision, and one they can afford to take because they have the high-end platform to take up the slack above 80 W. I don't see Haswell as disappointing overall, but if IB-E is stuck on 6 cores then that is a kick in the teeth to those looking for top-class desktop performance.

The bullet points for me are:

- Excellent idle power consumption
- Disappointing overclocking
- Further fragmentation of features – Iris Pro on desktop is BGA only, no TSX in the K series
- Iris Pro offers very good performance, especially per watt, and image quality is reportedly good

Most of the fun is with the mobile parts these days, and ultimately that makes sense, as that is the area that is more constrained due to thermals and power. Looking forward to an in-depth review of the mobile platform.

I’d be interested in the BGA desktop i5 chips if made available in a decent board at a reasonable price. Has anyone announced that they are offering such a combo?

I just skimmed through Anandtech's review. That review seemed to me to be very rushed, in terms of depth of coverage.

Regarding overclocking, Anandtech was able to hit 4.7 GHz on air. That's plenty respectable as far as I'm concerned. I would not plan on taking it that high even if I got one.

Regarding die temperatures, it's important to remember that the voltage regulators are on the die now.

Regarding the silly suggestion that Intel has "abandoned the desktop", well, that's just nonsense. Intel has the best desktop platform in the new 8 series and the best desktop chips with the new Haswell chips. They're an incremental improvement over Ivy Bridge, and that's perfectly reasonable.

If you plan on upgrading, go ahead and upgrade now, kids, because we all know Intel won't be dropping prices -ever- and we all know that AMD will release another chip that will not fail to disappoint.

You know me.. I'm Mr. Controversy

I just ordered my 4770K & Motherboard today actually, so I'm by no means that put off by Haswell. I'm also replacing a Core 2 Duo system that I've had for over 5 years, so I think I'll see some performance gains too.

I'm strongly considering letting my X4 955 stay in place and saving up for a camera lens instead. I'd love to have a Haswell, but I really don't need it.

Ahh.. it's nice to know that people have hobbies that are much, much more expensive than mine. I actually build systems that are relatively expensive (not $4000 expensive, but in this case $2500 expensive*) and then keep them for a long time. My main PC is now over 5 years old and my notebook is almost 5 years old. Next year is the notebook upgrade.

* Interesting note: Assuming I had gone with an FX-8350 and a motherboard that is $50 cheaper in this build, I would have saved a grand total of $160 on the CPU + $50 for the motherboard = $210, or a little under 10% of the overall system cost. That's why I've never been overly convinced by the purportedly huge savings of going with AMD. Maybe if the only thing you are buying is the CPU, but not for the whole system.

Heh, with Intel's slow progression I wouldn't be surprised if AMD catches up in performance this year or next. I don't see them surpassing Intel, but getting very close is a real possibility especially from a value perspective.

Speaking of disappointments, has anyone here seen the boxes for Haswell chips? I've seen them on Tigerdirect.com and quite frankly, I think it's a little freaky, with that human head. The rest just screams "70's" or "80's", not sure which. I can't believe these are Intel boxes. Intel used to make really awesomely designed boxes. Haswell's box design borders on disturbing.

Last edited by ronch on Sun Jun 02, 2013 11:03 pm, edited 1 time in total.

Heh, with Intel's slow progression I wouldn't be surprised if AMD catches up in performance this year or next. I don't see them surpassing Intel, but getting very close is a real possibility especially from a value perspective.

They are too far behind for that to happen. Maybe if Intel literally did not improve speed via IPC, extra instruction sets, and clockspeed, they'd catch up in 2016. They're about three or so years behind in overall performance for anything that isn't super duper heavy on integer code and heavily threaded.

Assume you are building a $1000 system with the 4670K vs. building it with the FX-8350.

Total price delta for a brand-new launch-day 4670K vs. the 8350 that has been through several price cuts since launch: $60. Motherboard price delta: that's variable, but remember that there are launch-day motherboards with the Z87 chipset for $130 from Asus (which is typically a more expensive brand). While it's 100% possible to buy a more expensive AM3+ motherboard than LGA-1150, let's assume that your AMD motherboard is $40 cheaper, for a combined savings of $100.

So, taking into account launch-day inflated prices for the supposedly overpriced Intel parts, your relative savings are about 10% of the overall system cost. Can the performance advantages of the 4670K and its lower power requirements justify a 10% markup? I would say yes.
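The relative-savings arithmetic here (and in the $2500 build mentioned upthread) boils down to a couple of lines:

```python
# CPU + motherboard savings as a share of total build cost, using
# the prices quoted in the thread.
def savings_pct(cpu_delta, board_delta, system_cost):
    return 100 * (cpu_delta + board_delta) / system_cost

# This post's $1000 build: $60 CPU delta + $40 board delta.
print(savings_pct(60, 40, 1000))             # 10.0
# The $2500 build upthread: $160 + $50 = $210 saved.
print(round(savings_pct(160, 50, 2500), 1))  # 8.4
```

Either way the AMD discount lands in the 8-10% range once the whole system is priced, which is the point: the CPU's sticker-price gap shrinks a lot when spread over a complete build.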