Intel Haswell-E review: The best consumer performance chip you can buy – with some caveats


Today, Intel is launching its long-awaited update and refresh of its top-end enthusiast platform: Haswell-E. It’s a refresh that’s been a long time coming — the old X79 chipset was long in the tooth 12 months ago, while the top-end six-core Ivy Bridge-E didn’t overclock well and wasn’t a huge improvement over the six-core Sandy Bridge-E. Last year’s Core i7-4960X wasn’t panned, but it didn’t do much to move the ball forward for hardcore enthusiasts — and the debut of Devil’s Canyon, with its significantly faster clocks and far cheaper price, eroded that difference even more.

Today, Intel is changing that with a new eight-core CPU — Haswell-E — a new lineup of LGA2011 products, and the X99 chipset. Will the new hardware put a fresh coat of paint on a lackluster lineup? Let’s take a look.

The CPUs

In the past, Intel followed the same pattern for product launches. The SNB-E and IVB-E series both debuted in six-core and four-core configurations at approximate price points of $1050, $600, and $330 respectively. The top two chips would be hexa-core while the third was a quad-core. Today, that changes — Intel’s quad-core enthusiast platform processor is going away altogether, as shown below:

Intel’s new Haswell-E lineup

This shift introduces some significant changes to Intel’s total product stack. The price of a six-core desktop chip is coming down sharply, from roughly $580 to $380. The clock drop isn’t quite as significant as it seems, because there’s an important difference between the Core i7-4790K and the new Core i7-58xx/59xx family. These new chips clock up to full Turbo mode and stay there, even when running a program like Prime95. The Core i7-4790K and Core i7-4770K, in contrast, will tend to top out around 4.2GHz and 3.8GHz, respectively, when running all cores simultaneously.

Thus, the Core i7-5820K offers a 50% core increase for an effective 16% frequency decrease — along with more PCI Express lanes and twice the memory channels. Overall, it’s probably the strongest part in the lineup for pure CPU work — assuming you don’t want to add a second GPU. The Core i7-5960X, in contrast, is a bit harder to pin down. Intel’s octa-core chip still adds 33% more cores compared to the hexa-core variety, but trades off 12.5% of its frequency to do so and runs about 17% slower than the effective top frequency of the Core i7-4790K.
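To put numbers on those tradeoffs, here’s a minimal sketch of the arithmetic. The ~4.2GHz all-core clock for the Core i7-4790K is the figure cited above; the ~3.5GHz sustained all-core clock for the Haswell-E parts is our assumption based on their rated Turbo speeds:

```python
# Core-count vs. frequency tradeoffs, Haswell-E vs. Devil's Canyon.
# The ~4.2GHz all-core clock for the 4790K is cited above; the ~3.5GHz
# sustained all-core clock for the 5820K/5960X is an assumption based
# on their rated Turbo speeds.
i7_4790k = {"cores": 4, "all_core_ghz": 4.2}
i7_5820k = {"cores": 6, "all_core_ghz": 3.5}
i7_5960x = {"cores": 8, "all_core_ghz": 3.5}

core_gain = i7_5820k["cores"] / i7_4790k["cores"] - 1
freq_loss = 1 - i7_5820k["all_core_ghz"] / i7_4790k["all_core_ghz"]
print(f"5820K vs 4790K: +{core_gain:.0%} cores, -{freq_loss:.1%} frequency")

more_cores = i7_5960x["cores"] / i7_5820k["cores"] - 1
print(f"5960X vs 5820K: +{more_cores:.0%} cores")  # matches the 33% above
```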

That’s still more than enough frequency to beat out Intel’s other chips on multi-threaded workloads, but single-threaded is going to be another story. Before we dive into that comparison, though, let’s take a look at the new chipset.

Intel’s X99 reclaims the feature crown

The X99 chipset is an evolutionary upgrade to the X79, but it’s significant enough to reclaim the overall chipset crown. Unlike the X79, which was limited to just two SATA 6Gbps ports, the new X99 packs 10 of them. The X79 also lacked integrated support for USB 3.0, whereas the X99 offers six USB 3.0 ports and eight USB 2.0 ports.

Next up, there’s the most obvious upgrade — DDR4-2133 support. Intel’s marketing documents are tilted to make this appear like a bigger jump than it actually is; the X79 may have only formally supported DDR3-1600, but manufacturers regularly validated DDR3-2133 and even DDR3-2400 on X79 motherboards. The good news is that this is somewhat replicated on the X99; the motherboard supports up to DDR4-2800 (we tested 16GB of DDR4-2667). Unfortunately, support for this standard is still a little flaky — the Asus board we benchmarked could only run its memory at DDR4-2667 if we increased the CPU base clock to 125MHz and brought the CPU multipliers down to compensate.
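The reason the multipliers had to come down: core clock is simply base clock times multiplier, so a 25% higher base clock requires a proportionally lower multiplier to hold the same CPU frequency. A minimal sketch, with hypothetical multiplier values:

```python
# CPU frequency = base clock (BCLK) x multiplier. Raising BCLK from
# 100MHz to 125MHz therefore requires a lower CPU multiplier to keep
# the same core clock. The multiplier values here are hypothetical.
def cpu_ghz(bclk_mhz: float, multiplier: int) -> float:
    return bclk_mhz * multiplier / 1000

print(cpu_ghz(100, 40))  # 4.0 GHz at the stock 100MHz BCLK
print(cpu_ghz(125, 32))  # 4.0 GHz again: BCLK up 25%, multiplier down 20%
```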

Now, the real-world impact on benchmarks should be negligible — the base clock boost won’t impact PCI Express frequencies or other peripherals — but early support is still, well, early. Also bear in mind that DDR4 still commands an enormous price premium, even if DDR3 prices are up sharply from their trough nearly two years ago. 16GB kits of CAS 11 latency DDR3-2400 can be had for around $180, compared to $400 for our Corsair DDR4-2667.

Comments

1). Due to an address-shipment problem, I had barely three days with the entire system. I had to prioritize where to do my work.

2). There is no benefit to using Haswell-E or Ivy Bridge-E in a gaming system unless you intend to try something like multi-GPU in 4K gaming — and even THEN, the benefit may or may not be significant. See here:

Note that the Core i7-4790K delivers higher frame rates at less than half the price.

I didn’t have time to configure a system for 4K multi-GPU gaming with the highest-end cards to determine whether the additional PCIe lanes would make a difference — not if I also wanted to do any kind of general testing.

ronch

Haswell-E may be the finest desktop processor money can buy, but if you’re like the rest of us poor people you know you want the best processor for the money. And that processor is the AMD FX-8350.

Joel Hruska

It’s really not. You can buy Ivy Bridge and Haswell quad-cores starting at $189 and $200. I’d take a Core i5-4590 (3.3GHz base, 3.7GHz Turbo, 84W) over an FX-8350. Newer chipset, better SATA performance, much better single-threaded performance — which matters when a game doesn’t spin off eight threads.

The FX-8350 has the same problem here that Intel does, ironically. Most games don’t use 8 threads, much less 16 (for the 5960X). That means perf is determined by efficiency and clock rate — and the FX-8350 isn’t clocked high enough to compensate for the Core i5’s sizeable performance advantages.

The recent price cuts and repositioning of some of AMD’s APUs might make them a good deal at lower price points, but not above $140.

george law

That is true if all you are buying is the processor, but when you add a decent system board, memory, and video card, it is still cheaper to go with AMD most of the time.

Matt Menezes

If you have a Microcenter near you, check out the Intel prices. They sell CPUs cheaper than Newegg, and then you get money off a mobo. Then go to Newegg for your GPU and RAM.

There really isn’t any competition at the high end for Intel.

Joel Hruska

Cheaper? Sure. I picked a $200 chip over a $180 AMD chip. AMD boards tend to be cheaper than the Intel boards, so there’s no question of the price advantage.

If you know your workloads are all multi-threaded, an FX-8350 may compete against an i5. But for single/multi-threaded mixtures, the i5 will still win out.

That doesn’t make the FX-8350 *bad.* If $180 was the very top of my range, I’d probably take an FX-8350 over a cheaper Intel dual-core.

http://www.flickr.com/photos/catchphotography/ H23

New benchmarks are showing the 8350 keeping pace with 4th-gen i5s and 3rd-gen i7s, while tons of people are happy with their 2nd-gen Cores and are waiting for Skylake.

BtotheT

Skylake is gonna clean up, hard. As will Broadwell-E (for high-enders). I personally am going GPU-free (other than the integrated) with a Skylake laptop (GT4/e), and Broadwell will power my media player. Unlike most Americans, I take efficiency over expedience and believe in weighing the demands of tasks. I have plenty of money for a GPU, but I’m not gonna dump a pound of coal into the clouds and ocean over it every month. I read the benches of the games I want to play, aim just high enough to play at 25-40 frames in ultra, and hit my standard; if I want higher frames I drop to the high setting. Haste makes waste, waste not need not.

VirtualMark

You think that Broadwell will run games in ultra? I’d be surprised, unless they’re really old games.

Joel Hruska

No integrated GPU will ever be able to push Ultra framerates at high resolutions compared to high-end desktop cards. If they could, we wouldn’t *need* high-end desktop cards.

And the GPUs that can push those framerates, even the cutting-edge and most advanced parts, have TDPs of 200W or more.

BtotheT

Skylake GT4e will hit 25 frames at ultra 1080p in the games I play (Blizzard Entertainment titles; Heroes of the Storm currently). My Intel HD 4000 can pull medium, with the occasional stutter under 12 frames. I know this because I have my GeForce GT 650M 1GB set to run only when I’m plugged into my power source. I will need no better than GT4e for my personal gaming demands; I see it being roughly comparable to a GT 620M 1GB.

No (read closer). I only mentioned Broadwell as a media player, and Broadwell-EP for the fact that it will be a big winner for Intel’s video/graphical/production application sales (18 cores and low power). I mentioned Skylake GT4/e integrated as game-capable. Skylake’s GT4 and GT4e will more than likely be 40% better than GT3e (Broadwell), which will be just below capable. On PC I play some Blizzard Entertainment games (have for a decade); they aren’t all that demanding, and short of console it’s all I enjoy (well, maybe C&C or Half-Life in the late ’90s).

ronch

I made my post with regards to overall computing resources. The i5 and i7 are obviously better for single-threaded or lightly-threaded apps but in terms of sheer brute force, the FX wins over the i5 and comes close enough to the earlier i7 SKUs. But I do agree with you, Joel. The Core i5-4590 is a terrific choice especially if you’re a gamer and don’t really care about TrueCrypt or Winzip, apps that most people probably don’t use all the time.

anolesoul

NOT most… but ALL of the time; at least, so far as of now. Still waiting for AMD to get on the bandwagon for DDR4 RAM support.

Ace Korneya

Games and apps are going to start using more threads as the years go by. Just because today’s games don’t require that much CPU load, it doesn’t mean developers won’t push future games to use more of a PC’s power.

Joel Hruska

That’s possible, but the FX-8350 is already two years old.

By the time the games you’re referencing exist, we’ll have Mantle and HSA as mainstream features. Even if games become more multi-threaded friendly it’s not going to be in time to make a major difference to the old FX chips.

ronch

I think what he’s saying is that, basically, choosing an FX-8350 right now is not a mistake. It’ll handle today’s apps well and will provide more compute resources for future apps that can better utilize the FX’s resources. I said the FX-8350 is the finest CPU for about $200, and I realize that’s stretching it a little bit. The i5-4590 is of course the better choice if you’re gaming, and it’s a lot more energy efficient to boot. I made my claim with regard to the FX-8350’s sheer brute computational power, with only cursory regard to energy efficiency. Nonetheless, I don’t think it’s a mistake to get the 8350, especially if multi-tasking and running tons of VMs are your thing.

Ace Korneya

Yeah, that’s correct.. like, for example, I have an i7-2600K @ 4.9GHz paired with a GTX 770 OC.. what I’ve seen in games, especially newer games, is that they are using all 8 threads, but the total load on my CPU in most games is between 25-35 percent. Certain games, especially from Ubisoft, tend to use between 45-50 percent of my CPU. I bet it’s going to keep on climbing as the years go by..

Joel Hruska

Not unless Intel pushes up their core counts.

Even the consoles, with 6-7 dedicated CPU cores (1 core is typically reserved for the OS), aren’t likely to change this. Kabini is a low-power core running at 1.6-1.7GHz. You can take a game designed to run on seven threads @ 1.7GHz and almost certainly spread it out across a high-end quad-core clocked at 4GHz with Hyper-Threading.

7*1.7GHz gives you 11.9GHz of effective processing power. 4 * 4GHz gives you 16GHz of effective speed — and since Kabini runs its L2 cache at half speed (~800MHz) you can use vastly increased cache clocks and decreased latencies to hide the impact of thread stalls or data swaps.

Plus, on a high-end chip with Hyper-Threading, you’ve got the approximate equivalent of about 5 CPU cores (based on how much additional performance HT tends to offer).
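Here’s that arithmetic as a quick sketch (the Hyper-Threading uplift is an assumed value in the range noted above):

```python
# Naive "effective GHz" comparison; this ignores per-core IPC, which
# further favors the big desktop core.
kabini_threads, kabini_ghz = 7, 1.7
desktop_cores, desktop_ghz = 4, 4.0
ht_uplift = 0.25  # assumed Hyper-Threading gain (~20-30%)

print(round(kabini_threads * kabini_ghz, 1))  # 11.9 effective GHz
print(desktop_cores * desktop_ghz)            # 16.0 effective GHz
print(desktop_cores * (1 + ht_uplift))        # ~5 "effective" cores with HT
```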

Bottom line is that I don’t expect consoles to shove multi-threading up much this generation.

Joel Hruska

Ronch,

I think we have fundamentally different views of the Piledriver family. From my perspective, Bulldozer is fundamentally broken at a hardware level — broken in a way AMD can’t seem to fix. Agner Fog (the CPU analyst) thinks the problem may be in the Fetch unit since doubling up on decoders didn’t help as much as we expected it to. It’s also possible that the ALU design is poor (each ALU has four pipes, but only two of those four are for actual instruction execution).

I don’t think there are programs that can make “better” use of the FX family’s resources because I think the problems with the FX family are baked into the core itself. I hesitate to draw the P4 comparison that’s so popular with people, but the truth is, Prescott was a *mostly* fine chip that suffered from crippling performance degradation when it had to flush its pipeline. Bulldozer does seem to be in a similar position as far as IPC is concerned.

I think it’s also worth noting that if Bulldozer had met its original planned launch window when the design was first laid out, it would’ve debuted at 4.2GHz and against Nehalem rather than at 3.9GHz against Sandy Bridge. If you pull the data sometime you’ll find that Piledriver is quite competitive with the first generation Core i7 products.

I think $180 is an ok price for the FX-8350 — I just wish AMD’s SATA support and I/O performance were better all the way around.

http://www.flickr.com/photos/catchphotography/ H23

PC Mag has a recent article saying that the vast majority of new games will be multi-threaded. Big releases this year are mostly multi-threaded, and next year even more. The 8350 overclocks better. And SATA performance? Really. Tech Report has a sweet review of this Intel processor and accurately shows where the 8350 stands today. You might be surprised how well the 8350 is aging….

Joel Hruska

They show the FX-8350 as tied with the Core i5-4590 in non-gaming benchmarks and modestly outperformed in gaming benchmarks.

So yes. Between the two, the Core i5-4590 is the better chip. Then toss in SATA performance and nearly 2x the single-threaded performance and it’s really no contest at all.

http://www.flickr.com/photos/catchphotography/ H23

lol you’re so annoying Joel.

The 8350 is the better multi-tasker and great for price-to-performance. It’s the better performer with a modest overclock, or a not-so-modest one. The i5-4590, if averaged out, has about 35% better single-threaded performance at stock clocks. I’m a multi-tasker, most people are, always lots of software going or idling in the background.

I think if you look at the huge amount of benchmarks provided by tech report the 8350 comes out looking nice, and that’s at stock clocks. It’s 2 years old now, not bad at all.

Joel Hruska

I think TR is a good site run by excellent people — but AMD continues to lag in power efficiency, in I/O performance, and in single-threaded perf.

As I’ve said: Let’s say you are doing multi-threaded workloads and $180 is the MOST money you can spend *and* you want to buy new. In that case, the FX-8350 may well be your best bet. But if you care about performance primarily in tests with four threads or less (including gaming), then no, it’s simply not going to cut it very well.

Ultimately, I’d recommend the Intel solution for general workloads and users. There are cases where AMD still puts up a good fight in this segment, but gaming isn’t really one of them.

VirtualMark

AMD is a couple of years behind Intel. Intel offers the best CPUs on the market right now.

Although I’d love it if AMD made a comeback, as we desperately need some real competition. CPU performance is stagnating at the moment, it sucks.

Asdf Ghjk

” I’m a multi-tasker”

Exactly what kind of tasks do you do in the background?
Because if you don’t do rendering whilst playing games, it’s really not going to matter. It’s only faster in *certain* multithreaded applications, and I really doubt there’s a lot of people using them.

This is pretty much the only single-threaded benchmark I care about, due to the more “objective” nature of the Phoronix Test Suite.

Joel Hruska

It’s hilarious that you pick POV-RAY of all tests. Back in 2004, POV-RAY put out the first 3.6.0 executable. (This was their own version of the executable, not the code you got if you compiled POV-RAY yourself).

If you ran their benchmark, Prescott was the fastest chip on the market. If you compiled the benchmark yourself, it lost to Northwood. The Athlon 64’s performance also took a crippling hit.

So I was curious the other month. I have a Perl patch that can scan an executable and rip out the Intel-specific compiler code that prevents the executable from running optimized SSE2 on an AMD product. Would you like to know how much POV-RAY 3.6.0 performance improved on the A10-7850K?

It increased by nearly 50%.

Granted, I have never gone back to run the benchmark again on a K8 of that era — but it goes to show you that once upon a time, the POV-RAY team absolutely did cheat. Intel compiled that 3.6.0 executable — and POV-RAY distributed it.
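For illustration only, the dispatching in question boils down to something like the sketch below (Python standing in for compiled x86, with hypothetical names): the binary gates its fast path on the CPUID vendor string rather than on the feature flags alone.

```python
# Hypothetical sketch of vendor-gated CPU dispatching. A patch like the
# one described above effectively removes the vendor check, so any CPU
# reporting SSE2 gets the optimized path.
def choose_code_path(vendor: str, has_sse2: bool) -> str:
    if vendor == "GenuineIntel" and has_sse2:
        return "optimized SSE2 path"
    return "generic fallback path"

print(choose_code_path("GenuineIntel", True))   # fast path
print(choose_code_path("AuthenticAMD", True))   # slow path, despite SSE2
```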

PS.

The FX-8350 has eight cores and scores 252.1. The per-thread result (simplistically) would be 31.5 points per core.

The 4770K is a little trickier. It has four cores but eight threads. Hyper-Threading rarely gives Intel more than a 20% advantage, so the assumed score with four threads would be around 310.

310 / four cores = 77.5.

You’re right. Haswell is nowhere near 2x the IPC of Vishera. It’s more like 2.46x.
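Here’s that math as a quick sketch, using the figures above (the ~20% Hyper-Threading uplift is the stated assumption):

```python
# Per-core score comparison from the figures above.
fx8350_score, fx8350_cores = 252.1, 8
fx_per_core = fx8350_score / fx8350_cores      # ~31.5 points per core

i7_4770k_4thread = 310                         # estimated 4-thread score
intel_per_core = i7_4770k_4thread / 4          # 77.5 points per core

print(round(fx_per_core, 1))                   # 31.5
print(round(intel_per_core / fx_per_core, 2))  # ~2.46x per-core advantage
```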

kiko

The thing is that Cinebench was known to cheat, and I’m pretty sure that, unlike Pov-Ray, they’re still doing it.

I mean, according to that benchmark Westmere destroys Vishera by like 20%, and then you go to the open-source benchmark, and they’re close (well, the 8320 scores 241 vs. the 990X’s 229; the 8150 scores 225.6).

When you go to Pov-Ray multithreaded, the 8350 follows the 4770K like a shadow and the 8320 follows the 3770K (that’s pretty much clock for clock).
In Cinebench R15 the 8350 only manages to equal the 2600K, and that’s running 600MHz faster, and the 9590 has problems catching up with the 4770K at stock -.-

And then you have benchmarks like x264 and John the Ripper, where AMD is king (Intel will need another generation of quad-core i7s to beat the 8350 in those benchmarks).

According to the open-source benchmarks, Haswell is like 40-50% (1.4-1.5x) faster than Vishera in single-threaded benchmarks. A bigger advantage than that smells like crippling to me…

I really would like to see what kind of performance CMT would have with IPC parity (it’d wreck the quad-core i7s for sure), but that won’t happen, at least not in the near future.

Joel Hruska

Kiko,

Cinebench does not appear to cheat, at least not with compiler-level patches. I tested every public version of the program Maxon has ever released.

The A10-7850K’s scores do not change between patched and unpatched versions of the program. I will grant you that some of them were compiled with the Intel compiler, but tearing out the “Cripple AMD” function just doesn’t impact the scores. I still have the data and can show you if you like.

If you can point me to the benchmarks where AMD keeps up with Intel in H.264 encoding, I’d appreciate it. It does not do so in the two-pass x264 test. Even if I use newer executables with better AVX support, AMD’s performance improves but does not match Intel’s.

kiko

But I did find the x265 4K benchmark, and in this one the 8350 kicks some @ss (much more expensive @sses, to be more specific ^^): http://images.anandtech.com/graphs/graph8316/66032.png
Sadly Thuban (I own a 1090T) performs horribly in this one, but taking into account how old its instructions are, I can’t complain.

Well, you can’t expect 4 integer units to beat 4 full cores, although that i5 does have very low frequencies, even less with the poor IPC of Steamroller and Piledriver. With IPC parity I would expect the 4-integer-unit CPU to be right in the middle between the i3 (instead of barely beating it) and the i5, maybe closer to the i5.

Joel Hruska

I don’t think it’s poor optimization — I think it’s fundamentally poor performance from the BD-class of architectures. There are some serious low-level oddities with the core. L1 performance drops when two threads are simultaneously running on two different modules. There’s no explanation for this. It shouldn’t be true. But it is.

L1 cache contention and eviction remain a major problem for Steamroller, and Piledriver was even worse. Because the cache is write-through, L1 performance can be limited by L2 writes, and L2 writes are slow and high-latency.

Now, you’re right that CMT as a design concept isn’t invalidated by Bulldozer’s problems; many CPUs have historically used CMT on some level. But whatever the cause (I suspect L1 cache contention, slow caches overall, and possible Fetch bottlenecks, plus poor allocation on integer pipelines), I don’t think AMD can fix it. If they could’ve, they would’ve already.

Joel Hruska

I wanted to actually drop back in and respond to the benchmarks you posted. Thanks for that. I agree that AMD is more competitive in the second pass than in the first when using the x264 test.

The Hybrid x265 video results are very interesting. They show a much larger-than-typical gap between Piledriver and Bulldozer, even after accounting for clock speed. (PD is running about 20% faster than BD — and only about 10% of that is CPU clock).

I agree that AMD puts up a much stronger showing in this test.

The John the Ripper results are also interesting. The fact that they include a Core i7-4960X clocked at 4.3GHz means we can establish a comparison with the FX-8350 (4.2GHz Turbo). If we multiply the FX-8350’s results by 1.02x (to account for the 100MHz clock speed gap), we predict a score of 7247 for the FX-8350’s eight cores, as compared to 10,371 for IVB-E’s six cores.

But wait. Comparing the Core i7-3770K against the Core i7-4960X at stock speed, we see that adding two additional CPU cores increases the core count by 50% — and increases performance by virtually the same amount. We can therefore predict the Core i7-4960X’s performance if we boosted it from six cores at 4.3GHz to eight cores at 4.3GHz.

Projected score for a hypothetical eight-core IVB at 4.3GHz? 13,482.

In other words… IVB-E is 1.86x more efficient than AMD when we normalize clock speed and core count.
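Restating that normalization as a sketch, using the figures above:

```python
# John the Ripper, normalized per the comment above.
fx8350_adjusted = 7247    # FX-8350 scaled 1.02x to close the 100MHz clock gap
ivb_e_six_core = 10371    # Core i7-4960X @ 4.3GHz
ivb_e_eight_core = 13482  # projected eight-core IVB-E at the same clock

print(round(ivb_e_eight_core / fx8350_adjusted, 2))  # ~1.86x
```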

My point is not to bash AMD by creating a hypothetical scenario to compare its products and then basing my opinion on that rather than the real world, but I think this illustrates the difficulty the company has faced. With its clock-for-clock comparisons coming off so badly, it simply cannot afford to price or sell its parts at the level it would need to seriously challenge Intel above the midrange market.

But I *do* agree that these results paint the FX-8350 in a better light.

kiko

No problem.

Turbo is supposed to work when only a couple of cores are being loaded, to improve single-threaded performance; John the Ripper is an integer benchmark that loads every core, which means the 8350 should be running at stock (unless my understanding of Turbo Core isn’t good).
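As a rough sketch of how I understand Turbo Core (the 4.0GHz base / 4.2GHz Turbo numbers are the FX-8350’s rated clocks; the two-core threshold is an assumption):

```python
# Simplified Turbo model: boost only when few cores are loaded.
def fx8350_clock_ghz(loaded_cores: int) -> float:
    return 4.2 if loaded_cores <= 2 else 4.0  # 4.0GHz base, 4.2GHz Turbo

print(fx8350_clock_ghz(1))  # lightly threaded: Turbo clock
print(fx8350_clock_ghz(8))  # all cores loaded (John the Ripper): stock
```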

Joel Hruska

Funny you should mention the “core vs. unit” discussion. One of the (many) issues with this design is that while each core has a four-pipeline integer unit, only two of the pipes are actually used for any kind of integer execution — the other two are reserved for memory reads.

The issues, however, just keep piling up.

1). Piledriver takes an enormous hit when trying to do 256-bit AVX, despite the fact that this is technically supported — so much so that the added clock cycles destroy any benefit of ever using that data format.

2). The L1 data cache cannot perform two operations in the same clock cycle if the target banks are spaced by a multiple of 100h bytes *unless* it’s doing two reads from the same cache line. This conflict happens frequently.

3). There’s a 25-26 cycle penalty for reading from a location you just wrote to if the read is larger than the write.

4). When multiple threads are running across the processor L1 cache throughput is much lower — even if the threads are running on completely different modules with zero cross communication. (This issue has been raised to AMD as there’s no known explanation for it. It just shouldn’t happen. Ever. But it does).

And so on, and so on.

I used to think that there was one big reason for the BD architecture’s problems, but I’m not sure that’s true anymore. Instead of one giant smoking cannon, I’m beginning to think it’s death by a thousand cuts.

To hop back to your point about Turbo Mode — multiple motherboard vendors now quietly override Turbo settings in the BIOS and set their own clocks. It’s not at all uncommon for a vendor to lock all CPU multipliers to the maximum Turbo value, such that eight cores actually run at, say, 4GHz rather than the 3.7GHz Intel or AMD specified for the chip.

I’m going to look into some of the tests you’ve raised here. It’s worth re-examining whether there are encoders that are better for AMD users.

kiko

How many problems could be solved by disabling one core per module?

I saw benchmarks on this site, and the 4 FPUs/4 ALUs configuration (you pretty much end up with a more modern Deneb ^^) does perform better than the 2 FPUs/4 ALUs one. I would’ve liked to see single-threaded benchmarks and CPU-bound games, by the way :)

Also, the 4/4 configuration could be very useful for calculating the current CMT (Piledriver) scaling compared to the stock 4/8 configuration (which scales at about 60%: more than enough to beat SMT, but it gets destroyed by good old CMP).

I hope the new architecture gives us a pleasant surprise. I have a lot of faith in Jim Keller ^^

Joel Hruska

So, I didn’t do much testing of Kaveri in one core per module mode, mostly because the motherboards I had access to hadn’t implemented the option properly.

What I can tell you is that the old 20% scaling penalty that BD and PD took is reduced to about 10% in Kaveri — which is to say that an eight-core Kaveri would perform like 7.2 “full” cores. Eight-core Piledriver performed like 6.4 full cores, which is one reason why AMD initially struggled to pull away from Thuban.
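In other words (a trivial sketch of the scaling figures above):

```python
# CMT scaling penalty: each module's second core costs some throughput
# versus a "full" core.
def effective_cores(cores: int, penalty: float) -> float:
    return cores * (1 - penalty)

print(effective_cores(8, 0.20))  # Piledriver: 6.4 "full" cores
print(effective_cores(8, 0.10))  # Kaveri: 7.2 "full" cores
```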

Joel Hruska

Let me actually follow up on this a bit more clearly. AMD’s single-threaded performance, adjusted for clock speed, has more-or-less fallen back to 2004 levels where integer code is concerned. And it’s not 100% clear why. Again, this is the “Lots of little smoking guns, not one giant cannon” theory.

The FPU on Kaveri is actually an exception. The three-pipeline redesign that AMD did for Kaveri often outperforms Piledriver clock-for-clock, sometimes significantly. The FPU isn’t the problem with this chip — not really.

Phobos

Might as well use the high-end APU; it performs very similarly to the FX-8350 in some tests.

massau

The FX-8350 is getting old, as is its process node; that’s why it isn’t the best core. But I would recommend it if you want to learn concurrent programming.
An i5 will be on par with the FX-8350, but a good AM3+ board will be cheaper and you will have more PCIe lanes than with the i5.

Mark Scott

Bit of a nitpick, but 802.3 is Ethernet and 802.11 is wireless networking.

BtotheT

You forgot USB 3.1 (2x speed; only on one motherboard so far). If you’re gonna go next-gen, go all out :)

massau

I think they cut prices on the six-core chips because their server equivalents had a lower price up until now.

Joel Hruska

Not at the same clock speeds, I don’t think. I haven’t seen any six-core Xeons at the same clocks.

massau

Yes, there is the difference. Maybe more enthusiasts got smarter and accepted that a $600 CPU plus a more expensive motherboard is just not worth it compared to a normal Core i7, which can overclock even higher, and that they could keep the $300 for another GPU, a faster upgrade, or just liquid cooling.
I really wonder what their sales figures were on the six-core i7s.

Quenepas

What will be AMD’s response to this?

BtotheT

Tears. And diversification.

Joel Hruska

AMD has not competed against Intel in the $1000 market for at least six years. Maybe a bit longer.

ronch

Not true. They did sell the FX-9590 close to a grand for a little while. /s

Joel Hruska

True! They did.

NoldorElf

This review says it all as to whether this CPU is “worth it.”

For gaming, this CPU makes no sense. A 4690K with a good air cooler (like the D15), aggressively overclocked, is the best way to go (the 4790K is another option, but strictly for gaming it does not offer much for the extra money).

For other applications, though, it is a step forward. So if you do a good deal of file compression, photo editing, CAD, video editing, cryptography, etc., this may be just what the doctor ordered. Personally, I am going to be doing a great deal of video encoding, so I need something like this.

What’s interesting about all of this is that the DDR4 and the larger cache do not seem to have an impact in most applications. I would presume workstation apps again would be the most likely to take advantage of it.

I’m going to be waiting a few months to see which motherboards are stable and are the best all around.

Joel Hruska

The TSX lockout is unfortunate. Long-term it could be a very important feature.

VirtualMark

I think that for most people, a 6 core would be a much better buy. Even then, 6 cores is probably overkill for 99% of our workloads – I’d much rather have the faster clock speed as I’ll see more benefit from it. The 8 core may beat the 6 with synthetic benchmarks, but I doubt that there will be much difference in day to day use.

Joel Hruska

VirtualMark,

That’s really not true (regarding synthetic benchmarks).

If you have 3DS Max, you’re going to see a speed increase.

If you encode H.264 video you’re going to see a speed increase.

If you edit video using high-end professional software, you’ll see a speed increase.

If you run multiple multi-threaded programs simultaneously you may see a speed increase, even if both are only capable of using up to 4 or 6 threads.

If you use a distributed computing program that utilizes the CPU, you’ll see a speed increase.

If you use high-end audio processing software you’ll see a speed increase.

Now, it’s entirely possible that you do NONE of these things — and therefore, would see no speed increase at all. But the tests that show benefits aren’t synthetic. Maxon’s Cinebench is based on its own Cinema4D software. I promise you that 3DS Max, Maya, Lightwave, and all the other myriad 3D rendering apps would show similar distinctions.

You’re still right about eight cores being overkill for the vast majority of people and six cores being unnecessary, but that doesn’t mean the tests that show benefits are synthetic and manufactured without real world meaning.

torjs99

next to nobody in the consumer space uses 3DS Max or encodes H.264

Joel Hruska

I’m a consumer. I encode video with Handbrake, which uses H.264.

torjs99

‘next to nobody’

Joel Hruska

Heh. True. Though I don’t know if that’s true among the enthusiasts this chip targets.

VirtualMark

Yeah, I’m not saying it doesn’t have its uses, just that you’d need specific software to take advantage of it. If you’re doing the stuff you mentioned, sure, but how many people do a lot of that?

I do a lot of audio processing, and I can tell you from experience that the audio usually drops out way before I get my CPU to 100%. Which is why I’d rather have a faster clocked 6 core than a slower 8.

Joel Hruska

I’ve heard that AVX and AVX2 can actually specifically help with this but suspect it also depends on the quality of audio input / output and a host of other things. I looked into high-end audio processing once and was surprised how much seemingly minor details matter. It’s sort of its own animal.

In the big picture though, you’re absolutely right. Remember — AMD *has* 16-core Opterons that it could’ve brought to desktop. But both AMD and Intel know that quad-core + HT for Intel and eight-core for AMD is really the top of the sweet spot.

The multi-threading situation for games has continued to slowly, slowly improve. Back in 2008 or so, a dual-core was really good enough for games — very few games picked up a benefit from >2 cores. Now, it’s common for quad-cores to have benefits compared to two — but with both Intel and AMD holding the line at the quad-core mark I don’t expect to see that changing much.

People who think that console titles will automatically change this are forgetting that Kabini is a small core meant for low-power operation. Two Kabini threads can plausibly be combined into a single “big core” thread when said big core is running at 2-2.5x the clock speed with far more horsepower under the hood.

30011887

I can’t wait to make the switch to Intel and get the i7-4790K and the Asus Z97 Deluxe next week. I’m currently on the AMD FX-6300 with the M5A99FX Pro R2.0.
