
When AMD announced it was going to build a 5GHz version of its FX-class processors this past summer, reaction from the enthusiast community was mixed. On the one hand, AMD was simultaneously reaching back to its enthusiast roots and offering a chip significantly faster than anything else in its own product stack. On the other, even at 5GHz, the CPU was going to have a hard time competing with Intel’s Haswell and Ivy Bridge-E. We recently spent some time with a Maingear Shift equipped with the FX-9590, and decided to put the chip through its paces in a head-to-head gaming showdown against Intel’s Core i7-4960X.

We’ve chosen to focus on gaming for two reasons. First, gaming was always one of the areas where the original FX processors shone, and it’s a question that’s come up several times in our past Piledriver reviews. Multiple readers have asked for a head-to-head gaming article, and the FX-9590’s launch, combined with the R9 290X’s debut this month, made it an ideal time to examine the question of how well Intel and AMD platforms compare. The second reason is that outside the gaming arena, the FX-9590 admittedly struggles. While it runs at a higher clock speed, it still has trouble in lightly-threaded workloads where Haswell and Ivy Bridge can rely on better single-threaded performance and superior scaling across 2-4 cores.

Price, availability, and the Intel Ivy Bridge-E

When I began this story, the FX-9590 was only available from OEMs and priced at $800. Frame pacing driver issues delayed my testing, as did multiple GPU launches from AMD throughout October. The price gap that’s opened up between the Ivy Bridge-E and the FX-9590 means that the $1000 Intel CPU is no longer the best point of comparison. This is taken into consideration in the final analysis. On the other hand, the delay allowed us to add performance figures for the R9 290X, and compare it against the Radeon 7990 that the Shift shipped with by default.

Speaking of the Shift, Maingear’s vertical enclosure with a custom Rosso Scuderia paint job is a beautifully designed system that ships with a custom 180mm Maingear Epic CPU cooler and as tight a cabling job as you could ask for. In the significant amount of time we spent with the rig, we had no problems — no lockups, no crashes, no hardware issues of any kind. Obviously when you pay top dollar for a boutique system, you expect this kind of custom equipment and attention to detail, but we were well pleased with the system’s configuration.

This article is primarily focused on the FX-9590 CPU — for full details on the Shift itself, hit up our PC Magazine review of the Maingear Shift. Our high opinion of the Shift didn’t change after spending significantly more time with it. We built an Intel Core i7-4960X Ivy Bridge-E comparison system using 16GB of low-latency Mushkin DDR3-2133, an identical SSD (Samsung 840 Evo), and tested both systems on a Radeon 7990 and Radeon 290X.

Each game was tested at 1920×1080, but we varied the visual settings somewhat depending on the game in question. We’ll break these down as we go. The goal was to preserve a testing environment that wouldn’t be completely GPU-bound, but to test games at graphics levels that were representative of how high-end gamers would configure each system.

We’ve also examined performance in terms of both frame rates and frame latency using the handy open-source tool FRAFS, which analyzes Fraps output. Frame timing is a metric used to examine latency and stutter in a game, and it’s an area where AMD’s Radeon cards had a rough time of it early this year. (See: After almost 20 years, GPU benchmarking is moving past frames per second.) AMD rolled out a frame-pacing driver this summer and has been updating it regularly since then, so we can also see what kind of difference it makes compared with the single-GPU Radeon R9 290X. To do this, we’ve included a graph of the worst 1% of frames we tested: a snapshot of worst-case frame delivery.

Here’s the rule of thumb when it comes to evaluating frame latency figures. Lower is better, 16.7ms is the boundary for 60 fps gaming, and 33.3ms is the 30 fps mark. But — and this is key — the differences matter more the higher the figure. The difference between 12ms and 16ms is much less noticeable than the gap between 25ms and 33ms, even though both represent the same 1.33x ratio. Frame drops below 30 fps are more noticeable than drops below 60 fps, and anything that pushes you into the teens or below 10 fps is particularly egregious.
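Since FRAFS works from Fraps frametime logs, the worst-1% figure is easy to reproduce by hand. Here's a minimal sketch (the function name and the synthetic timestamps are illustrative, not FRAFS's actual code): cumulative frame timestamps become per-frame latencies, and only the slowest 1% are kept.

```python
def worst_percentile(frame_times_ms, pct=1.0):
    """Given cumulative frame timestamps (ms), as Fraps logs them,
    return the slowest pct% of per-frame latencies, worst first."""
    # Per-frame latency = gap between consecutive timestamps.
    latencies = [b - a for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    latencies.sort(reverse=True)
    n = max(1, round(len(latencies) * pct / 100))
    return latencies[:n]

# Synthetic run: steady 60 fps (one frame every ~16.7 ms) with a single stall.
timestamps = [i * 16.7 for i in range(100)]
timestamps = timestamps[:50] + [t + 40.0 for t in timestamps[50:]]  # one 40 ms hitch
print(worst_percentile(timestamps))  # the ~56.7 ms stalled frame dominates
```

At a steady 60 fps every latency sits near 16.7ms, so a single 40ms stall immediately dominates the worst-1% bucket — which is exactly why this metric catches stutter that an average-fps figure hides.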

The FX-9590 at 4.7 – 5GHz Turbo would equal or outperform the 4770K at 4.5GHz — suggesting that yes, it might be faster than the 4960X, since that chip was clocked at stock speed.

Gallrog

Thanks for the link – I didn’t know about this test :D Cheers!

Joel Hruska

Based on what I saw from gaming, I think we can safely conclude there are only a few titles where the difference between AMD and Intel was noticeable at the 5GHz mark. Crysis 3 was one such game. Part of the reason this article took so long was because I held off, waiting for different drivers to test on the AMD system, to ensure the frame latency discrepancy I saw couldn’t be resolved.

Crysis 3 plays better on Intel with a 7990, no doubt.

But that appears to be somewhat more the exception than the rule. I should note that these results only reflect my thoughts on AMD at the highest end — I know there are cases where the FX-8350 leaves performance on the table compared to Intel CPUs.

marx

People, are you really stupid or what??? For only 2-3 FPS more, would you buy a processor that costs at least $300 more (often much more)??? I choose AMD — it’s cheap and gives you the same, on some games even better, performance than an extra-expensive Intel CPU… Well, we will see what happens in 2016 when the new AMD CPUs with the Zen architecture come out :) and with HT!!! HT which is 20% better than Intel’s HT… and with 16 cores and 32 threads, DDR4 memory, very low TDP, etc. This is the new Zen architecture… Bye, Intel… enjoy until 2016. :))))

You don’t think ahead. AMD uses up a lot more power to get there. This is why Intel has such an edge over AMD everywhere at the moment. In order for AMD to get where Intel is, it has to have CPUs clocked to 5GHz and above, consuming 220W+, where a normal i7 would not exceed 95W.

marx

True, Intel has an advantage, but a very small one (in gaming they are the same). Only the octa-core FX processors consume that much power, and not even all FXs… But when I look at it, I don’t care about power consumption — I have a PSU that could feed five 220W+ FX CPUs. If you don’t have a good power supply, the entire PC is crap; the PSU is the computer’s “heart.” OK, AMD (only the FX line) consumes a lot of watts, but Intel is abnormally expensive. “Performance per dollar”… if I spend that much money, I expect a lot of performance. Here is a GREAT example: the 860K (only 95W) costs only $56 on Amazon!!! The Intel i5 4430 (same TDP) costs $187 on Amazon. Now look at the benchmark — it has the same score as the ultra-cheap 860K! (Much like the i5 2500K versus the extra-expensive i7 2600K, link below.) Here is the link for the 860K vs. i5 4430: http://cpuboss.com/cpus/Intel-Core-i5-4430-vs-AMD-Athlon-X4-860K

Now look very closely at how small the difference in performance is and how huge the difference in price is!!! Why would I buy, say, an i5 4430 if it has the same performance as the 860K and costs much more? Of course I will pay much less for the same performance and buy the 860K; I’m not paying three times more for the same performance. But this is not the topic we’re discussing. We’re talking about the new Zen CPUs, which come with hyperthreading 20% better than Intel’s HT, with a low TDP and, as you see, 16 cores and 32 threads!!! With DDR4 memory!!! HBM, etc. Is that enough to see how powerful these CPUs would be? And these are NOT server CPUs!!! They come with the new AM4 socket. I published a comment above with all the details; look at that. When you see it, tell me which Intel CPU will be better and stronger than this beast. You don’t think ahead, not me…

Brian Bahbah

I would have to say that for people who don’t have the extra $300 or so for the i7, getting the AMD instead IS thinking ahead. Forget 3-5 extra fps when some people don’t even make the price difference in a week’s pay… and that’s just the “difference” in price.

marx

Dude, I have four computers (plus one laptop) that I use, and parts for at least five or six more to build… This is my job — I am a computer technician and have my own service shop. I spend more money on computers daily than you spend in three months… and I have both AMD PCs and Intel PCs… But I am talking about price/performance. For example, the 860K, the i5 4430, and the i7 2600K: the 860K and the 4430 have THE SAME performance, the same score (link below); the 2600K is a LITTLE better and costs 5-6 times more than the 860K; the 4430 is 3 times as expensive and has the same performance!!! PERFORMANCE PER DOLLAR!! Do you understand what I’m saying, or do I have to draw it for you? For much, much less money you get the same or better performance… I know very well what I’m talking about, and as I said, I have both AMD and Intel, and for gaming AMD is better… AMD is not for benchmarks! You see its strength when you use it. AMD is fully optimized for gaming and Intel is not — Intel is also very good for gaming, but it is not optimized for it. Now I’ll show you AMD vs. AMD (link below), e.g., the 860K vs. the 8320E (a newer version of the 8320 with a lower TDP) and the 860K vs. the 8300… The performance is also the same, and all those CPUs have 8 cores while the 860K has only 4… so why is it better or the same? SINGLE-CORE PERFORMANCE!!! Now enjoy the benchmark… (I forgot — what do you say about the new AMD CPUs with the Zen architecture and hyperthreading technology? 16 cores and 32 threads, low TDP, DDR4 memory, etc. — and these are NOT server CPUs… they come in 2016. I wonder which Intel CPUs will be stronger than these AMD CPUs… hmmm.) LINKS (look at the price of both CPUs):
860K vs. i5 4430: http://cpuboss.com/cpus/Intel-Core-i5-4430-vs-AMD-Athlon-X4-860K

How can you be so stupid and uninformed, OMG… What kind of school did you finish? Plumbing, maybe???

amitoj singh

And what did you graduate from huh?

marx

computer technician.. and you??? you hodja? witch school have?none… no school in your village…

marx

are you have school in your village? do you even have electricity in the shack?

amitoj singh

You know you are same as those console lovers trying to prove console is better than PC xD Butthurt AND fanboy!!! LOLOL

marx

Why you change the subject? what school you have? none,methinks… from CPU on console :D how you stupid!!! OMAllah!!! only what you can have is SAIGA from 87th :))) go away fucking pig…

amitoj singh

Shit Inside xD LOLOL Fucking pig shit inside your head for sure

amitoj singh

you fucking pig
Computer technician LOLOL!!
Screwing and unscrewing PC cases in a shop right!! xD
copy posting the data
Do you even know what the fuck that thing you just copy pasted means
Worthless worm xD
And i am a computer science engineer with specialisation in IT infrastructure you pig

ok,then you fucking Buddhist!!! even worst!!!!!!! yeah,you have a great technology!!

amitoj singh

LOL fighting by religion
Why not fight like a man you bitch!!

marx

like man? how? you are monkey!!!

marx

turban would not be help you when I’d started beating you. looked you,what you can??? you are pussy,only here you are FIGHTER!!! :)) wait,are you gay??? you look like gay,beacuse I ask… do not worry,my sperm,Allah,Budda,God, our sperm will feed you and your family. Taliban, peace with you!

amitoj singh

How are you dick sucker!
Your mom is doing a pretty good job sucking my cock!

amitoj singh

Your mom’s pussy is so cool that i inserted an 6950x up her pussy slot and overclocked it to above 6 ghz and it still remained below 40C. Really nice job!

amitoj singh

And never disrespect any religion you uneducated fuck!

marx

FUCK RELIGION!!!! :D I am Ateist.. fuck you Budda-Allah

amitoj singh

Yeah you will obviously hide behind god!!
you scared piece of shit

marx

in Allah ass I hiding :D

amitoj singh

Yeah you are hiding in allah ass so basically you are shit( proved) you worthless scum
Shit inside LOLOLOL xD

marx

LOLOLOL :))) like little monkey!!!! Allah will help you… your turban of camel dung will be blessed with holy Allah sperm… And I blessed your mom with my sperm. your whole family I will blessed with sperm in mouth. Now,say to me: Thank you. :D

Is the FX-9590 a good gaming CPU? I’m deciding between the FX-9590 and the i5-6600K, as they are around the same price for the system I’ll be getting (the FX-9590 would have a high-end cooler while the i5-6600K would have a mid-range cooler). Which would be best to buy? I plan on overclocking, which is why I mentioned the coolers. Any advice is very much appreciated. Thank you.

Andrew

Now that these chips can be purchased for under $400 :)
Power usage is insane though.

Joel Hruska

Power usage is definitely higher than Intel. Can’t say much different.

Andrew

Someone who already has an AMD rig and wants to upgrade the CPU might buy one of these, but I really doubt people will be doing fresh builds with one of these as the core.

IlMortuatore

I can’t figure something out: an FX-4100 with a Titan at 1080p Ultra gets about 43 fps, while my FX-4170 and HD 7950 would get 20 fps at the same settings. The Titan is a monster compared with the HD 7950, but an FX-4100 paired with a Titan should be a huge bottleneck for the card. How does this happen?
I mean, the FX-4100 should be far too weak for a beast like the Titan, but it gets 43 fps at Ultra in 1080p anyway… Dafuq?

Joel Hruska

Clearly, either there are differences in driver performance or the game isn’t as CPU-bound as you think. Though I’d never pair a Titan with an FX-4170.

marx

NEW KING COMES!!!
According to Fudzilla, the new CPU will offer up to 16 Zen cores, with each core supporting up to two threads for a total of 32 threads. We’ve heard rumors that this new core uses Simultaneous Multithreading, as opposed to the Clustered Multi-Threading that AMD debuted in the Bulldozer family and has used the last four years.

Each CPU core is backed by 512K of L2 cache, with 32MB of L3 cache across the entire core. Interestingly, the L3 cache is shown as 8MB contiguous blocks rather than a unified design. This suggests that Zen inherits its L3 structure from Bulldozer, which used a similar approach — though hopefully the cache has been overhauled for improved performance. The integrated GPU also supposedly offers double-precision floating point at 1/2 single-precision speed.

The CPU layout shown above makes a lot of sense. We’re clearly looking at a modular part, and AMD has defined one Zen “module” as consisting of four CPU cores, eight threads, 2MB of L2, and an undoubtedly-optional L3 cache. But it’s the HBM interface, quad-channel DDR4, and 64 lanes of PCIe 3.0 that raise my eyebrows.

Here’s why: Right now, the highest-end servers you can buy from Intel pack just 32 PCI-Express lanes. Quad-channel DDR4 is certainly available, but again, Intel’s high-end servers support 4x DDR4-2133. Server memory standards typically lag behind desktops by a fair margin. It’s not clear when ECC DDR4-3200 will be ready for prime time. That’s before we get to the HBM figures.

Make no mistake, HBM is coming, and integrating it on the desktop and in servers would make a huge difference — but 16GB of HBM memory is a lot. Furthermore, building a 512GB/s memory interface into a server processor at the chip level is another eyebrow-arching achievement. For all the potential of HBM — and make no mistake, it’s got a lot of potential — that’s an extremely ambitious target for a CPU that’s supposed to debut in 12 to 18 months, even in the server space.

Nothing in this slide is impossible, and if AMD actually pulled it off while hitting its needed IPC and power consumption targets, it would have an absolutely mammoth core. But the figures on this slide are so ambitious, it looks as though someone took a chart of all the most optimistic predictions that’ve been made about the computing market in 2016, slapped them together on one deck, and called it good.

I’ll be genuinely surprised if AMD debuts a 16-core chip with a massive integrated graphics processor, and 16GB of HBM memory, and 64 lanes of PCI-Express, and a revamped CPU core, and a new quad-channel DDR4 memory controller, and a TDP that doesn’t crack 200W for a socketed processor.

Joel Hruska

Why are you posting my own story in a comment?

Rod4Tech

Imitation is the highest form of flattery? :)>

Joel Hruska

*laugh* Fair enough.

Rod4Tech

:)> Joel, if you get a chance, please check out my other post to this thread. I’d really appreciate your input. Thanks.

marx

Your story??? Dude, this is a story from another site… You see where it says “Fudzilla”??? And you say that’s your story!!! Go away, noob… And the 9590 doesn’t go higher than 5.0GHz?? Do you need proof? Google it a little…

AMD is not for benchmarks… You see AMD’s performance when you play a game… AMD makes gaming CPUs, not benchmark CPUs… I get the same FPS with an 860K (1080p) as a guy with an i5 4690 (all other components identical, only the CPU differs)… Believe me or not — your problem — but this is a fact: same fps, same game, same maps, same components…

It would be interesting to see them compared at 2K resolutions — would testing at that resolution put more pressure on the video card and the CPU as well? What’s the TDP of the FX and the 4960X? Now if only they could test Intel/AMD vs. AMD/AMD vs. AMD/Nvidia vs. Intel/Nvidia setups.

Joel Hruska

The 9590 has a TDP of 220W. The 4960X has a 130W TDP. But if you’re buying a $3500 – $5500 computer, you probably don’t care. ;)

Also, a consideration of Nvidia performance on both platforms was far outside the scope of this project.

Phobos

That is true — a $3500-$5500 PC would be overkill for me and many others. I’m happy with my $700 PC. Hope to see those 2K reviews soon.

marx

Do you know any other CPUs????? In ALL your posts, you talk only about the FX 9590 and the 4960X!!! In every post!!! Not one post of yours mentions another CPU! Did it take you two years to learn about these two CPUs??? :D OMG!!!!!!!!!!!!!!!!!!!!!!!!

amitoj singh

shut up!
no one is listening to you!
Don’t you get it you uneducated piece of shit who doesn’t know how to talk to others

marx

go fuck a pig!!! dont read if you dont like it… fck Inttel fanboy. are you jealuos??? Intel sinking like Titanic… Why are you disturbed if no one listening me?? little pig is disturbed :D LOL

marx

and fuck you!!! fuck off out here… taliban

amitoj singh

Good things come at a cost, bro.
Surely the i7 4960X is expensive,
but what about the i7 4790K and even the i5 4690K? They perform better than this.

amitoj singh

Taliban LOL bro xD

amitoj singh

everyone with a beard and a pagri is not a taliban
don’t know who told you this

marx

no way!!!!!!!!!!!!!! really?????

marx

ALLAH INSIDE!!!!!!!!!! :D with little piggy :)))

marx

you need education Taliban!! now I se how much you know about CPUs… ZERO!!!!

amitoj singh

yeah shit inside you are correct!
Guide me please!

Tequila_Mckngbrd

Whoever “just games” on these CPUs is an idiot. Anyway, it’s not difficult to get 4.8GHz on a 3930K (below 1.4V on most chips); not sure why it’s being compared against the overpriced 4960X at stock.

It would be another story if the 9590 could get to 5.6GHz (within voltage spec limits)…

Joel Hruska

It was compared against the 4960X because when I began the article, that was the best comparator for it. A death in the family somewhat delayed the piece, which is why I added the R9 290X results. Falling back to compare against an entirely new CPU was not possible.

And the piece compares two chips at stock speeds, meaning the speeds they ship at. The FX-9590 is not an overclocked chip — it is a chip AMD sells at that clock speed.

Tequila_Mckngbrd

Sorry for your loss. Good review regardless of it being stock. AMD stepping up their game is always a good thing!

Joel Hruska

I can tell you this: I wasn’t able to push the chip at all past its stock speeds, and the EPIC 180 that Maingear ships is excellent — top of the line, for air cooling. Water or freon might have done somewhat better, but these chips don’t have any headroom to speak of.

xostrowx1991

This is a super old post but I just have to respond to it. So you think that just because both chips are at stock it’s fair to compare? No it isn’t lol. The FX 9590 (by your own admission I might add) can’t be overclocked AT ALL beyond that stock turbo 5.0ghz; whereas the 4960X can easily hit 4.6-4.7ghz with a good cooling setup, and would tear the hell out of the 9590 in EVERY single test at that speed. So it’s much more fair to compare both processors at their max speed, 4.7ghz on the 4960X and 4.7ghz w/5.0 boost on the 9590 since neither can go higher than those figures typically.

Joel Hruska

“This is a super old post but I just have to respond to it. So you think that just because both chips are at stock it’s fair to compare?”

Yes, for several reasons.

1). This wasn’t a component review but a *system* review. Maingear shipped me a system configuration to test. While I was willing to swap out the video card to compare the impact of single vs. dual-GPU, making crazy modifications to the Maingear rig would ruin the point of reviewing the configuration as it was shipped to me. If I’d been using a regular AMD motherboard + CPU, I’d have included more overclocking data.

2). The motherboard Maingear shipped me had no end of trouble with overclocking settings. It wasn’t related to the CPU — touching the CPU voltage or clock speeds in *any* direction resulted in system instability. Lowering the CPU multiplier to run *slower* actually made the system crash.

I communicated with Maingear throughout this process, but I would’ve needed a new motherboard to solve the issue I was having. Again, not worth swapping out the entire board. I wanted to overclock the CPU to test the 180mm cooler that Maingear included with the system, not just to push the chip.

3). Plenty of people — most people, in fact — don’t overclock. Even among enthusiasts, it’s a fairly niche market. So comparing chips at stock speeds for stock pricing is the only fair way to compare them.

4). The 4960X can demolish the FX-9590 in any number of non-gaming benchmarks. The fact that it wins in gaming tests by small margins in many cases is proof that the delta between AMD and Intel tends to be smaller in high-end gaming than, say, when encoding video or unzipping files. The relative impact of overclocking, therefore, is going to be diminished.

James Tolson

I’m not going to upgrade from my 2006 rig anytime soon. It plays every game I throw at it perfectly well…

My next PC will be a 10GHz AMD build with 128GB of DDR5 RAM, a 12TB hard disk, and a DirectX 15-capable Radeon card. Until then, anything else would just be a waste of money.

Scott Jackson

Your 2006 rig can run Crysis 3?

Joel Detrow

Pretty sure he’s joking. Hyperbole.

VectorRoll

He probably is just blowing smoke.

But for argument’s sake…

It actually is very possible. Playing a game “perfectly well” is not necessarily playing it at full settings or getting great FPS. You really don’t need great FPS to play a game well. Most people can’t even notice the difference between very high frame rates (100+ FPS) and something in the 30 to 60 FPS range; the human eye is just not that sensitive. It’s only when games stutter and lag that people notice — that is where better performance shows.

Anyway, in 2006 there was a dual-core Athlon 64 X2 at 3.2 GHz. Not sure what motherboard and GPU combination someone would have paired with it, but CPU-wise that meets the minimum requirements for Crysis 3, so the game would run on that CPU.

That is just for argument’s sake.

94xj

An M2N-32 SLI Deluxe (my personal board at the time — loved it) and a pair of 8800 GTXs or Ultras… it’s plausible. Highly unlikely, but plausible.

VectorRoll

I have a P52N-SLI Premium mobo with two EVGA 8800 GTS 640MB GPUs that is still going strong. It has an E6600 Core 2 Duo, which is nothing special. (That’s a 2006 CPU as well, as a matter of fact — not the best in that line, but back then it didn’t matter.) I’ve played the first Crysis with it, and even BC2, but I don’t play much on it anymore since I’ve built other PCs since then. In fact, it’s just sitting here unused; not sure what I’ll do with it.
ASUS makes some great mobos, and products in general. That is usually my first brand of choice. :)

94xj

My old system is down to just boring daily use. Gone are my 8800s and 6400; just a little 5200+ Brisbane and some 9600 GSOs (G92), because I used it for Folding@Home for quite a while. It’s still a strong enough computer to handle new games at lower settings… but why bother, with a 1090T and dual 480s, or my newest 4770K-and-twin-780 box?

That was in 2007. AMD didn’t have anything better than 2.8 GHz dual-core in 2006. OTOH, 2+ Opteron 290/890 or 2220/8220 SE processors… My gaming PC still has a pair of Socket 940 Opterons.

The real problem is Crysis 3 requiring DirectX 11, which wasn’t in cards until years later. Of course, he may have upgraded his GPU or else just isn’t into Crysis.

Andrew

Yearly upgrades to play new titles at max settings is ridiculous.

zapper

If you have spare money
………Buy an i7 Laptop with latest Nvidia chips
Else go to AMD Laptop

Ken Luskin

AMD chip cost = $400
Intel chip cost= $1,000

A 2.5-fold increase in price is like comparing a Chevy Volt to a Tesla Model S.

The Intel chip’s roughly 5% performance advantage bears no relation to a price 2.5 times higher.

Whereas the Tesla Model S’s performance and user experience really are 2.5-fold better than the Chevy Volt’s.
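Taking the comment’s round numbers at face value (the ~5% performance edge is the commenter’s claim, not a measured result), the value arithmetic works out like this:

```python
amd_price, intel_price = 400.0, 1000.0
amd_perf = 100.0                  # normalize AMD's gaming performance to 100
intel_perf = amd_perf * 1.05      # the commenter's claimed ~5% Intel advantage

price_ratio = intel_price / amd_price             # 2.5x the price
perf_per_dollar_amd = amd_perf / amd_price        # 0.25 perf units per dollar
perf_per_dollar_intel = intel_perf / intel_price  # 0.105 perf units per dollar

print(price_ratio)                                   # 2.5
print(perf_per_dollar_amd / perf_per_dollar_intel)   # ~2.38x the value per dollar
```

In other words, under these assumed numbers the cheaper chip delivers roughly 2.4 times the performance per dollar, even while losing the absolute benchmark.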

Joel Hruska

As I’ve said: This article was delayed somewhat by unavoidable circumstance. When I began it, the $1000 Intel chip was the best price comparator for the $800 FX processor.

That is now no longer the case. I had two choices: Keep the HD 7990 and compare against the 4960X, or retest the Intel system with a quad-core of more appropriate choice.

I decided readers would rather see cutting-edge single-GPU results with an explanation that the Intel CPU was no longer the best comparison rather than dropping back to a different Intel chip (where the performance delta would be negligible) but keeping the same 7990.

Ken Luskin

Joel, I was not criticizing your article.

I was simply noting that AMD’s chip at $400 is a far better value for performance than a chip that costs 2.5 times as much.

This is bad news for Intel going forward.

In the future, PC-type equipment will be sold mainly for its GRAPHICS abilities.

In the cloud paradigm, there is really no reason for the average consumer or business to need an expensive Intel CPU based machine.

The next era of computing is all about the GPU!

AMD has a long term plan to profit from Huge Trends favoring their GPU and Server capabilities

I have written a number of articles that discuss the nexus of a few huge long-term trends, and AMD’s unique ability to profit from them.


marx

AMD is cheaper but has the same or even better performance than a CPU that costs $1000!!! Also look at performance per dollar ;)

marx

WHAT DO YOU SAY TO THIS??? 1000 TIMES BETTER THAN ANY $1000 INTEL CPU

Master Troll

I think I’m going to have to pick up one of these with two or three 290X cards next year… I shopped around, and Intel’s and Nvidia’s prices were too high for their top-of-the-line products.

Gromanon

I am puzzled why the frame rates from the 7990 are so terribly low. My FX-8350 + 7970 gets frame rates that are only 15% worse than this.

Is there a bottleneck in the test hardware or something?

Joel Hruska

It’s going to depend on the game and your detail settings. Scaling in most titles is considerably better than 15%, but if you aren’t running at the same detail levels I am, you won’t see the difference.

Also, Hitman is quite CPU-bound.

marx

Low FPS with a 7990? That’s a dual-GPU card, one of the most powerful even today… I think the bottleneck is the FX-8350, because the FX-8xxx CPUs don’t have strong cores and have low single-core performance; only the 9xxx CPUs have stronger cores. On PassMark, believe it or not (check if you don’t believe me), after the two 9xxx CPUs comes the 860K. The 860K is in third place among AMD CPUs by single-core performance (all the FX-8xxx and some 9xxx chips have weaker cores than the 860K), and single-core performance is what matters most for gaming. You can check this on PassMark or any other benchmark site, like CPU Boss. The 860K is very powerful and well optimized for gaming; everyone who has one gets high FPS in games. Proven and personally tested. (And I forgot: it’s the most powerful CPU for socket FM2+.)

This whole market is just overkill anyway.
I had a 4100 paired with a 670 and it ran everything at 1080p at 60fps.
Upgrading to a 6300 boosted performance at higher resolutions, but on a 1080p monitor I only saw a 5% increase.

I can’t imagine going up to the 9xxx, especially since it’s not the best CPU for non-gaming purposes. What kind of upgrade is milliseconds of latency and one or two frames?

Joel Hruska

Upgrade performance is always resolution and context dependent, but the FX-4100 to FX-6300 family is a very small upgrade. I would not expect more than a 5-10% performance improvement, tops.

marx

Compare those CPUs with the 860K… the 4100, 6300, 760K, etc. aren’t even close to the 860K, because this CPU (for now) has the newest technology: it’s not Bulldozer but Steamroller, and that’s why it has such strong cores and such a high single-core performance score (the most important thing for gaming). The 860K even scores better than many Intel i5 CPUs, and better than some i7s, in single-core performance, and in other respects it’s much like the i5s… and it costs only $70!!! (Check this on PassMark or CPU Boss.)

nissangtr786

The irony here is that you would get similar fps with a Haswell i5. It would have been better to check frame latencies, which is where AMD struggles. Also, the FX-9590 at 5GHz is equivalent to a Haswell core at 2.4GHz and an Ivy Bridge core at 2.5GHz. All I am saying is that the FX-9590’s single-threaded performance is really low-end. You only have to look at GIMP, SuperPi, and macro Excel benchmarks to see how poor Piledriver’s IPC is. Anyway, the i7-4770K beats the FX-9590 at stock settings and destroys it when overclocked to 4.5GHz+, and at stock the i7-4770K consumes about as much as an A6 APU, while the FX-9590 consumes around 150-200W more than a stock i7-4770K.

Andrew

It doesn’t beat it enough to warrant a 250% price difference.

nissangtr786

It does, as it’s much faster and sips very little power on a cutting-edge 22nm manufacturing process. The FX-9590 is the worst-value CPU in history when all it is is an FX-8320 at 5GHz. It’s like Intel releasing a quad-core i5 K-series at 3.4GHz stock, turboing to say 3.8GHz, for £120, then selling the exact same CPU at 4.7GHz stock, turboing to 5GHz, for £200 more. It’s like buying an R9 280X for £200 over RRP just because MSI or whoever overclocked it to the max for you, when it’s the same product. The FX-8320 is £106.36 and the FX-9590 is £239.99 on Amazon for nearly the exact same CPU. They are still expensive even though they’ve come down in price, as an i7-4770K is about 30-40% faster in multithreaded apps once overclocked, something you can’t do with an FX-9590, plus the heat the FX-9590 gives out, according to KitGuru, is a ridiculous amount.

It’s a bit like Intel releasing an i5 at 5GHz and doubling the price. You still get the same transistors; they’ve just clocked it to its max already. It costs AMD no more to make an FX-8320 than an FX-8350, FX-9370, or FX-9590, as they are the exact same CPU with the same transistors, all overclockable too. The only extras needed are better cooling, a better mobo, and a better PSU.

Hugh Briss

You’re a complete idiot. All of your posts here are incoherent, information-free, and full of wild speculation and errors. Get off the Internet before you hurt yourself.

Joel Hruska

There is no irony in this. The PCMag review is linked and provides more general application data.

Also, I did check frame latencies. In fact, I refer, specifically, to frame latencies. They’re largely identical, with a moderate gain for Intel.

Matthew

Look, mate, AMD is good for its price, OK? Sorry, but I had to say this. AMD is not built just for gaming, and by the way, AMD has a much lower wattage than Intel. Look at the base power of the AMD and you will see that it’s much lower than the Intel’s. AMD only uses 220 watts when it’s running at 5.0GHz, and if you overclocked the Intel to that speed it would be more like 350 watts or more.
So the AMD costs much, much less and you still get very good performance out of it, a much better deal. I mean, what is wrong with all of you? Look at the specific specs of the AMD processor and the Intel processor and compare them.
I’m not a big fan of AMD; I like Intel much more, as it’s, let’s say, heavy duty, but I still respect AMD for its price and its competitive processing power.

nissangtr786

lmao, the i7-4770K is cheaper than the FX-9590 and faster than it at stock speeds in multithreaded benchmarks, and the i7-4770K can OC to around 4.6GHz, where it will beat the FX-9590 by 30% in multithreaded tests while still drawing 100W less at that clock speed. Intel is much better bang for your buck: a 22nm CPU plus iGPU. You say 5GHz; an Intel Haswell CPU at 2.4GHz is faster than that 5GHz in per-core performance.

Hugh Briss

The 4770K is $335 and the FX-9590 is $250.

marx

And performance in gaming is the same, the same FPS… you have the benchmark right here. Performance per dollar: AMD wins. For a much lower price you get the same or better performance.

marx

What is this?? The 860K (a quad which costs ONLY $70) is that close to the “big” i7-4770??? Ha! Performance per dollar: the 860K wins.

These are Intel fanboys and they will not accept the evidence and the facts; they look only at the price, not the performance. And the fact is, Intel is going down. Intel = Titanic; AMD “Zen” CPUs = iceberg.

Frank Ihrer

You can get a 9590 now for $279

c k

The authors’ conclusion misses the obvious, which is simply that the Intel CPU is absurdly more expensive than the similarly performing AMD.

Nelson

You’re an idiot.
The AMD being compared costs about 400 dollars.
The 100 dollar AMD processors are terrible.


Hugh Briss

The AMD FX-9590 can be had for as cheap as $220 if you catch a sale or promo code.

chlodzenienet

What is the max. temperature of FX-9650? ;)

GaZ

Well, I bet the lower-TDP Intel still runs hotter than the AMD!
All overclocked Intels run hotter than Intels on stock cooling or water…

JackD

Great article. Intel is way too overpriced for the performance you get over the AMD boiler. It’s like there is just expensive stuff for people with more money than brains.

zayahv2

That kind of thinking is exactly why wealthier people tend to dislike the less wealthy. Just because I can afford to pay tons more for a bit more performance without hurting myself financially does not make me stupid. I build supercomputers as a hobby. I enjoy it. Rather than blowing $1500 a month on golf, I do this instead.

Nicolas Edwards

But you could build a more powerful computer for the same amount of money using AMD, so the stupid comment stands.

zayahv2

This is the problem with stupid people. AMD is not more powerful. I proved this when one of my 4670Ks out-benchmarked an 8350, and the 4670K is only $20 more. I anticipate your unintelligent reply will be something along the lines of the 8350 having better multithreaded performance, and while that might be a selling point to the clueless, I need single powerful cores for gaming, not lots of pitifully weak cores that most games do not support. When they do support them, we won’t need to revisit this argument because the situation will be different; it doesn’t apply to the here and now. Putting absolutely everything I just said aside, the 4670K is only $20 more, and while I appreciate your going all out to save me $20, the 4670K won’t lose by much to the 8350 in multithreaded work and will still be more useful for applications that are not multithreaded and require stronger single cores. Final point: the 9590 benchmarked 812 on Cinebench; my 3770K, which is one generation behind, benchmarked 850 with only four real cores and Hyper-Threading. ’Nuff said.

Nicolas Edwards

You stated you build supercomputers as a hobby, yet you don’t want more cores. This does not make sense to me. The cluster computers I work on do better with slightly weaker cores, but more of them. I don’t want to spend 100 dollars more on one chip when I can get the cheaper one with more cores and balance my workload and algorithms across them. As for your stating there is only a 20-dollar difference: I can find the AMD 8350 for 190 and the Intel for 239. That’s a 50-dollar difference.

I also did not understand your comment that you build supercomputers when you go on to say you only use your computer for, or care about, gaming. You are not building a supercomputer if it only excels at single-threaded applications.

You do not build supercomputers; you build gaming rigs. They are one-trick ponies. Supercomputing would need the many cores of the AMD, and you can also set them up to multitask many more applications and projects.

I have three research projects running currently and am glad I do not have to close any of them just to free up a core to run other things. The process of opening these apps and setting up the workspaces takes 10+ minutes of button clicking and finding the files needed, so being able to leave them up and ready to go when I am is vastly better for me. I also like the ability to program my stuff so that it will dynamically distribute across the cores. I could do this with Intel, but with four cores there is less room for testing; you can’t see if it is truly scaling across all the cores.

zayahv2

The fact that you could not figure out what I meant by supercomputers, when I provided all the details necessary to fully understand it, puts your deductive reasoning skills in serious question. Tons of weak cores might be better for heavy multitasking with a very specific function, for example mass virtual-machine servers. But for things outside gaming, such as video editing, encoding, and mass number crunching, Intel still enjoys over 50% higher IPC than AMD, so having tons of cores, while good for massive multitasking, does not make it better at any of those. AMD has fallen so far behind on IPC that even with twice as many cores it cannot compete with Intel. Toss in Hyper-Threading and it’s icing on the cake. I have stress-tested my rig by running multiple programs designed to max out all cores, encoding a video, copying gigabytes of data, and running a GPU benchmark or playing a graphics-intensive game such as Crysis or Battlefield, and it was absolutely playable, though the video encode obviously took longer. All in all, for the microscopic difference in price there is no reason to pick AMD unless your research project’s funding is on a minimum-wage salary.

Hugh Briss

Sorry, but the FX-9590 beats the i7-4770K on x264 benchmarks. You’re not just wrong, you’re either lying or pulling “facts” out of your ass.

zayahv2

Good job upvoting yourself, troll. Let’s say, for the sake of argument, I give you that. That’s one very specific function, while it still gets outperformed on absolutely everything else. I own an FX-8350, a 3770K, and a 4570K. I know what I am talking about, and you are clearly either a troll or a fanboy; pretty much the same thing anyway.

Hugh Briss

Unless you’ve dumped out $1000 or more for a recent generation X-series i7, the FX-9590 processes with x264 faster…here, have some proof, since anyone can argue bullshit theoretical crap and use ad hominem insults all day long and never back the shit-talking up with hard facts.

Let’s address your primary assertion. Regarding your argument that it gets outperformed on “absolutely everything else,” you’re not entirely wrong; depending on the workload, it often does end up benchmarking marginally lower than the top-end K-series Intel chips (i7-4770K, i7-3770K, etc.) but how much more cash does one have to pump out to get access to that marginal performance boost? Let’s find out.

As of this post, the retail price for the FX is $260 and the i7 is $335 (ignoring any promo codes and sales on both chips) which is a $75 price difference. What can you do with that $75? Get a better case, a nicer motherboard, a 120GB SSD, or 8GB of RAM, that’s what. You can pay $75 more for the Intel chip if you like; I’ll keep my $75 and not really care about the 7% performance hit in AES encryption, SuperPi, and wPrime it costs me. Hmm, I don’t sit around calculating pi digits and finding prime numbers all day long, so that sounds like a pretty sweet deal.

One more thing: there is no such beast as a 4570K. Either you have an i5-4570 or an i5-4670K. The 4670K currently (2014-10-29) sells for 10% cheaper than the FX-9590 but benchmarks out about 26% slower, so I’m going to ignore that you even brought it up.

zayahv2

Since you are using encoding as your main selling point, those small savings would be wiped out in under a year by the AMD’s higher power requirements. For someone who claims to be so smart, you sure are ignorant of the 5 and 6 digits being right next to each other, so I’m going to ignore that you even brought that up and upvoted yourself again. Your entire selling point just crumbled. Next.

Hugh Briss

It takes 7.35 hours of continuous 100% load across all eight cores to rack up one more kWh than the i7-4770K. Considering a transcode happens in 1-2 hours and power costs ten cents per kWh, you’re absurdly unrealistic and full of shit. You’re actually trying to argue that the TDP is all that matters now. If that’s the concern, I recommend you try a laptop with a ULV i3 and stop comparing the size of the TDP ePenis all day long.
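The 7.35-hour figure checks out if you assume roughly a 136W gap between the two chips at full load. A quick sketch (the 220W and 84W figures are the chips' TDPs and the $0.10/kWh rate is the one quoted in the thread; they are illustrations, not measured wall power):

```python
# Break-even arithmetic behind the "7.35 hours per extra kWh" figure.
# Assumed figures: 220 W for the FX-9590 vs. 84 W for the i7-4770K at
# full load, and $0.10/kWh -- TDP-based illustrations, not measurements.
FX_WATTS = 220
I7_WATTS = 84
PRICE_PER_KWH = 0.10  # USD

delta_kw = (FX_WATTS - I7_WATTS) / 1000   # 0.136 kW extra at full load
hours_per_extra_kwh = 1 / delta_kw        # hours of full load per extra kWh

def yearly_extra_cost(load_hours_per_day):
    """Extra electricity cost per year at a given daily full-load duty."""
    return delta_kw * load_hours_per_day * 365 * PRICE_PER_KWH

print(round(hours_per_extra_kwh, 2))    # 7.35
print(round(yearly_extra_cost(2), 2))   # 9.93 -> ~$10/yr at 2 h/day
```

By this arithmetic, the $52/year figure claimed later in the thread would require roughly 10.5 hours of full eight-core load every single day.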

zayahv2

Wow, talk about being a complete and utter m0r0n. Your main reason for buying is a system that can only outperform its competition in encoding, yet you won’t be using it to encode much. LOL. What a tool. The math works out to $52.00 a year, and that’s just the first year. I am not arguing that TDP is all that matters; that’s just your unintelligent brain working in bad ways. You stressed price as a factor; well, it’s still more expensive. Your fanboy is showing.

Hugh Briss

Now you’re just being a troll. Your tears are so delicious. Keep crying.

AtariMaxiToriyama

Are you not going to respond to his assertion that after a year the Intel works out cheaper?

It’s an expression, as in the baddest of the bad current tech for an enthusiast gamer. Not everything is meant in a literal sense, and reading the atmosphere is an important part of life.

mrcead

AMD = Freight Train.
INTEL = Bullet Train.

And they do not compete directly, that would be foolish. It is more lucrative to dominate a sizeable niche than to beat out a rival.

Use the right one for the job that needs doing.

Sai Hatkar

I am upgrading my desktop into a gaming PC, so what stuff should I go for?

paco’s tacos

Depends on what you currently have. If you’re using an Intel CPU such as an i3, Sandy Bridge, Ivy, etc., then go for an upgraded version like the i5 (the i7 isn’t worth it).
If you go for an AMD CPU you’ll have to get a new mobo.

Paul Omans

I stopped reading when I got to the Crysis 3 section. The correct word would be “breathe”, not “breath”. You “breathe” by taking in “breaths”. These two words are completely non-interchangeable, in any way shape or form. Why is this such a common mistake? I see this mistake repeated all over the internet. Did anyone pay attention in high school English class? I expected much better from you, ExtremeTech.

Joel Hruska

I’m sorry a typo so completely confounded you as to leave you unable to continue reading. Truly, you live a sainted life.

Paul Omans

I don’t understand why it’s such a common spelling error. It didn’t confound me, I just expected the writers at ExtremeTech to correct such a glaring error when proofreading the article. I’m surprised that I was the first and only person to notice and mention it. Can you elevate knowledge of the mistake to the appropriate people so they can fix it?

Joel Hruska

Because brains are funny like that?

I know the difference between “there,” “their” and “they’re,” but on occasion I’ll still type the wrong one. Occasionally I’ll be on the phone with someone while typing and I’ll inadvertently type a few words from the conversation into the story. These kinds of burps happen.

Obviously we don’t *want* them in stories and we have editors, but typos can slip through — especially if they aren’t obvious ones.

“braethe,” for example, would be caught immediately. “breath” vs “breathe” might not be, especially if the editor was primarily checking for other kinds of errors.

Remy Dyer

I’d like to see those frame latency results on a power scale:
The idea is, human perceptions, particularly at short times, operate logarithmically anyway.

There should be a threshold of “good enough” below which improvements don’t make much perceptual difference, whilst times longer become rapidly more objectionable.
A graph scaled that way would be much like a VU meter with a log scale: Volume seems to be “linear” to our senses when we’re actually changing it logarithmically.

The maximum sharpness of human perception seems to fall around the 75 ms mark, but we can perceive fractional changes perhaps 1/3rd of that. (so a 15ms difference is noticeable). By sharpness I mean this: “a moment” in time seems to really be a gaussian shape around 150ms long, at its finest. But our senses are essentially deeply pipelined, so that “moment” slides along in much smaller steps, say around the 15ms mark.
In practise no accurate timing really applies to the human brain, it changes depending on alertness and other factors, like mood, energy and stress.

Blink, and it takes actually around 300ms, which your brain “edits out” of your immediate memory of time. This is why the “Stopped clock” illusion happens: Any rapid change of vision, a blink, a glance or a rapid cut in a movie, causes your visual centre to have to go to a lot of extra processing to pick up the thread again. For the delay while this happens, there is nothing for you to see, so rather than notice reality lagging, your short term memory effectively “retcons” out the gap.

This has application in a fight: it gives you a moment to initiate a punch right as your opponent shifts his focus. He won’t see the blow coming, and by severely upsetting his senses during that downtime, his brain will become confused and unable to pick up the stream of consciousness, like a damaged MPEG file missing a keyframe. This will be a knockout blow; his consciousness will have to start from scratch, with the last few moments before the blow missing entirely.

Notice that our perception of time actually is inversely related to how much we’ve got going on: When we focus, time seems to flow rapidly. Perhaps this is because we most perceive time when there’s no interesting processing for our brains to do but to notice time passing…

Anyway, 3D graphics for games, as presently done, are a huge brute-force approach focused on the wrong metric: speed. They should instead be focused on eliminating worst-case latency (see idTech5: it always seems to run smoothly, despite mostly only going 30fps!). Much lower frame rates could be good enough with full-screen post-process motion blur. But the challenge is that the direction of that blur depends on the movement of the image across the user’s eye. It shouldn’t be assumed that the user is staring motionlessly at the centre of the screen; in reality we try to track interesting moving objects across it. If we’re locked onto one such object, then that object should be drawn sharply, and the motion blur applied in the opposite direction and speed to the rest of the screen! So low-latency gaze tracking is not optional for that to work…

Or we could just keep rendering everything at such a high pace that blur effects vanish…. for fast moving objects however, this is nearly impossible. Shake your mouse: How fast do you need to move it before it appears as a splatter of disconnected objects whilst in motion?

I estimate something like ~500 fps+ would be needed to make that effect vanish reliably, absent gaze tracking…

The problem is, that the gaze tracking has to be at least lower latency than our own perception of time, and this implies at least ~260fps for the eye-tracking camera. It must do so in order to accurately measure the speed of the eye, so a mere ~26fps is actually no good. (I’d actually prefer around 2.6k fps to really nail it). Unfortunately, high frame rate image sensors like that are not cheap :(

Sorry, getting sidetracked.

Anyway, latency performance metrics are a step in the right direction; relatively scaled frame latencies would be better. What I suggest is dividing the latency numbers by 15ms, then raising the result to an exponent (maybe start with two): 7.5ms then becomes 1/4, 15ms becomes 1, 30ms becomes 4, and so on. This way the longer outliers will become much more noticeable. The exact reference number to use (15ms), as well as the exponent, really ought to be found by social-science-style research. Maybe get a bunch of people to watch a run and press a button whenever they notice lag, or something.

Perhaps another way would be 20*log10(actual latency / minimum perceptible latency), the way loudness is usually shown. A Fraps run in that form would be interesting, but it would need resampling to be properly graphed and compared.

The few times I’ve seen those plotted up have been as just a series plot (one number per x pixel.) which makes higher frame rate runs longer than shorter ones, and it means that different parts of the run aren’t easily comparable, as their times don’t correlate.

ie, for each ~0.3 of a second along the run (a moment), show the highest of those numbers from the frames in that moment, OR, Just XY scatter plot the numbers, giving each X coordinate as time along the run.

The Y values are then scaled as suggested above. I’d find such results between different video cards/CPUs much more indicative of the actual perceptual difference. Too much work?
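The rescalings proposed above are easy to prototype. A minimal sketch, where the 15ms reference, the exponent of two, and the ~0.3s "moment" window are the commenter's suggested starting values, not established constants:

```python
import math

REF_MS = 15.0  # suggested "minimum perceptible" reference latency

def power_scale(latency_ms, exponent=2.0):
    """(latency / 15 ms) ** exponent: 7.5 ms -> 0.25, 15 -> 1.0, 30 -> 4.0."""
    return (latency_ms / REF_MS) ** exponent

def db_scale(latency_ms):
    """VU-meter-style decibel scale: 20 * log10(latency / reference)."""
    return 20 * math.log10(latency_ms / REF_MS)

def worst_per_moment(frame_times_ms, moment_ms=300.0):
    """Worst frame latency within each ~0.3 s 'moment', so runs can be
    plotted against elapsed time rather than frame index."""
    moments, worst, elapsed = [], 0.0, 0.0
    for t in frame_times_ms:
        worst = max(worst, t)
        elapsed += t
        if elapsed >= moment_ms:
            moments.append(worst)
            worst, elapsed = 0.0, 0.0
    if elapsed > 0:
        moments.append(worst)  # trailing partial moment
    return moments
```

Feeding either scaling function the per-moment worst values would give the outlier-emphasizing plot described, with long frames amplified and sub-threshold frames compressed.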

Yogi Chandra Si YoowMa

How do I upgrade from an AMD E-350 processor to an AMD FX-9590?

darren evans

The performance difference is not worth that price. Only an idiot would pay that much more for a CPU for games. I’m all Intel, but no point messing about: the FX-9590 is the best price-to-performance CPU in this comparison.

robert

Hi, I don’t know if I read this test or comparison right (I am open to the fact I might be thick), but you’re comparing a CPU, the Intel Core i7-4960X Ivy Bridge-E, which is over £1500 (http://www.amazon.co.uk/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=Ivy+Bridge-E+4960X), to a CPU which is £173-£190 on Ebuyer and is 5% slower. I think the AMD CPU is awesome for the money. Heat issues aside, an H100i cooler would sort that, so that’s £300 tops, and it’s only 5% slower. Wow, you guys are worse than me; I criticise everything, but that’s mad. A CPU selling for £180 or thereabouts gets beaten by 5% by a CPU which costs £1500, and the AMD is a bad CPU because of the heat issues? I can’t believe what I’m reading. Why didn’t you put the price difference in the comparison article?

Joel Hruska

You have to check the *dates* when an article was written, Robert.

This story is 16 months old. At the time it was written, the AMD chip was far more expensive than it is today. In fact, at the time it was written, the Core i7-4960X was the most appropriate Intel chip to compare against, as far as price was concerned.

DhruvMC

xD This would burn the hell out of your power bill…

marx

Hmmm… interesting :D Cheap AMD vs. extra-expensive, unprofitable Intel… What will happen in 2016, when AMD CPUs with the new Zen architecture arrive? It will be very interesting :)))


Rod4Tech

I recently built myself a low-profile desktop workstation based on the FX-8320. I must have lucked out in the silicon lottery, because I easily achieved a stable FX-8370-level (4.3GHz) overclock without having to increase v-core or resort to elaborate cooling, which I cannot do anyway in a slim-form-factor mATX HTPC-type case. Furthermore, I’m running dual displays courtesy of a DMS-59 cable connected to a low-profile JATON Video-PX658-DLP-EX GeForce GT 630 2GB 128-bit DDR3 PCI Express x16 graphics card. It’s certainly not a gaming rig, but it’s powerful enough for me to run 3D rendering applications like Blender or even AutoCAD. I’m already set to receive the Windows 10 update when it is released. I currently have 16GB of DDR3-1600 RAM in this rig. I love the power it has for its small footprint. When I stress-tested the CPU under full load using the Unigine Valley benchmark, the CPU temperature hovered between 48-53 degrees Celsius. That’s not so bad, is it? However, that GT 630 really got hot! The stock heatsink and fan that come with the graphics card worked OK, but man, was I relieved when the stress test was done, as temperatures really seemed to spike on the graphics card. Are there any other low-profile 2-4GB DMS-59 graphics cards out there you guys are aware of that run cooler, for small-form-factor systems like mine where space is often cramped and airflow particularly challenging? Mr. Hruska, I’d appreciate your thoughts on the matter in particular. I’m loving ExtremeTech and learning a lot from the site. Thanks for all that you do in making this a vibrant online community.

Joel Hruska

Let me see what I can find for you. How hot, btw, are we talking? Most GPUs these days can tolerate temperatures of 70-80C.

Looking around, I don’t see much better than what you have. DMS-59 is a very old standard.

Grats on the overclock. Most FX cores will step up a few grades, no problem, but getting above 5GHz is pretty tricky.

marx

No it’s not. 5.0GHz is nothing if you have a good cooling system… FX CPUs go up to 8.0-8.3GHz, STABLE, but with a very, very good cooling system. My 860K doesn’t run as hot as the FXs and can OC up to 6.3GHz… I got 5.0GHz without any problem; it’s stable and the temperature is very low, because I have a good cooling system. 5.0GHz is nothing for AMD CPUs; for Intel, yes. If an Intel goes up to 5.0GHz, it’s not stable and it overheats even with good cooling… For OC, AMD is king, no doubt…

Joel Hruska

We aren’t counting liquid nitrogen, dude.

I own a single-stage freon cooler that takes a chip down to about -30C. Using that cooler, I can push an FX-9590 up to a 5GHz flat speed for all cores as opposed to 4.7GHz / 5.0GHz Turbo.

They don’t go higher. I’ve tested three. Not all of them could run at a full 5GHz.

Delidding might get you slightly more, but the power requirements for the FX-9590 skyrocket as voltage rises, and there comes a point when more cooling simply isn’t enough. If the chip could run at 6GHz on air, AMD would be selling one. The fact that the TDP rises from 125W at 4.3GHz to 220W at 5GHz should tell you something about how AMD made those processors and what the chances for faster clock speeds are.

Yes, liquid nitrogen and liquid helium can hit clocks of 8GHz. That says nothing about what normal users can expect.
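The TDP jump described above is consistent with simple dynamic-power scaling. A rough sketch assuming P ~ f * V^2 and ignoring static leakage; the 125W/4.3GHz and 220W/5GHz figures come from the comment, while the implied voltage increase is an inference, not a published figure:

```python
# Rough dynamic-power sketch: P ~ f * V^2 (ignores static leakage).
# Inputs are the 125 W @ 4.3 GHz and 220 W @ 5.0 GHz figures above;
# the voltage ratio is an estimate derived from them, not a spec.
def implied_voltage_ratio(p0, f0, p1, f1):
    """Voltage ratio implied by the power model P ~ f * V^2."""
    return ((p1 / p0) / (f1 / f0)) ** 0.5

ratio = implied_voltage_ratio(125, 4.3, 220, 5.0)
print(round(ratio, 2))  # 1.23 -> roughly 23% more core voltage
```

A ~16% clock bump costing ~76% more power implies the chip is already well past the efficient part of its voltage/frequency curve, which supports the point about the odds of faster clocks.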

marx

Hmmm… then you’re the only one in the world who can’t OC this CPU higher than 5.0GHz… wooow, the only person in the world!!!! You are a natural talent for overclocking :D

Joel Hruska

Hardware Canucks couldn’t manage 24/7 stability with all cores at 5GHz on air or a CLLC.

The only valid overclock is one that’s 24/7 stable in all workloads and instances. I don’t care if OC institutions will validate a 30-second run of SuperPi.

I can believe a handful of chips do 5.2GHz. You show me people hitting 5.5-5.7GHz on an FX-9590 on a *regular* basis, on air, and I’ll take back my statements.

marx

Leave this page and look at the CPU-Z page!!! It’s the most accurate overclock site… And come on, stop doing “copy-paste”!!! Write something original!!!

Rod4Tech

It was definitely within the 70-80C range that you posted. I believe the CPUID HWMonitor application I use showed the graphics card temp fluctuating between 77C-85C. The FX-8320 I got was the 125W TDP one with the Vishera core. I think its stock speed is 3.5GHz with a turbo to 4.0GHz. However, I tweaked the BIOS settings to turn off Turbo and settled on a stable overclock of 4.3GHz. I read some reviews of the newer FX-8320E and FX-8370E CPUs, which natively run a lower 95W TDP, but the decreased stock clock speed didn’t sit well with me.

Joel Hruska

I assume it absolutely has to be both DMS-59 and low profile? That limits you quite a bit.

Even so, a better cooler should be possible. 75-80C, however, *should* be ok. Most modern GPUs run that warm under heavy load.

Rod4Tech

“I assume it absolutely has to be both DMS-59 and low profile? That limits you quite a bit.” Yes, because I need the dual-monitor display mode that comes with DMS-59, which allows for “extended desktop” functionality. And yes, the card has to be low profile because of the slimline mATX case and the Asus mATX logic board I went with to build this system. I suppose the route I took to build this system (small footprint/slimline) is also what ultimately limits my options insofar as performance and expandability. BTW, here is a link showing my CPUID HWMonitor results after a successful stress test.
I was wrong about my graphics card temps, as you can hopefully see from the link. That GT 630 hit 96C! -sniffles- -whimper- lol

Rod4Tech

[IMG]http://i62.tinypic.com/aviixx.jpg[/IMG]

Joel Hruska

Yow! And that’s your idle temp? You’re not running a benchmark on it?

Let me know if you absolutely have to use DMS-59 and if you can get away with an adapter. We have more options if we can ditch that standard.

The 860K (which is NOT an FX CPU) goes over 6.0GHz; here is proof from the CPU-Z site. And if the 860K goes that high, when it doesn’t have as much overclocking headroom as, say, an FX 6300, how big an overclock can the FX 6300 manage??? Do you want to see??? Do you want to see an 8-core CPU overclocked up to 8.0GHz? STABLE!!! On a water-cooled system, not “nitro” like you say.. haaa-haaa :))))

Joel Hruska

Nope. “Validated” in the OC world doesn’t mean “24/7 stable.” It means “it ran long enough to get some screenshots and a quick test.”

You’re incoherent and annoying. I’m done.

marx

OK, if you say so… I know what I see every day in the CPU-Z competition… Claim whatever you want…

Tim Tian

Sorry for feeding the troll, by the way. It’s just too hard to resist.

Joel Hruska

Oh, ok. whew.

Welllllll….have you thought about a DMS-59 adapter? Because that would let you get a better GPU.

Rod4Tech

Interesting proposal. However, wouldn’t the graphics card chipset have to support it? I already have two DMS-59 Y-adapter cables, but they only work with the proprietary DMS-59 pinout on the card along with the chipset features. Unless I’m misunderstanding, are you proposing there may be some DMS-59 “converter adapters” out there that will work with more current low-profile graphics cards? That would certainly be a great option for me if it’s possible. I’ll check around. Please let me know if you find anything. Thanks Joel!

Joel Hruska

What monitors are you driving with this?

I’m wondering if some Mini DisplayPort or HDMI ports plus adapters would work.

Hey Joel, I upgraded my rig with the aforementioned GeForce GTX 750 Ti and got the following consistent results on my Valley benchmark. https://drive.google.com/file/d/0B7_WJW-ZvDugaW9penlDN1pSN2c/view?usp=sharing. Thanks for your help, bro. I got the card on sale at Micro Center for only $130. The mail-in rebate will knock off another $20 that I’ll get back. -smiles- Happy times!

Hey Joel. Here is a YouTube demo of that same low-profile/slimline card that I think I’m going to purchase. https://www.youtube.com/watch?v=7gtuAx2ivDY. BTW, my monitors are two Dell 17″ monitors running at 1280x1024 resolution in 32-bit color depth.

marx

The same FPS!! :) And AMD costs 5 times less than Intel! So why the ultra-high price???? Same performance.. On “performance per dollar,” AMD always wins…

AMD vs Intel – Our 8-Core CPU Gaming Performance Showdown! | Technology X
As a gamer and hardware enthusiast that spends a lot of time in various PC gaming communities, one of the longest on-going debates has been the best…
TECHNOLOGYX.COM|BY DONNY STANLEY

And??? This is not true, huh??? :)))))

Tim Tian

Exactly zero sensible people buy a 5960X for gaming. This will be updated when necessary. 4690K FTW.

marx

4690K vs 5960X??? Really??? The 5960X is a BEAST! You compare these two CPUs??? You are 100% CRAZY! And stupid…

Tim Tian

I compare the 4690K to the FX 9590. And win.

marx

And nobody buys the 5960X.. Why??? PRICE!!!! Like ALL Intel CPUs… On performance per dollar, Intel is a total loser… The FX 8370 gives better performance and higher FPS than even the 4690K! Which is a little baby next to the 5960X.

Tim Tian

Uh huh. I believe ya. You have the total credibility award. Now to compare the 4690K or Xeon 1231V3 against an FX 9590. Bring it on. You did look at the link I posted, right? The one with the 3570K totally owning the 9590?

marx

Intel=Titanic Iceberg=AMD “ZEN” CPUs

Tim Tian

“AMD is going to be competitive” does not mean “Intel is going to have less market share than AMD” or “Intel will have lower profits than AMD.”

AMD vs Intel – Our 8-Core CPU Gaming Performance Showdown! | Technology X
As a gamer and hardware enthusiast that spends a lot of time in various PC gaming communities, one of the longest on-going debates has been the best…
TECHNOLOGYX.COM|BY DONNY STANLEY

Tim Tian

That is the third time you posted the same thing. Why not compare an Opteron 6370P to a Xeon 1231V3? Wanna bet the Xeon has better performance per $?

Let’s see, what do I know about CPUs?
I know that the 2012 Bulldozer design has approximately 50% the IPC of Intel’s Haswell.
I know that the primary bottleneck in gaming, as demonstrated by the SLI GTX 970s, is the GPU.
I know that AMD’s CMT design gives a maximum of 50% extra performance in the real world, none if you use the FPU.
I know that current games struggle to use beyond 4 cores. When they do, let me know.
I also know that you’re probably not as stupid as you’re pretending to be. But whatever.
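The IPC and CMT figures in the list above can be turned into a back-of-the-envelope throughput comparison. A minimal sketch, treating the commenter’s numbers (roughly 50% of Haswell’s IPC, at most ~50% extra per module from CMT) as the only inputs; the clock speeds and all results are illustrative, not measured benchmarks:

```python
# Back-of-the-envelope relative throughput, using the figures from the
# comment above: Piledriver-era cores at ~50% of Haswell IPC, and CMT
# adding at most ~50% throughput per module from the second integer core.
# All numbers are illustrative inputs, not measured benchmark results.

FX_IPC, FX_GHZ, FX_MODULES, CMT_FACTOR = 0.5, 5.0, 4, 1.5
INTEL_IPC, INTEL_GHZ, INTEL_CORES = 1.0, 3.9, 4

# Single-thread: only one core matters, so IPC x clock decides it.
fx_single = FX_IPC * FX_GHZ           # one Piledriver core at 5GHz
intel_single = INTEL_IPC * INTEL_GHZ  # one Haswell core at 3.9GHz

# Fully threaded best case: every module/core kept busy.
fx_all = FX_IPC * FX_GHZ * FX_MODULES * CMT_FACTOR
intel_all = INTEL_IPC * INTEL_GHZ * INTEL_CORES

print(f"single-thread: FX {fx_single}, Intel {intel_single}")
print(f"all-threads:   FX {fx_all:.1f}, Intel {intel_all:.1f}")
```

Under these assumptions the two land close together in the fully threaded best case, while the Haswell part holds a ~55% single-thread lead, which is roughly why games that use at most four threads favor it.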

Dean Vukovic

AMD did great work with this CPU, but it consumes too much power. Intel’s CPU is a little better, though not by much, and I don’t think AMD will introduce another 8-core CPU in 2015, as they are preparing the new AM4 socket, new CPUs, and new graphics cards for next year. The AM3+ socket is old; they should have replaced it 3 years ago, and AMD’s CPUs would be much better for it.

Randy

Wow. I think I lost several IQ points just trying to decipher marx’s horrible grammar. It seems you like to belittle and even offend others by calling them a “taliban.” This thread was actually in circulation at my current employer, Intel. I can’t tell you how many of us in the office laughed at all your claims. As a process engineer at Intel, let me ask you something… With that excellent grammar, and all the bullshit you’ve been talking for the last 2 years, what college did you attend? You’re quick to judge others on here without knowing better. Grow up, little boy.

Tom Rychlewski

So… What is the best performance for the dollar, assuming it’s a gaming machine?

Lawrence Grossman

Marx, you are a bigot and a coward. It is easy to sound tough when you hide behind the internet. If amitoj walked up to you, you would shit your pants.
Sempre Vigilis

Kim Lassen

Can you provide the hardware specs for both systems, so I can see what the hardware differences are, like bus speeds and PCI Express speeds, etc.?
