In addition to ECC, what's VT-d (a.k.a. IOMMU) support like on Intel desktop platforms these days? I believe Intel finally offers VT-x on all (most?) of their desktop/laptop CPUs, but I have no idea what the state of VT-d support is. (Not that VT-d matters much to the vast majority of PC users... VT-x is the one you really want to watch for.)

The years just pass like trains. I wave, but they don't slow down.-- Steven Wilson

Well, comparing the Core i5-3570K to the FX-8350 is kinda like comparing apples to oranges. They're very different architectures, and each has its own interesting features that, on the whole, put these chips more or less on equal footing. I mean, I can't say my FX-8350 is better than the 3570K even though it can outperform it handily in some apps (I'm sure more folks will agree with that than not). If I had a 3570K, I couldn't bash FX-8350 owners either, because the FX-8350 also has some big guns that let it pull ahead.

These are two very interesting processing beasts, really. Never mind the Core i7-3770K, which sits about $100 above these two. Moving forward, assuming more and more devs are able to take advantage of as many cores as possible, the FX may well pull ahead. The potential is there, all right, but on the downside, people want performance now. It really comes down to which trade-offs a given user is willing to make.

On the platform side, I think what Logan said in the video makes sense. Intel has PCIe 3.0, which most graphics cards can't fully utilize. AMD has 6 Gbps SATA all around, which most hard drives can't utilize. The differences are numerous enough that it can be hard for an inexperienced buyer to sift through the bullet points and decide which platform suits him/her best.

If people stick with you just because you have a Rolex on your wrist, you can bet losing them is as OK as losing an Invicta. And if they stick with you even if you only have an Invicta, losing them is as OK as losing a Rolex.

I wouldn't get an FX-8350 if gaming were my primary concern, and I say that as a guy who's mostly bought AMD for the last 14 years. Most games just don't scale gracefully to more than four cores... But as has been reiterated countless times before, the eight-core Piledrivers are amazing for heavily multithreaded workloads. I'm considering building one to fit the description of "snarling computational node" for tasks ranging from Blu-ray ripping with Handbrake to more esoteric scientific computing applications like TOUGH2-MP (http://www.tough2.com/index2.html). It'll also be pretty hard to beat for virtualization, especially when loaded out with enough RAM. It's a deeply weird product, but as it turns out, it's very well-suited to my needs. How I wish I could find a 95-watt FX-8300, though...

Concupiscence wrote:I wouldn't get an FX-8350 if gaming were my primary concern

Agreed, but if you bought an FX-8350 you'd still be able to game on it. And to be honest, if gaming were the main concern, then the FX-6300 and the i3 should be at the top of the list, not the FX-8xxx, i5, or i7.

I am beginning to suspect you've had enough caffeine.

And yeah, it's capable for gaming. From the benchmarks I've seen, it's at least on par with my Lynnfield i5-750 for those purposes. But for a large number of more weakly threaded tasks, a modern quad-core Intel chip will be a better bargain. There's no shame in admitting it.
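The "doesn't scale past four cores" point can be made concrete with Amdahl's law: if only part of a game's per-frame work parallelizes, cores five through eight buy very little. A quick sketch, where the 70% parallel fraction is purely an illustrative assumption, not a measured number for any real engine:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only a fraction of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Hypothetical game engine: 70% of per-frame work parallelizes.
for n in (1, 2, 4, 8):
    print(f"{n} cores: {amdahl_speedup(0.7, n):.2f}x")
```

At these assumptions, going from four cores (~2.11x) to eight (~2.58x) adds only about 22%, which is why an eight-core chip shines mainly in workloads that are nearly fully parallel.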

Concupiscence wrote:From the benchmarks I've seen it's at least on par with my Lynnfield i5 750 for those [gaming] purposes

I have a quartet of i7-860s sitting around with ASUS P7P55 boards and, I think, 16GB of RAM each. Whilst I was tempted to set one up at home as a Server 2012 Hyper-V playpen (we're on vSphere atm), a colleague has brought in an FX-8150 he's replacing with a 3770K, and he doesn't want much for it either.

I'm tempted to take it home instead of the Lynnfield just so that I have an AMD reference point of my own. I've built Llanos for other people, but the last AMD processor I owned was Socket 939.

Some people ask me why I have always enclosed my signature in spoiler tags; There is a good reason for that, but I can't elaborate without giving away the plot twist.

Concupiscence wrote:...Most games just don't scale gracefully to more than four cores...

This actually helps AMD, since it only has four relatively weak FPUs, whereas Intel has four strong FPUs with enough headroom to make Hyper-Threading perform above par in a lot of FPU-intensive loads.

The real issue for hardcore gamers is two-fold: maximum frame latencies and overclocking. SB and IB overclock very, very well and already have lower maximum frame latencies almost across the board.

Someone mentioned that anything 60 FPS or above is a "pass," but I think a 99th percentile below 16.7 ms is more desirable (as well as much harder to achieve). I personally will take all the FPS I can get; I too have a 120 Hz monitor, and it needs to be fed!
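The 99th-percentile metric is easy to compute from a frame-time capture, and a toy example (the frame times below are made up, not from any real benchmark) shows why it's stricter than an FPS average:

```python
def percentile_99(frame_times_ms):
    """99th-percentile frame time: 99% of frames finish at or below this."""
    ordered = sorted(frame_times_ms)
    # Nearest-rank method: smallest value covering 99% of the samples.
    rank = -(-99 * len(ordered) // 100) - 1  # ceil(0.99*n), 0-indexed
    return ordered[rank]

# Hypothetical capture: mostly smooth 14 ms frames, plus a few spikes.
times = [14.0] * 97 + [20.0, 25.0, 40.0]
p99 = percentile_99(times)
print(f"99th percentile: {p99} ms -> {'pass' if p99 <= 16.7 else 'fail'}")
```

This run averages about 14.4 ms per frame (roughly 69 FPS), so it would "pass" a 60-FPS-average test, yet its 99th percentile is 25 ms. Those occasional spikes are exactly what the latency-focused metric is designed to catch.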

1) The video in the OP - something seems off. I looked around a little trying to figure out whether something about his setup was different from other reviews, but it's not the OS (Win 7 vs. Win 8) and it's not the games. The results for the same games are just quite a bit different from other reviews, even when looking at systems as similar as possible.

2) Having said that... yes, the FX-8350 is fine for gaming. It's broadly in the same league as Sandy Bridge CPUs (depending on game, settings, etc.), and by that I mean that in actual use the difference probably isn't noticeable. Although it sometimes falls behind by a lot, its performance is still more than good enough. As for the platform, comparing motherboards feature-for-feature, prices are generally a wash unless you need more than two SATA III ports.

So, in summary... arguments can get a little silly, the video's conclusions seem off because its results differ notably from every reputable site I looked at, and the OP shouldn't regret the purchase.

Last edited by MadManOriginal on Wed Feb 20, 2013 4:18 pm, edited 1 time in total.

Is there a good reference guide to the Bulldozer / Piledriver FPU design? I heard its SIMD execution was 256 bits wide and not bad, other than the fact that it's 16 stages deep... which, clock for clock, still puts Piledriver miles ahead of the Prescott monstrosities of yore.

Concupiscence wrote:Is there a good reference guide to the Bulldozer / Piledriver FPU design?

There's some info on the original Bulldozer FPU design in this article. I'm not sure what the differences are between Bulldozer and Piledriver. I believe there were supposed to be some FPU efficiency tweaks in Steamroller; I'm not sure if any of those made it into Piledriver.

Concupiscence wrote:I heard its SIMD execution was 256 bits wide and not bad, other than the fact that it's 16 stages deep... which, clock for clock, still puts Piledriver miles ahead of the Prescott monstrosities of yore.

Well, one would hope that it is a better design, given that it is nearly a decade later!

It's good to see some people finally putting results into context. It's a shame it's taken four months for this to happen, though. That's sadly time AMD didn't have.

The video author doesn't list his methodology, but we can assume it's similar to TR's recorded level run-throughs. The game list is also interesting. If Trine 2 is such a demanding game, why don't we see it used more? Apparently it's great in Metro 2033? Sadly, TR dropped that benchmark. It really makes you wonder how much benchmark numbers vary based on workload, and how skewed the results become if only a couple of games are tested. Even if the method is sound, it doesn't matter if the items tested don't represent a broad enough spectrum of things you want to test. Anandtech only tested six games and TR only four in their articles, all done in Windows 8 as well, which is still silly IMO (the majority of users are still on Win 7, and I'm sure they will be for quite some time, just like with XP and Vista).
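The small-game-list problem is easy to demonstrate with arithmetic. Using the geometric mean that reviewers typically favor for overall scores, and entirely made-up per-game performance ratios (chip A relative to chip B), one outlier title in a four-game suite swings the verdict:

```python
from math import prod

def geomean(values):
    """Geometric mean, the usual way to summarize per-game ratios."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical per-game performance ratios (chip A / chip B).
balanced_suite = [1.05, 0.95, 1.02, 0.98, 1.00, 1.01]
with_outlier   = [1.05, 0.95, 1.40, 0.98]  # one heavily skewed title

print(f"six games:  {geomean(balanced_suite):.3f}")
print(f"four games: {geomean(with_outlier):.3f}")
```

With these invented numbers, the six-game suite says the chips are a dead heat (~1.00), while the four-game suite with one favorable title says chip A is ~8% faster, from the exact same underlying hardware.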

There is no mention of power consumption, which is great IMO. That issue seems hugely blown out of proportion and is never put into a context applicable to most users. The only places power consumption really matters are mobile applications, where you rely on battery life, and server farms. $40 can buy you a lot of power in most places. It's all about the $/performance.

I'm personally interested in the streaming benchmarks he mentioned, which are very applicable to me (and a lot of other users around here). Sadly, no such results were available back when I bought my 3570K, and I'm starting to second-guess my purchase now. I suspected an 8350 would be better at balancing a streaming workload, since it's highly multithreaded, but I had no definitive proof and couldn't find any 'streaming' benchmarks at the time.

Bensam123 wrote:There is no mention of power consumption, which is great IMO. That issue seems hugely blown out of proportion and is never put into a context applicable to most users. The only places power consumption really matters are mobile applications, where you rely on battery life, and server farms. $40 can buy you a lot of power in most places. It's all about the $/performance.

It's not necessarily *just* the cost of the power, though. A more power-hungry CPU may also require a beefier PSU and HSF and better case ventilation, especially if the intent is to overclock. That potentially eats into some of the cost savings.

That said, I (mostly) agree with you. Unless the CPU is continuously being pushed hard (e.g. Folding@home, render farm, etc.), power consumption under full load matters less than idle power, since most of the cores probably spend the majority of their time doing nothing. And once you start looking at idle power, the CPU is a smaller share of the overall equation.
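The dollars involved here are easy to bound with a quick sketch. The wattage delta, daily load hours, and electricity rate below are all assumptions to plug your own numbers into:

```python
def yearly_power_cost(watts_delta, hours_per_day, dollars_per_kwh):
    """Extra electricity cost per year from a higher-draw CPU."""
    kwh_per_year = watts_delta / 1000.0 * hours_per_day * 365
    return kwh_per_year * dollars_per_kwh

# Assumed: 50 W higher draw under load, 4 h/day at load, $0.12/kWh.
cost = yearly_power_cost(50, 4, 0.12)
print(f"${cost:.2f} per year")
```

At those assumed figures the gap works out to under $10 a year, so it takes several years of heavy use before the power difference eats a $40 price advantage, which is the point being argued, with the caveat above that PSU and cooling costs are a separate line item.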

It's probably because the price difference isn't significant when considering price/performance (just doing a straight GHz ratio). If you're looking at a 3470, $15 more for a 3570 is almost a no-brainer.

Anandtech's charts are a bit misleading, since they're run at lower resolutions; for the most part, the only time the CPU becomes the bottleneck is when the resolution is low along with low AA and AF. For single-GPU tests, I would much rather see the benchmarks with 4x or 8x AA and 16x AF at 1080p and above. I believe that would even out most of the Anandtech comparisons of Intel and AMD. I'm not talking GTX 690s and 7990s, just true single-GPU cards down to the HD 7850 and GTX 660... anything slower than those, and I don't think there will be much difference.

When it comes to multi-GPU setups... I'm pretty sure the Intel CPUs fare much better even with Sandy Bridge's PCIe 2.0, so we can leave Ivy Bridge's PCIe 3.0 out of it. I believe the reason is that the PCIe lanes come straight from the Intel CPU rather than going through a northbridge.

After being an AMD/ATI person from a Thunderbird 900 up to a 4800+ X2 on Socket 939, I switched to Intel one year and eleven months ago, and I think I made the right choice. Who, after pretty much two years, can say their P67 2500K/2600K is still a top gaming CPU/platform? Three to five years ago, you couldn't go six months without dropping at least 25% in performance. I believe AMD's slacking helped me a heck of a bunch. Now, if Intel had soldered the IHS onto the Ivy Bridge dies like they did with Sandy Bridge, I think we'd have a heck of a lot of 5 GHz Ivys running around. TIM of any kind just doesn't compare with solder; it just doesn't spread the heat across the IHS as well. I also think Intel did this so they could sell off stock of Sandy and Sandy-E CPUs... I just wish we could get Intel to solder on a dozen or so 3770K IHSes like they did with the Sandy chips, so they could be properly overclocked.

The latency graph suggests that anyone who games exclusively, cares about fluid screen motion, and is picky about the slightest bit of gaming latency, but is still on a budget and doing a new build (new motherboard), should be looking at a chip like a lower-priced i5. Not the i3, in my book. I say don't waste money on a two-core; spend the extra $50.

But if you do something else a lot besides those kinds of games (say chess analysis, for instance), then you could look for benchmarks of that particular use to help you choose.

One thing I do like about the 8350 is that its multi-core chess analysis speed is very close to an i7-3770, but... cheaper. You can take the $100 or so and put it in your pocket, or into something else, like an 840 Pro.

I don't think a PSU or a larger cooler is as much of a problem as people make it out to be. I know TR has listed that a few times, but really, almost all modern heatsinks can dissipate the 130 W TDP of an 8350. Unless you're overclocking, but that throws any power-efficiency lead for IB out the window due to leaky transistors. I mean, you can still use a stock heatsink with an 8350... that in itself tells you how little it means.

If an extra 50 W throws your PSU into a tantrum, I'm pretty sure it has more problems than your processor. You could tack on other frivolous things to justify buying a chip with lower power consumption, like the noise of said cooler, or the cost of heating a house you cool with AC in the summer (which works inversely in the winter), but it's really just another way of differentiating the results (that's all it amounts to).

I honestly don't have any problem with people saying a 3570K is the superior chip for a lot of scenarios (if I made it seem that way), but for the money, the 8350 is amazing. I do really appreciate the streaming results in this review. I've asked TR for some, but they have yet to oblige. It's a rather unique workload, and streaming is growing quite fast.

Bensam123 wrote:I don't think a PSU or a larger cooler is as much of a problem as people make it out to be. I know TR has listed that a few times, but really, almost all modern heatsinks can dissipate the 130 W TDP of an 8350. Unless you're overclocking, but that throws any power-efficiency lead for IB out the window due to leaky transistors. I mean, you can still use a stock heatsink with an 8350... that in itself tells you how little it means.

If an extra 50 W throws your PSU into a tantrum, I'm pretty sure it has more problems than your processor. You could tack on other frivolous things to justify buying a chip with lower power consumption, like the noise of said cooler, or the cost of heating a house you cool with AC in the summer (which works inversely in the winter), but it's really just another way of differentiating the results (that's all it amounts to).

Grasping at straws here, eh? Look, if the power consumption difference doesn't matter to you, that's perfectly fine! Just like it's fine that it does matter to some people, regardless of their reasons: be it increased fan noise, increased overall power draw from the CPU/case fans and room A/C, the extra $$$ spent on more powerful cooling hardware and a bigger PSU, or just simple obsessive behavior about the "efficiency of things" (which has nothing to do with "saving $$$"). Can't we just leave it at that?

My subscription allows you people to exist on this site and makes me a better human being than you'll ever be

Can't. I'm playing System Shock 2 right now (it's making a comeback). This game uses the Dark Engine, the same engine Thief 1 and 2 use, and it won't run properly on multi-core (or Hyper-Threading-enabled) processors. It's fun to set the game's CPU affinity to one of the FX's eight cores, though.

PS - I actually used SS2Tool 5.0 to install the game under 64-bit Windows 7, because SS2 won't easily install on it otherwise. I think the patch automatically sets the game to run on only one core. Well, SS2 can pick any core from 1 to 8. Who would've thought of that back in 1999? I guess System Shock 2 loves my FX-8350 too!
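For reference, pinning a process to a single core can also be scripted. A minimal sketch using Python's os.sched_setaffinity, which only exists on Linux; on Windows (where SS2 actually runs) you'd use Task Manager's "Set affinity" or the psutil library instead:

```python
import os

def pin_to_single_core(pid: int = 0):
    """Restrict a process (0 = this one) to the lowest-numbered core
    it is already allowed to use.

    Returns the new affinity set, or None where the call is unavailable
    (e.g. on Windows or macOS).
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    allowed = os.sched_getaffinity(pid)        # cores the process may run on
    os.sched_setaffinity(pid, {min(allowed)})  # pin to just one of them
    return os.sched_getaffinity(pid)

print(pin_to_single_core())  # e.g. a one-element set like {0}
```

This mirrors what an old-engine compatibility patch does for you automatically: the process only ever sees one core, so timing code written for 1999-era single-core machines behaves itself.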

I'd assume GOG made a patch to make it easier for gamers to install and run the game. I still have my old copy here, so I don't really wanna grab it from GOG again. I just needed to do a bit of research and came across SS2Tool 5.0. I think there's also a patch for SS2 made by some French guy; not sure if that's the same as SS2Tool 5.0, though.

GOG may have the distribution rights these days, but the game actually belongs to an insurance company called Star Insurance. That's a pretty weird place for a game title to end up.

I highly recommend this title. Even by today's standards, I think it's awesome.

AMD64Blondie wrote:Although this is an AMD vs. AMD price comparison... back in August 2005 I paid $405 shipped for a brand-new AMD Athlon 64 X2 3800+.

In November 2011 I paid $270 shipped for a brand-new AMD FX-8150.

(I had a few other AMD CPUs between those two: a Phenom 9500, Phenom 9950, Phenom II X4 955, and finally a Phenom II X6 1090T before the FX-8150.)

Amazing how far CPUs progressed in 6 1/2 years. (Hard to believe it was 6 1/2 years between buying those two CPUs.)

I usually get annoyed by necros, but I did enjoy reading through this thread again. I'm primarily a gamer when it comes to really using my computers, so I'm Intel (for now), but AMD's technology intrigues me.

The 'Bulldozer' architecture shows tremendous promise. The single-thread performance definitely needs tweaking, which would go a long way towards reaching parity with Intel CPUs on a number of workloads, and the power consumption definitely needs to be addressed, since it considerably affects OEM design wins as well as overclocking headroom.

Price, of course, is really meaningless in the theoretical; it's based on the market, not what the part costs to produce, and we know AMD will (and needs to!) charge whatever they can get for these CPUs.