
AMD's new line of Ryzen 3000 desktop CPUs will benefit from the same 7nm manufacturing process as the company's new Navi-powered GPUs. Much of the tech community's hype is for the biggest and baddest of the bunch: the 16-core, 32-thread Ryzen 9 3950X. But there's an entire new line ranging from the $749 3950X down to the relatively modest $199 Ryzen 5 3600—and AMD is gunning for Intel every step of the way.

I don’t think there’s any reason people would buy an Intel processor after we [launch the Ryzen 3000 line].

Travis Kirsch, AMD Client Product Management Director

What's really interesting is, this time around, AMD is not just pitching cheaper parts and "good-enough" performance—the company is claiming top-dog stats, along with thermal and power efficiency wins. The Ryzen 7 3700X is listed at $329, while Intel's i7-9700K is currently available for about $410. But according to AMD's slides, the Ryzen part also outperforms the i7-9700K across the board, and it draws less power and produces less heat while doing so. Even when comparing absolute flagship CPUs, the monstrous 16-core/32-thread Ryzen 9 3950X boasts a 105W TDP, while Intel's 16-core/32-thread i9-7960X runs at a 165W TDP.

If the data here is reasonably accurate, the savings in power and cooling costs over the lifespan of a system will only add to the advantage of the Ryzen's already lower purchase price.
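That lifetime-savings claim is easy to sanity-check with back-of-the-envelope arithmetic. Every input below (hours of use per day, electricity rate, system lifespan, and treating TDP as average draw under load) is an assumption for illustration, not a measurement:

```python
# Back-of-the-envelope comparison of lifetime electricity cost for two CPUs.
# Assumes the CPU draws its rated TDP whenever in use, which is a rough
# simplification; real average draw varies with workload.

def lifetime_energy_cost(tdp_watts, hours_per_day=4, years=5, usd_per_kwh=0.13):
    """Estimate electricity cost of running a CPU at its TDP for `years`."""
    total_hours = hours_per_day * 365 * years
    kwh = tdp_watts * total_hours / 1000
    return kwh * usd_per_kwh

amd_cost = lifetime_energy_cost(105)    # Ryzen 9 3950X rated TDP
intel_cost = lifetime_energy_cost(165)  # Core i9-7960X rated TDP

print(f"AMD:        ${amd_cost:.2f}")
print(f"Intel:      ${intel_cost:.2f}")
print(f"Difference: ${intel_cost - amd_cost:.2f}")
```

Under these made-up-but-plausible usage assumptions, the 60W TDP gap works out to roughly $57 in electricity alone over five years, before counting cooling.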

Cooler and quieter: in a reversal from what we've come to expect, AMD says its new flagship CPU is more power-efficient than Intel's.

One thing does remain constant in the Intel-vs-AMD wars: it appears that Intel will still enjoy a small single-thread performance advantage, while Ryzen runs away laughing with massively multithreaded benchmark wins thanks to its greater number of threads at the same price points. (For example, the Ryzen 7 3700X boasts 16 threads to the i7-9700K's eight.) This is generally little or no help with gaming benchmarks, which tend to block on single-threaded performance and benefit very little from more than four CPU threads—but AMD figured out a way to make all those extra threads shine in a gaming benchmark anyway.

Sure, you don't need a ton of threads to game effectively... but what if you want to game and stream at high res simultaneously?

AMD E3 Next Horizon Gaming slide deck

Either Intel's 8-thread i7-9700K or AMD's 16-thread Ryzen 7 3700X will play Tom Clancy's The Division 2 at 1440p at an effortless 90fps... but according to AMD's data, effectively streaming the experience live is a different story entirely. Twice as many threads are at the Ryzen's disposal for simultaneous video compression. Granted, AMD is stacking the deck here with extremely high-bitrate, high-quality compression that may or may not be strictly necessary for a game stream—but it's certainly desirable, and what's possible tends to set the standard for what's expected going forward.
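A toy model makes the thread-budget argument concrete: if the game itself saturates a fixed number of threads, whatever is left over sets the ceiling for high-bitrate software encoding. The per-thread encode rate and the game's thread count below are made-up illustrative numbers, not benchmark data:

```python
# Toy model: leftover CPU threads determine sustainable software-encode FPS.
# game_threads and fps_per_encode_thread are illustrative assumptions only.

def encode_capacity_fps(total_threads, game_threads=6, fps_per_encode_thread=15):
    """FPS of high-bitrate software encoding the spare threads can sustain."""
    spare = max(0, total_threads - game_threads)
    return spare * fps_per_encode_thread

print(encode_capacity_fps(8))   # an 8-thread chip, i7-9700K class
print(encode_capacity_fps(16))  # a 16-thread chip, Ryzen 7 3700X class
```

In this simplistic model the 8-thread part has only two spare threads for the encoder, while the 16-thread part has ten—which is the shape of the gap AMD's demo was built to show, whatever the real per-thread numbers are.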

More importantly—for those of us who want to play the games even if we don't stream them—this also hints at a tremendously improved experience gaming on an "everything box." Such a setup may have email clients, Web browsers, antivirus, and more running in the background.

For those of you who are already AMD fans, the news gets even better: the new product line still uses the AM4 socket, and the company says you can expect Ryzen 3000 CPUs to be drop-in replacements for existing Ryzen 2000 CPUs—no motherboard swap needed.

The new AMD chips support the AVX2 extension, which should allow 256-element MADs. My question is, does every core support that? Or is it like Intel's AVX-512 units, where there are only two per chip regardless of the number of cores?

Pointing a thermal camera at a heatsink doesn't prove which CPU is running warmer, only which heatsink is pulling off more heat, which, last I checked, is what a heatsink is supposed to do (never mind that pointing thermal cameras at shiny surfaces isn't a valid measurement either). Considering that, converted into Freedom Units, we're talking about the difference between 83.7°F and 80.4°F, the fact that AMD couldn't give a legitimate measurement from a thermal probe or a reading off the CPU should make everyone nervous.

Please, everyone and their brother should be encouraging AMD to supply updated math libraries for Windows. They already offer them for *nix systems. Using these Zen-optimized libraries has a huge impact where available. It would be nice if Windows users could benefit too.

Beat me to it. That picture is a perfect example of misleading marketing. First, it's a heatsink, which doesn't tell you much. Second, unless you look at the temperature scale (a 1.8°C difference), it gives a very misleading impression of the heatsink's temperature (which, again, doesn't matter to begin with). The scale is set *very* specifically to make the first image misleadingly appear warmer than the second, by putting the red-to-white transition just above the heatsink on the left. You could touch these and wouldn't be able to tell the difference between temperatures that close.

The question has already been asked: where are the mobile parts? I honestly don't care how they stack up against the Intel mobile parts. Being an AMD fan tried and true, I've been waiting for them to finally get off their asses and put out a top-end CPU so that I can replace my 10-year-old laptop.

Well, it looks like they're using the same heatsink for both CPUs, so if one of the heatsinks is cooler, it stands to reason that the CPU is running cooler. It makes sense with the lower claimed TDP on the AMD CPU. Not to mention Intel has been happy to let CPUs run into the 90s and say it's fine. Also, that Intel heatsink looks significantly more saturated.

The 3950X is not a 135W part; it is a 105W part. And not every motherboard and chipset will be able to support the new CPUs. A lot of them will, so it's best to check beforehand. As for single-core performance, saying Intel will still hold the lead is a bit premature. I think it will be close enough to not matter anymore.

The new AMD chips support the AVX2 extension, which should allow 256-element MADs. My question is, does every core support that? Or is it like Intel's AVX-512 units, where there are only two per chip regardless of the number of cores?

AVX-256 is supported throughout the entire CPU core. The FPU was widened to make use of it. Load/Store bandwidth was doubled to support it. Also, AMD doesn't have a guaranteed multiplier hit when AVX2 is invoked.

(This doesn't mean the CPU won't down-clock if you make extended use of AVX2, but AMD has not followed Intel's lead and set a static multiplier to reduce CPU clock when performing AVX2 code).
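For readers less familiar with the terminology: a 256-bit AVX2 register packs eight 32-bit floats, and a single fused-multiply-add instruction operates on all eight lanes at once. Here's a plain-Python sketch of that lane-wise behavior; the function is purely illustrative, not any real API:

```python
# A 256-bit AVX2 register holds eight 32-bit floats; a single FMA instruction
# computes a*b + c across all eight lanes in one operation. This sketch
# models that lane-wise behavior in plain Python.

LANES = 256 // 32  # eight float32 lanes per 256-bit register

def fma256(a, b, c):
    """Lane-wise fused multiply-add over eight-element 'registers'."""
    assert len(a) == len(b) == len(c) == LANES
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]

a = [1.0] * LANES
b = [2.0] * LANES
c = [0.5] * LANES
print(fma256(a, b, c))  # eight lanes of 1.0*2.0 + 0.5 = 2.5
```

The commenters' point above is that on Zen 2 this full-width datapath exists in every core, rather than in a couple of dedicated units per chip.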

Seeing how AVX-512 is apparently a bigger deal for servers than home users this might get them in the door for larger enterprise deployments.

Zen 2 is going to give AMD an awful lot of fame in the enthusiast market. Ice Lake mobile looks, honestly, pretty impressive, but with any desktop improvements MIA from the Intel side (the 9900KS barely counts), it's easy to see AMD winning quite a bit of market share over the next year or so among enthusiasts.

Minor quibble: lots of mini mistakes in the article that could probably be corrected for clarity. That said, the similarities between AMD's and Intel's product lines make them easy to make. Examancer says it well above, so I'm not going to rehash them; just my hope (as well!) that they're fixed!

In addition to my previous comment: FLIR states that their cameras have, in best-case controlled scenarios (extremely controlled circumstances, a blackbody, their highest-end cameras, and precise calibrations), a 1°C margin of error. The standard scenario (a normal camera, an uncontrolled environment, unknown emissivity, etc.) is 2°C. This is more than the difference in these images, and unless they controlled the environment extremely tightly, properly accounted for material emissivity, and used the highest-end FLIR equipment, the images are meaningless, as they fall within the margin of error.
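Running the commenter's numbers bears this out: the on-screen Fahrenheit readings convert to a Celsius delta below the 2°C standard-scenario accuracy figure cited above (though slightly above the 1°C best-case figure):

```python
# Checking the commenter's arithmetic: convert the on-screen Fahrenheit
# readings to a Celsius delta and compare against the stated 2C accuracy.

def delta_f_to_c(delta_f):
    """Convert a temperature *difference* from Fahrenheit to Celsius."""
    return delta_f * 5 / 9

observed_delta_c = delta_f_to_c(83.7 - 80.4)
margin_of_error_c = 2.0  # standard-scenario accuracy cited in the comment

print(f"Observed delta: {observed_delta_c:.2f} C")
print(observed_delta_c < margin_of_error_c)
```

The roughly 1.8°C gap between the two heatsinks is inside a 2°C measurement uncertainty, which is the core of the objection.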

The new AMD chips support the AVX2 extension, which should allow 256-element MADs. My question is, does every core support that? Or is it like Intel's AVX-512 units, where there are only two per chip regardless of the number of cores?

Per Anandtech's deep dive into the architecture, the 256-bit AVX operations are carried out by the floating-point units present in every core. They also apparently don't come with the frequency limitations that Intel's AVX utilization imposes, per AMD.

Ice Lake desktop parts, from what I've gathered, aren't going to be out until mid-to-late 2020. Laptop parts, even if they're good, aren't comparable. Ice Lake will also likely be up against Zen 2 gen 2 at that point (aka the Ryzen 4000 series). If Intel is barely beating out Ryzen with their 2020 chips, they're already behind.

I'm getting a Zen 2 chip. Unsure if it'll be a 3800/3900 or all out for the 3950, but one of them is happening. Already have everything else I need. So, uh, anyone interested in a year-old Ryzen 2200G?

For people talking about thermals: there have already been comparisons of an Intel chip and a Ryzen chip pulling energy under load. I'm failing at Google, else I would link. /cry

We don't know the base frequency on the 4.1GHz chip that Intel announced, because all it said was "up to 4.1GHz Max Turbo." But according to Anandtech, the i7-1065G7 will be a 1.3GHz base / 3.9GHz boost part.

There are a lot of ways to read these results. Perhaps the CPU will maintain higher mid-core turbo frequencies and the actual impact on clock is smaller. Perhaps some of the decrease is because much more power will now be dedicated to the GPU than Intel has ever delegated. Both of these would be fair explanations for the decline that wouldn't point to a major process node miss.

But the straight-line, simplistic comparison is not encouraging. Pulling back from 4.8GHz to 4.1GHz is a decline of 14.6%. That offsets most of the 18% increase in IPC that Intel was claiming to gain from Skylake to Ice Lake. The straight-line implication -- which may not be accurate -- is that Intel only picks up 3-4% from Ice Lake over Coffee Lake, once the frequency decline is taken into account.
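The "3-4%" figure falls out of straight-line subtraction of the clock decline from the claimed IPC gain, which is easy to reproduce:

```python
# The commenter's straight-line arithmetic: clock decline from Coffee Lake's
# 4.8GHz boost to Ice Lake's 4.1GHz, set against Intel's claimed 18% IPC gain.

coffee_lake_boost = 4.8  # GHz
ice_lake_boost = 4.1     # GHz
claimed_ipc_gain = 18.0  # percent, Skylake -> Ice Lake, per Intel's claim

clock_decline = (1 - ice_lake_boost / coffee_lake_boost) * 100
naive_net_gain = claimed_ipc_gain - clock_decline

print(f"Clock decline:     {clock_decline:.1f}%")   # ~14.6%
print(f"Straight-line net: {naive_net_gain:.1f}%")  # ~3.4%
```

Compounding the two effects multiplicatively instead of subtracting them (1.18 × 4.1/4.8 ≈ 1.008) would imply a net gain under 1%, so if anything the straight-line 3-4% estimate is generous.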

What's really amazing is that the new 3000 series matches the top-of-the-line consumer/enthusiast i9 in gaming. Meaning, IPC is on par with or better than Intel's, and for half the price with double the cores. For people who need IPC for work, even better! Real competition on the CPU front is here again. Thank you, AMD, for getting back in the game.

I'm very happy with my original Threadripper build. Though it may get retired for a Zen 2 Epyc.

The latest grist in the rumor mill is that AMD will introduce Zen 2 Threadrippers by the end of 2019, with SKUs up to 64 cores. And they should slot right into your existing TR4 board (with the likely proviso that X399 won't support PCIe 4.0, etc.).

This is great news if it holds true for non-cherry-picked benchmarks done by third parties.

Heat and power draw translate into noise, and having quiet air cooling is important to me. If AMD can offer good gaming performance and higher performance per watt then my next CPU will be a Ryzen.

They didn't release anything about the coolers, etc. Is this with their Wraith Spire coolers vs. some stock Intel heatsink they strapped on, or do both have a Noctua D15 on them?

The main issue is that a lot of games still respond well to clock speed rather than the number of cores. Just the ability to hit 5GHz on the i7-9700K with a beefy cooler makes it faster than the older Ryzens, which are capped at maybe 4.2-4.3GHz even with a beefy cooler.

No doubt Intel is gonna get massacred in productivity suites where multithreading is actually utilized heavily, but clock speed is still pretty much king in gaming.
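Amdahl's law captures why this holds: extra threads only accelerate the parallelizable slice of each frame, while a clock bump speeds up everything, serial parts included. The parallel fractions below are illustrative guesses, not profiled data from any real game:

```python
# Amdahl's law as a rough model of why games reward clock speed over cores.
# The 0.3 parallel fraction is an illustrative guess, not measured data.

def amdahl_speedup(parallel_fraction, n_threads):
    """Maximum speedup from n threads when only part of the work parallelizes."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / n_threads)

# A game loop that is only ~30% parallelizable barely benefits past a few cores:
print(amdahl_speedup(0.3, 4))   # ~1.29x
print(amdahl_speedup(0.3, 16))  # ~1.39x, despite four times the threads
```

Quadrupling the thread count in this model buys under 8% more speed, whereas a 10% clock increase would translate almost directly into 10% shorter frame times, which is the commenter's point about 5GHz parts.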

You are taking what AMD claimed in marketing material, using very suspect slides (if history is any guide), and acting like it's probably true.

Everything you stated should be taken with a massive amount of salt, and I find it very hard to believe an Ars article didn't even include a similar disclaimer (they almost always do). And yes, I see the "AMD stated..." and all of that, but usually Ars articles cover stuff like this with more... I don't know the term off the top of my head... it usually sounds more like "well, they SAY this, but...". I don't feel this article is even in the same ballpark.

EDIT - It would be nice if what AMD claimed was 100% true, though. I just don't take any slide provided by either AMD or Intel without a metric ton of salt - and neither did Ars about two or so months ago.