AMD Shares More Details On The Radeon VII

Yesterday was an exciting day for AMD aficionados: the California chipmaker announced its Radeon VII and demoed its third-generation Ryzen processors. With the new gaming graphics card, AMD is targeting gaming enthusiasts and content creators.

Credit: AMD

Many enthusiasts were praying for a Navi announcement. However, they received a die-shrunk version of Vega 10 instead. The Radeon VII employs AMD's existing Vega 20 graphics processor. In case you haven't been following AMD's discrete graphics card evolution, Vega 20 made its debut with the chipmaker's Radeon Instinct MI60 and MI50 data center graphics cards a couple of months ago. Vega 10 is built on the Graphics Core Next (GCN) 5.0 microarchitecture, while Vega 20 is based on the updated GCN 5.1 revision.

Meet The AMD Radeon VII

Credit: AMD

Vega 10 was manufactured by GlobalFoundries on the company's 14nm FinFET node. It measured 487mm² and contained 12.5 billion transistors. For Vega 20, AMD turned to TSMC and its 7nm FinFET manufacturing process. Although Vega 20 measures only 331mm², the die houses 13.23 billion transistors. Thanks to this die shrink, AMD was able to deliver up to 1.8 times the gaming performance per area and double the memory capacity and memory bandwidth on the Radeon VII in comparison to the Radeon RX Vega 64.
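As a back-of-envelope check on those figures, the quoted transistor counts and die areas imply roughly a 1.56x jump in transistor density from Vega 10 to Vega 20. A quick sketch using only the numbers above:

```python
# Transistor-density comparison between Vega 10 (14nm, GlobalFoundries)
# and Vega 20 (7nm, TSMC), using the counts and die areas quoted above.

def density_mtx_per_mm2(transistors_millions, die_area_mm2):
    """Millions of transistors per square millimetre of die."""
    return transistors_millions / die_area_mm2

vega10 = density_mtx_per_mm2(12_500, 487)  # ~25.7 MTx/mm^2
vega20 = density_mtx_per_mm2(13_230, 331)  # ~40.0 MTx/mm^2

print(f"Vega 10: {vega10:.1f} MTx/mm^2")
print(f"Vega 20: {vega20:.1f} MTx/mm^2")
print(f"Density gain: {vega20 / vega10:.2f}x")  # ~1.56x
```

Note that density alone doesn't translate directly into performance; AMD's "1.8x gaming performance per area" claim also folds in the architectural tweaks and higher clocks the 7nm process enables.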

Credit: AMD

The AMD Radeon VII comes equipped with 60 compute units, and since the graphics card adheres to the GCN standard, this tallies up to 3,840 stream processors, 240 TMUs (texture mapping units), and 64 ROPs (render output units). AMD states that the Radeon VII will feature a 1,450MHz base clock and a boost clock of up to 1,800MHz. That makes the Radeon VII good for up to 13.8 TFLOPS of single-precision floating-point performance.
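Those headline numbers fall straight out of GCN's fixed per-CU ratios (64 stream processors and 4 TMUs per compute unit; the ROP count is configured separately). A sketch of the arithmetic:

```python
# How the Radeon VII's spec-sheet numbers derive from its 60 compute units,
# using GCN's fixed per-CU ratios.

COMPUTE_UNITS = 60
SPS_PER_CU = 64    # stream processors per compute unit in GCN
TMUS_PER_CU = 4    # texture mapping units per compute unit in GCN
ROPS = 64          # configured independently of the CU count

stream_processors = COMPUTE_UNITS * SPS_PER_CU   # 3,840
tmus = COMPUTE_UNITS * TMUS_PER_CU               # 240

# FP32 throughput: each stream processor can retire one fused
# multiply-add (2 FLOPs) per clock at the boost frequency.
boost_clock_ghz = 1.8
tflops = stream_processors * 2 * boost_clock_ghz / 1_000  # ~13.8

print(stream_processors, tmus, round(tflops, 1))  # 3840 240 13.8
```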

On the memory side, it sports 16GB of HBM2 (second-generation High Bandwidth Memory) operating across a 4,096-bit memory bus, which is capable of delivering a phenomenal memory bandwidth of up to 1 TB/s. AMD didn't disclose the Radeon VII's memory speed. However, if we do the math, the memory would need an effective data rate of 2 Gbps per pin (a 1,000MHz clock at double data rate) to achieve a throughput of 1,024 GB/s.

Credit: AMD

The VII's reference design flaunts a silver shroud, three cooling fans, and an illuminated Radeon logo on the side. On top of that, the graphics card is outfitted with two 8-pin PCIe power connectors. Although AMD didn't reveal the TDP (thermal design power), the Radeon VII's triple-fan cooler suggests it's in the neighborhood of 295W. The reference model exhibited at AMD's CES 2019 keynote comes with three DisplayPort outputs and a single HDMI port.

Radeon VII vs Radeon RX Vega 64 vs GeForce RTX 2080

To prove the Radeon VII's prowess, AMD pitted the graphics card against the Radeon RX Vega 64 and the Nvidia GeForce RTX 2080, the former being the flagship it replaces and the latter the rival it's targeting. AMD's results looked promising and revealed that the red chipmaker is closing the performance gap with Nvidia.

Content-creation benchmark slides from AMD's Radeon VII press deck (pages 11-14). Credit: AMD

In the DaVinci Resolve 15 test, the Radeon VII is reportedly up to 27 percent faster than the RX Vega 64, and beats the GeForce RTX 2080 by 2 percent. While editing 8K footage in Adobe Premiere, the Radeon VII performs up to 29 percent and 8 percent faster than the Radeon RX Vega 64 and GeForce RTX 2080, respectively. However, the card only really begins to flex its muscles when it gets to LuxMark. The VII runs up to 62 percent faster than Vega 64 and 65 percent faster than the RTX 2080. Lastly, the Radeon VII is 27 percent faster than the Radeon RX Vega 64 in Blender.

Gaming benchmark slides from AMD's Radeon VII press deck (pages 16, 17, and 19). Credit: AMD

For the gaming tests, AMD benchmarked its new flagship at three resolutions: 2560 x 1440, 3440 x 1440 ultrawide, and 3840 x 2160. The Radeon VII beats the RTX 2080 by 11.6 percent and the RX Vega 64 by 28.9 percent in Battlefield V at 2560 x 1440. In Forza Horizon 4 at 3440 x 1440, the Radeon VII is 14.8 percent faster than the Radeon RX Vega 64 but falls behind the GeForce RTX 2080 by 1 percent. When playing Strange Brigade at 3840 x 2160, it delivers up to 42.6 percent and 19.2 percent more performance than the RX Vega 64 and RTX 2080, respectively.

AMD also compared the Radeon VII in a couple of esports titles. The card delivered over 300 FPS at 1080p, 200 FPS at 1440p, and 100 FPS at 4K in Tom Clancy's Rainbow Six Siege. When it comes to Fortnite, it's around 25 percent faster than the RX Vega 64. AMD also puts a strong emphasis on streaming: the performance cost of gaming and streaming simultaneously (Fortnite at 1080p and 60 FPS with Open Broadcaster Software, or OBS) on a Radeon VII is a mere 3 percent.

Availability and Pricing

Consumers can purchase the Radeon VII for $699 directly from AMD beginning February 7. AMD will later offer the graphics card through third-party retailers starting March 7. For a limited time, it'll also come with the 'Raise the Game Fully Loaded' bundle, which includes free copies of Resident Evil 2, Devil May Cry 5, and Tom Clancy's The Division 2.

It might be a better product than we first thought. If it can beat the RTX 2080 in many titles, it could be a more interesting product than my initial take suggested.

shrapnel_indie

1696401 said:

295W???

That's quite a bit, certainly... most homebuilts won't have an issue with that if built properly.

Now, I realize that there was no Ray Tracing involved, and the titles/apps used were probably cherry picked.... but what I could gather with the data provided... it can outperform the RTX2080... It's nice to see that... but as pointed out... 295W is the cost outside of initial investment. I'm sure we'll know more once we get to see actual reviews.

ThatTechieGuy

That's a ton of power for 7nm. Was hoping it would be 25% less than NVIDIA. Also had PCIe 4.0 on the wishlist

wwaaacs5

hm if it does 4k better without ray tracing, ill prob get it over the 2070 or 2080. i don't much care for the performance hit ray tracing causes, plus i do a lil editing on the side, seems that this will probably be better for me.

collin3000

Ray tracing doesn't matter yet.... But at only a $50 price Delta (if you catch a 2080 on sale) i'd honestly go Nvidia. I hope 3rd party Vega 2's are cheaper. Especially if the TDP is 295w on 7nm vs 250w on a 2080

Kaz_2_

I get this

JamesSneed

695105 said:

Ray tracing doesn't matter yet.... But at only a $50 price Delta (if you catch a 2080 on sale) i'd honestly go Nvidia. I hope 3rd party Vega 2's are cheaper. Especially if the TDP is 295w on 7nm vs 250w on a 2080

You are likely spot on however we probably should wait to see test results. Personally I'm waiting for Navi and Nvidia's 7nm refresh before my next GPU purchase, so I can decide between those two products.

monsta

1FPS FTW LOL

jimmysmitty

251426 said:

It might be a better product than we first thought. If it can beat the RTX 2080 in many titles, it could be a more interesting product than my initial take suggested.

I never like company provided results. Mainly because if you look each game uses a different API and resolution. It smells like cherry picking to me.

We will see though once it is benchmarked by third party sources using common games instead of examples that show superiority.

s1mon7

I'm a fan of AMD, but at a higher TDP and the same price and performance as one of the worst value cards in Nvidia's history, this card isn't competitive. In 5 days AMD will no longer have the FreeSync advantage, making them compete on performance, value and efficiency alone. They lose in at least one of those categories and gain in no other. And again, that's against a card that Nvidia is price gouging on with little performance gain over last gen. Plus Nvidia adds RTX to the table, no matter how small of an added value that is for the prospective buyer. Add the fact that even if all things were equal, most people would rather pick up an Nvidia card, and AMD will have a hard time selling this card.

I feel like they launched this card just so AMD can say they have a high performance/4K GPU after all as well, while it wasn't possible for them to price it properly while still making a profit due to this card being expensive to make at TSMC's 7nm. It's one of those "barely worth it but might as well release it just so it's there" kind of products that they pushed mainly to uphold their rep. It was kind of silly that a company priding themselves as a GPU company (pushing GPUs to all high-performance console vendors) had no 4K-capable GPU available. This fixes that, but the card is a very tough sell. I was waiting for a flagship card from AMD, but I won't be getting this one. Feels also like an opportunistic, unplanned launch that happened purely due to the unexpectedly poor current Nvidia line-up.

Paradoxically, it might be a good deal for the computing and productivity folk. Getting a high performance GPU for $699 that's not far off from professional cards might be a steal. Indie devs and start-up studios might go for it. Especially if they ALSO game. But as a pure gaming card, this isn't a good value.

ninja91

1696401 said:

295W???

695105 said:

Ray tracing doesn't matter yet.... But at only a $50 price Delta (if you catch a 2080 on sale) i'd honestly go Nvidia. I hope 3rd party Vega 2's are cheaper. Especially if the TDP is 295w on 7nm vs 250w on a 2080

The 250W TDP on the 2080 is closer to 350W when OC'd to the max (reference and non-reference designs can hit 300W even at stock), and if the Radeon VII is anything like first-gen Vega, it'll OC while undervolted and land around the same TDP as a reference 2080.

jacoro1

Is there still no word on HDMI 2.1 support? Gonna need something to pair with the new OLED TVs!

Olle P

2809234 said:

I feel like they launched this card just so AMD can say they have a high performance/4K GPU after all as well, while it wasn't possible for them to price it properly while still making a profit due to this card being expensive to make at TSMC's 7nm.

I think you're right here. Rumors are:

1. The only reason this card exists is that Navi is delayed, and this is what was feasible to get to market in time.

2. An early estimate of the production cost for this card said $750. Expect each GPU to be sold at a loss for AMD.

JamesSneed

I'm hearing the 128 ROPs everyone is posting is actually wrong and it's 64. Lord.

b3astbynatur3

I think at this point it comes down to what you value more: double the VRAM or ray tracing... I personally am going with double the VRAM. Digital Foundry's dive into the Resident Evil 2 demo showed that at 4K max settings the game can use up to 10GB of VRAM. This is a close-quarters game; imagine the open-world games to come in the next 3 to 4 years. I'll take 4K with ultra textures and a little AA on top over shiny reflections any day.