Veteran

If you could still buy 1080tis, the 2080 wouldn't be competitive with it either. Not at the price point it ended up at.

Click to expand...

And that’s exactly the 2080's problem. They are poor value compared to 1080 Tis today; the only redeeming factor may be the advanced features down the road. The VII doesn't have the RTX moat, shallow as it may seem. Everyone who had $700 to spend on 1080 Ti-level performance from a Pascal/Vega-generation product has already done so.

Newcomer

So at the same price and performance as a 2080, the choice is clear: for gamers it's going to be the 2080, for RTX and DLSS. For compute/AI folks, the Vega will be better for its 16GB of HBM2.

Click to expand...

What..?

I just bought an RTX 2080 because my Ti was going bad. I didn't really want to buy it, but I already had a 3440x1440 G-Sync gaming display; otherwise, I would've regretted buying the RTX, since I can clearly see that AMD's new 7nm "Radeon Seven" is a much better buy than Nvidia's RTX 2080.

It is far better because it will hold its performance & value better in the next-gen games to come, given its 16GB of VRAM and 1TB/s of bandwidth. Forward thinking for future 1440p~4K titles for years to come. FreeSync 2 is the standard, and Radeon VII delivers PCIe 4.0.

LegendAlpha

For me this is the greatest question: why? If the silicon or logic was broken, they had enough time to test it with Vega 10 and change it in Vega 20. So the whole situation is strange.

Click to expand...

The reasons for the shift were to my knowledge not given by AMD.
Implementation bugs may not be the only reasons why the feature was delayed or cancelled. The original direction was that the new path would have been transparent to programmers, being managed by the hardware and driver.
A late abandonment of that direction may have left any API design effort in a position where resources were allocated elsewhere, or, even if resources were freely available, pulling together a robust API would simply have taken too long.

Also, not every flaw or weakness needs to be a bug that can be fixed with a correction to some logic.
A design might make decisions that in hindsight hindered the concept. If the next generation were to make fundamental changes to how the design worked, pouring resources into a design that is being replaced might hinder the new direction that learned from the lessons of the initial concept. If the time it took to get an API for Vega's methods would have dragged into Navi's launch period, it may have been deprecated to make way.
There have been silicon designs with early versions of functionality that were never exposed to the general market. Intel's Willamette had the first version of Hyper-Threading, but it would not be exposed until the second-gen Northwood.
Possibly, elements of NGG were disclosed on a more aggressive schedule than could be sustained, and it would take a more fleshed-out version of the technology to be considered acceptable.

Newcomer

I just bought an RTX 2080 because my Ti was going bad. I didn't really want to buy it, but I already had a 3440x1440 G-Sync gaming display; otherwise, I would've regretted buying the RTX, since I can clearly see that AMD's new 7nm "Radeon Seven" is a much better buy than Nvidia's RTX 2080.

It is far better because it will hold its performance & value better in the next-gen games to come, given its 16GB of VRAM and 1TB/s of bandwidth. Forward thinking for future 1440p~4K titles for years to come. FreeSync 2 is the standard, and Radeon VII delivers PCIe 4.0.

Click to expand...

Maybe you should at least wait for reviews before making such a bold declaration.

Newcomer

Being a bandwidth-limited sort of guy, I had one big wish for 2019:
a card that can do 1,000 GB/s for less than $1,000.
Actually, quad-GPU (4 TB/s) for less than $4,000.
AMD delivers in the first week?

Veteran

I just bought an RTX 2080 because my Ti was going bad. I didn't really want to buy it, but I already had a 3440x1440 G-Sync gaming display; otherwise, I would've regretted buying the RTX, since I can clearly see that AMD's new 7nm "Radeon Seven" is a much better buy than Nvidia's RTX 2080.

I have bought bottles of wine & opened them years later. At least with RX Vega, people (& you) will be able to taste its refinement continuously over the next 3+ years. (Which I hope to do, matched to my soon-to-be Acer 38" 3840 x 1600 FreeSync 2 display.)

None of that even matters if Nvidia's $1,200 graphics card cannot give us a stable 75~100+ frames at 4K.

Most of us "advanced gamers" will just stick with our G-Sync 3840 x 1600 & 3440 x 1440 gaming displays and our 1080 Tis (RTX is a bust)... and just wait 3 months until AMD releases their 7nm Vega 128 FreeSync 2 cards for $1,200, then upgrade to a nice 4K FreeSync 2 display.

Newcomer

Maybe you should at least wait for reviews before making such a bold declaration.

Click to expand...

I don't need to; I posted facts and based my opinions on those specs.

If you doubt my speculation about future games, perhaps. But I don't need to see hardware reviews to understand where games are headed (i.e., games are not going to use less VRAM in the future).

Newcomer

You sound befuddled. I don't live in a small apartment with only one room. I maintain 4 personal computers in my house. My Newegg account is close to 20 years old. Don't hate; I worked hard my whole life for my enjoyment/hobby.

Veteran

It is far better because it will hold its performance & value better in the next-gen games to come, given its 16GB of VRAM and 1TB/s of bandwidth. Forward thinking for future 1440p~4K titles for years to come. FreeSync 2 is the standard, and Radeon VII delivers PCIe 4.0.

Click to expand...

The numbers posted by AMD have the VII barely doing a 25% uplift in performance over Vega 64, sometimes less. That's not enough to touch the RTX 2080. We shall see when reviews come out.

Newcomer

Gotta laugh seeing how "futureproof" the VII is. Sure, it's got a gimmicky 16GB of VRAM, but the performance? It ties 2017's 1080 Ti. How relevant will it be in titles requiring 16GB of VRAM? Lol

TBH this launch is not as disappointing as the hyped Vega 64 one. This GPU is here just to reuse salvaged dies from the Pro cards. It also serves the purpose of saying "we are not dead".

Sure, many kids were totally expecting Navi. But yeah, YouTubers might say it's coming every second, but it's hard to show a card without drivers or any other sign of life. So don't buy hype that is not based on evidence.

In fact I'm glad they didn't hype (no Koduri, huh) the last gen of GCN-based stuff. Given the history of differences between GCN iterations, it's safe to assume Navi will have almost the same characteristics as Vega. At least they might go over 4 SEs with 6k SPs.

RegularNewcomer

Well, some games already use more than 8GB and perform well with Vega-like raw power. Same thing with some Nvidia cards having 11 or 12GB of VRAM. Yeah, you may need to cut some effects here and there, but at least in most cases you can push the textures to the max and even have fewer "stutters" due to asset swapping. That's why I have a Vega FE (this, and a good deal on it at the time), because the 4GB on my Fury was just dumb...

LegendVeteranSubscriber

If HBCC works as advertised, 16GB on the Radeon VII seems like overkill for whatever gaming scenario may come in the next handful of years. Which is why I think AMD should have promoted the card for Pro applications a lot more.

The Radeon VII is using 4GB stacks and I don't think anyone is making 2GB stacks, though. So the choice for 16GB may be more related to raw bandwidth than VRAM size.
However, 1 TB/s sounds like total overkill when compared to Vega 64's bandwidth/performance ratio. That's why I really thought they'd go with 3 stacks for 12GB at 768GB/s.
At first I thought the 4 HBM PHYs were directly connected to the back-end and that was the only way to enable all 128 ROPs. But it seems they're decoupled.
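The 16GB/1TB/s vs 12GB/768GB/s trade-off above follows directly from per-stack HBM2 arithmetic. A minimal sketch, assuming a 1024-bit interface per stack and a 2.0 Gbps per-pin data rate (the rate Radeon VII's advertised ~1 TB/s implies; these figures are assumptions, not quoted from this thread):

```python
# Rough HBM2 bandwidth arithmetic for 4-stack vs 3-stack configurations.
# Assumed: 1024-bit interface per stack, 2.0 Gbps per pin, 4GB ("4-Hi") stacks.

PINS_PER_STACK = 1024        # HBM2 interface width per stack (bits)
DATA_RATE_GBPS = 2.0         # assumed per-pin data rate (Gbit/s)
STACK_CAPACITY_GB = 4        # capacity per stack (GB)

def config(stacks):
    """Return (total VRAM in GB, total bandwidth in GB/s) for a stack count."""
    bandwidth_gbs = stacks * PINS_PER_STACK * DATA_RATE_GBPS / 8  # bits -> bytes
    return stacks * STACK_CAPACITY_GB, bandwidth_gbs

print(config(4))  # Radeon VII as shipped: (16, 1024.0) -> 16GB, ~1 TB/s
print(config(3))  # hypothetical 3-stack part: (12, 768.0) -> 12GB, 768 GB/s
```

With capacity and bandwidth tied together per stack, hitting ~1 TB/s forces the fourth stack, and 16GB comes along for the ride.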

In the end, there's a bunch of things I don't get on Radeon VII as a gaming card, but would make a lot more sense as a Pro card. Them not cutting down FP64 performance is another factor.

Compared to Titan V, the Radeon VII is a card that has 53% higher bandwidth, 33% more VRAM and 12% higher FP64/FP32/FP16 throughput at less than 25% of the price.
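The bandwidth, VRAM, and price ratios in that comparison can be sanity-checked. A quick sketch, where the spec figures are assumptions taken from public listings at the time (Titan V at ~653 GB/s, 12GB, $2,999; Radeon VII at ~1 TB/s, 16GB, $699), not numbers from this thread; the FP64/FP32/FP16 claim depends on clock assumptions and is left out:

```python
# Sanity-check of the Titan V vs Radeon VII ratios quoted above.
# Spec numbers are assumed from public listings, not stated in the thread.
titan_v    = {"bw_gbs": 652.8,  "vram_gb": 12, "price_usd": 2999}
radeon_vii = {"bw_gbs": 1000.0, "vram_gb": 16, "price_usd": 699}

bw_uplift   = radeon_vii["bw_gbs"] / titan_v["bw_gbs"] - 1      # -> "53% higher bandwidth"
vram_uplift = radeon_vii["vram_gb"] / titan_v["vram_gb"] - 1    # -> "33% more VRAM"
price_ratio = radeon_vii["price_usd"] / titan_v["price_usd"]    # -> "less than 25% of the price"

print(f"bandwidth +{bw_uplift:.0%}, VRAM +{vram_uplift:.0%}, price {price_ratio:.0%}")
```

Under those assumed specs, the three ratios quoted in the post check out.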

Perhaps this is AMD again trying to have one chip to compete on many markets due to very limited development resources for RTG.
With a whopping 128 ROPs, Vega 20 was never going to be a chip 100% exclusive to datacenters.

RegularNewcomer

https://forum.beyond3d.com/posts/2054467/
So.. it seems raster engines can still rasterise only 64 pixels per clk.
Makes a lot of sense, considering that the average triangle in modern games will likely project to far fewer than 32 pixels on screen, even at 4K.
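A back-of-envelope sketch of that claim; the visible-triangle count here is an assumed, merely plausible figure for a modern scene, not a measured one:

```python
# Why the average projected triangle sits well under 32 pixels at 4K.
# Assumed: ~500k visible triangles per frame, a made-up but plausible figure.
pixels_4k = 3840 * 2160            # ~8.3M pixels per frame
visible_triangles = 500_000        # assumption for a modern game scene
avg_px_per_triangle = pixels_4k / visible_triangles
print(avg_px_per_triangle)         # well under 32 pixels per triangle
```

At those counts the average triangle covers roughly 16 pixels, so a 64-pixel-per-clock raster rate is rarely the bottleneck.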
