At $699, the Radeon VII and the RTX 2080 cost the same and perform about the same. The question now is 16 GB of memory versus 8 GB plus ray tracing and DLSS.

Seems to me that AMD was forced to go with HBM2 memory again because of the R&D costs already sunk into Vega. It was easier for them to release a Vega 2.0 than to redo the memory interface to support cheaper GDDR6 memory.

I still question the amount of memory, though; 16 GB is only useful for professional applications, AI, data center usage, etc.

I feel a cut-down 12 GB version with 756 GB/s of memory bandwidth for, say, $150 or $200 less would sell very well and see pretty much no performance drop-off compared to the 16 GB, 1000 GB/s version.
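The bandwidth figures above follow from how HBM2 stacks add up. A back-of-the-envelope sketch (the 1024-bit-per-stack width is standard HBM2; the 2 Gbit/s pin rate is an assumed round number, and real products vary):

```python
# Rough HBM2 bandwidth arithmetic. Each HBM2 stack exposes a 1024-bit
# interface; the per-pin data rate here (2.0 Gbit/s) is an assumption.
def hbm2_bandwidth_gbs(stacks, pin_rate_gbps=2.0, bus_width_per_stack=1024):
    """Aggregate memory bandwidth in GB/s for a given number of HBM2 stacks."""
    total_gbits_per_s = stacks * bus_width_per_stack * pin_rate_gbps
    return total_gbits_per_s / 8  # bits -> bytes

print(hbm2_bandwidth_gbs(4))  # 4 stacks (16 GB) -> 1024.0 GB/s
print(hbm2_bandwidth_gbs(3))  # 3 stacks (12 GB) -> 768.0 GB/s
```

Dropping one of the four stacks removes a quarter of both capacity and bandwidth, which is why a 12 GB part lands near three-quarters of the 16 GB part's bandwidth.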

Part of the point is that AMD should have been developing larger-than-Polaris GDDR-based GPUs alongside their professionally oriented Vega GPUs. I guess they save money that way, but they end up losing quite a bit of high-end sales, as well as marketing leverage and no doubt margins.

And to be very specific: if Nvidia can put out 1080Ti/2080 performance with 352/256-bit memory controllers using high-clocked GDDRx, so can AMD. A 512-bit double-Polaris would be a force to be reckoned with in the gaming market, if AMD ever bothered to make one.

*Should have*? AMD makes so much less money than Nvidia and Intel, yet AMD is just now releasing Zen 2, which will take the crown in the server space and on desktop too. AMD is also keeping up with Nvidia in discrete GPUs. Not winning, but competitive. It's a damned miracle AMD is competing as high as they are.

Yeah, because when you don't have money and have a good deal of debt, you can just produce GPUs out of thin air. Never mind the three-year development cycle for a GPU. So unless you have time travel available and are willing to lend AMD a hand by bringing them some money from the future, it is not going to change their development pace or status.

It is also a fact. Between Intel illegally fucking them over during the P4 era and a series of spectacularly stupid decisions under previous management, AMD really doesn’t have the kind of money to throw around that their competition does. They do some pretty outstanding stuff with their limited budget, but it is what it is.

I don’t see how it would be useful, since it would take TFLOPS away from normal rendering, and I don’t think AMD has the expertise, manpower, or money to pull it off. It’s not like Vega has idle tensor cores sitting around like the RTX series does.

For example, the 2080 Ti has ~110 TFLOPS of INT8 just sitting around doing nothing except for RT and DLSS. Vega has ~60 TFLOPS (?) of INT8 only if the card commits 100% of itself to INT8.
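The "takes TFLOPS away from rendering" argument can be made concrete with a toy budget calculation. All the workload numbers below are made up for illustration; only the shape of the argument matters:

```python
# Toy compute-budget model: if ML upscaling has to share the shader cores
# (no dedicated tensor units), it eats into the rendering budget.
# total_tops and ops_per_frame are hypothetical illustrative numbers.
def shader_budget_left(total_tops, upscale_tera_ops_per_frame, fps):
    """Fraction of INT8 throughput left for rendering after the ML pass."""
    used = upscale_tera_ops_per_frame * fps  # TOPS consumed by upscaling
    return max(0.0, 1.0 - used / total_tops)

# Assume ~0.2 tera-ops of inference per frame at 60 fps on a ~60 TOPS part:
print(f"{shader_budget_left(60, 0.2, 60):.0%} left")  # prints "80% left"
```

On a card with dedicated tensor hardware the same inference work would cost the shader budget nothing, which is the asymmetry the post is pointing at.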

Exactly. If it weren't for Nvidia's ridiculous pricing and stupidly limited RAM for such expensive cards, AMD would have been forced to stand down until large Navi.
AMD looks to have a very competitive card for the price, especially for those who do more than game.

I see this as very useful if it is supported across multiple cards, and not necessarily the same model of card. Not SLI/CFX or multi-GPU in the traditional sense, but using the second card for processing, ML, and maybe even RT. For example, the primary card renders the game at whatever resolution, the second card does the ML processing, and then off to your monitor it goes. Lag would be my only concern with this method. You would not have to render at a lower resolution like Nvidia does, but could render at full resolution and reap the benefit of the processing power of your second card.
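The lag concern above can be sketched with a toy latency model. The millisecond figures are invented for illustration: handing the frame to a second card adds its transfer and processing time to end-to-end latency, but if the stages overlap across frames, sustained frame rate is set only by the slowest stage:

```python
# Toy model of the "second card does the ML pass" pipeline.
# All millisecond figures below are hypothetical.
def frame_latency_ms(render_ms, transfer_ms, ml_ms):
    """End-to-end latency for one frame: render, copy to card 2, ML pass."""
    return render_ms + transfer_ms + ml_ms

def pipelined_frame_time_ms(render_ms, transfer_ms, ml_ms):
    """Sustained frame time when stages overlap: bound by the slowest stage."""
    return max(render_ms, transfer_ms, ml_ms)

# Primary renders in 12 ms, PCIe copy takes 2 ms, ML pass takes 4 ms:
print(frame_latency_ms(12, 2, 4))        # 18 ms latency per frame
print(pipelined_frame_time_ms(12, 2, 4)) # frame rate still tracks the 12 ms render
```

So the scheme costs input-to-photon latency rather than frame rate, which matches the worry that lag, not throughput, is the weak point.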

The Microsoft demo shows some spectacular results going from 1080p to 4K. This one example looks much better than DLSS, but it's too soon to tell. What is awesome about this is that anyone with a DX12 card can use it. 4K gaming may now come to a large number of folks without needing to upgrade.

I agree it’d be great if a second card could be purposed just for RT and such (but also do xfire for older games). I think a lot of people think the same, and it’s definitely a way for AMD to catch up without going down the massive-die route.

I once read someone saying RT would be easy to split off, but I’ve never gotten that deep into it.

The other aspect is you don't need to pay the Nvidia tax to get it; any DX12 card will do it. I just updated my last post, just before yours, with a snippet about the Microsoft demonstration.

All of these (non-xfire, non-SLI) multi-GPU capabilities that have been / are being introduced in D3D12 are pretty neat. I like the way explicit multi-GPU works, and this is another neat use for secondary cards in a system.

Wondering if gaming systems in a year or two are going to look like the early PhysX days, with everyone buying / keeping older cards for dedicated ML.

I would also agree with ManofGod about image color quality on AMD vs. Nvidia. They have subtle differences, and sometimes stark ones. HDR seems to exacerbate compression-type artifacts. Still, it's a valid point that some objective proof is needed. Nvidia's level-of-detail bias seems to be less negative, hence blurrier textures, since the mipmaps (lower-resolution versions of textures) are pushed closer to the camera view. It may just be how Nvidia and AMD set their default image settings. Some find AMD sharper but noisier, while others find Nvidia softer and blurrier, making it just a preference; many simply can't tell the difference to begin with, making it a moot point, or the differences are so small as to be insignificant.

With new drivers and a Windows update, I will see if I can capture the HDR differences between Nvidia and AMD. The thing is, if the monitor being used to look at the differences is poor, it will prove nothing. This is a hard one to fully show: taking pictures of a 10-bit HDR image with an 8-bit SDR camera, for example, or showing a 10-bit image on an 8-bit panel. Now, if one can show the differences despite those limitations, the actual differences will definitely be more pronounced on higher-quality monitors.

NO, NO, NO!!!

Bandwidth compression is LOSSLESS!
What you are describing is akin to "loudness" in music... aka "vibrance".

With increased bit depth (HDR-10) there are way more colors, so there is less chance of duplicate colors to compress, so the compression ratio goes down and the memory bandwidth needed goes up. Pascal had a much bigger issue with losing performance under HDR than AMD did; I am inclined to think it lies with this.
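The chain of reasoning above (more bit depth, more distinct values, worse lossless compression) can be demonstrated with a general-purpose lossless compressor standing in for the GPU's delta color compression. zlib is only an illustrative proxy, not what GPUs actually use:

```python
import zlib

# The same smooth ramp quantized to 10 bits has more distinct values than
# at 8 bits, so a lossless compressor (zlib as a stand-in for GPU delta
# color compression) achieves a worse ratio on it.
def gradient_bytes(bit_depth, samples=4096):
    """A smooth brightness ramp quantized to `bit_depth`, as 2-byte values."""
    levels = (1 << bit_depth) - 1
    vals = [round(i / (samples - 1) * levels) for i in range(samples)]
    return b"".join(v.to_bytes(2, "little") for v in vals)

size_8bit = len(zlib.compress(gradient_bytes(8)))
size_10bit = len(zlib.compress(gradient_bytes(10)))
print(size_8bit < size_10bit)  # True: the 8-bit ramp compresses smaller
```

Since real scenes in HDR-10 behave the same way, the effective memory bandwidth per frame drops, which is consistent with Pascal's reported HDR performance hit.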

The thread listed deals with non-gaming graphics but was a good read. Pretty sure Microsoft's WHQL mandates the rendering quality of 2D text, so that should be very close, if not the same. When it comes to gaming graphics, Nvidia does some things differently. For example, on my 144 Hz HDR monitor, AMD maintains 10-bit-depth HDR-10 at all refresh rates. Nvidia only has 10-bit depth at 144 Hz; at anything else it uses 8-bit color plus dithering. Unless what Windows is reporting is wrong, AMD's image is a much better quality HDR-10 image at refresh rates below 144 Hz.
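For readers unfamiliar with "8-bit color plus dithering": a 10-bit value that falls between two 8-bit steps gets approximated by mixing the two neighbouring 8-bit codes in proportion, trading spatial noise for the missing bit depth. A minimal sketch (random dithering for simplicity; real drivers use more sophisticated patterns):

```python
import random

# Sketch of quantizing a 10-bit level to 8 bits with dithering: the
# fraction an 8-bit code can't express is represented by randomly
# choosing between the two neighbouring 8-bit codes.
def dither_10_to_8(value_10bit, rng=random.random):
    """Map a 0..1023 value to 0..255, dithering the lost fraction."""
    scaled = value_10bit / 4.0      # 10-bit -> 8-bit scale
    base = int(scaled)
    frac = scaled - base            # part lost by plain truncation
    return min(255, base + (1 if rng() < frac else 0))

# Averaged over many pixels, the dithered output approximates the 10-bit
# level (514 / 4 = 128.5), at the cost of per-pixel noise:
samples = [dither_10_to_8(514) for _ in range(100_000)]
print(round(sum(samples) / len(samples), 1))  # ~128.5
```

That per-pixel noise is exactly the kind of difference that would show up as AMD's native 10-bit output looking cleaner at sub-144 Hz refresh rates.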

It has been a while since I went looking at mipmaps; I will have to get back on that. If Nvidia and AMD have maintained their relative LOD biases, then AMD will have a sharper image in general.

Even if they're limited to the reference PCB, there also haven't been any alterations to the cooling (at least there don't appear to be on either of these linked cards), which is disappointing. Perhaps there wasn't enough R&D-to-production time for AIBs to do it. Hopefully we'll see some of that in the future.

At least it's not a blower. It'll be interesting to see if 7nm Vega will be more or less power hungry than original Vega.