Well it does matter to a degree. Always has. If all the 8th gen consoles had Rapid Packed Math, for example, then developers would use it, but sadly that isn't the case, as it wasn't bolted onto Graphics Core Next until after the 8th gen base consoles launched.

Then the argument should be about features instead of the architecture. There's no reason why we couldn't opt in to hardware extensions for the same effect ...

Pemalite said:

Turing is actually more efficient than Pascal on an SM-to-SM comparison... However, Turing introduced a lot of new hardware designed for other tasks... Once games start to leverage ray tracing more abundantly, Turing's architecture will shine far more readily.

It's a chicken-and-egg scenario.

Whether nVidia's approach was the right one... Still remains to be seen. Either way, AMD still isn't able to match Turing, despite Turing's pretty large investment in non-rasterization technologies that take up a ton of die space... Which is extremely telling.

Higher performance per SM came at the expense of a 40%+ larger die area compared to its predecessor, so Nvidia's execution is not as flawlessly efficient as you seem to believe ...

As far as ray tracing is concerned, there's no reason to believe that either AMD or Intel couldn't one-up whatever Turing has, because there's still potential to improve it with new extensions such as traversal shaders, more efficient acceleration structures, and beam tracing! It's far from guaranteed that Turing is built for the future of ray tracing when the yet-to-be-released new consoles, with a possibly superior feature set, could very well obsolete the way games design their ray tracing around Turing hardware ...

Turing invested just as much elsewhere, such as tensor cores, texture space shading, mesh shaders, independent thread scheduling, variable rate shading, and some GCN features (barycentric coordinates, flexible memory model, scalar ops) that can directly enhance rasterization as well, so it's just mainstream perception that overhypes its focus on ray tracing ...

There are other ways to bloat Nvidia's architectures in the future with features from consoles they still haven't adopted, like global ordered append and shader-specified stencil values ...

Pemalite said:

Well. It's only early days yet. Turing is only the start of nVidia's efforts into investing in Tensor Cores.

In saying that... Routing FP16 through the Tensor cores has one massive advantage... It means that Turing can dual issue FP16 and FP32/INT32 operations at the same time, allowing the Warp Scheduler another option to keep the SM partition busy working.

So there are certainly a few pros to the cons you have outlined.

Tensor cores are pretty much DOA since the consoles won't be adopting them and AMD aren't interested in the idea either. Not a surprise, since there are hardly any applications for them beyond image post-processing, and even then they don't provide a clear benefit over existing methods ...

Compare that to double-rate FP16 in shaders, which is far more flexible and can be used for many other things including post-processing, water rendering, ambient occlusion, and signed distance field collision for hair physics ...
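For illustration, double-rate FP16 is possible because two 16-bit floats fit in the same 32-bit register as one FP32 value, so packed-math hardware can issue one instruction over both halves. A minimal NumPy sketch of the packing itself (the round-trip here is illustrative only; real hardware does the arithmetic on the packed pair):

```python
import numpy as np

# Two FP16 values occupy the same 32 bits as a single FP32 value,
# which is what lets "rapid packed math" hardware issue one
# instruction over both halves of the register.
pair = np.array([1.5, -2.25], dtype=np.float16)
packed = pair.view(np.uint32)[0]  # both halves in one 32-bit word

# Round-trip: reinterpret the 32-bit word as two FP16 halves again
unpacked = np.array([packed], dtype=np.uint32).view(np.float16)
print(unpacked.tolist())  # [1.5, -2.25]
```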

I don't think the real-time graphics industry is headed in the direction of tensor cores since there are very few compelling use cases for them ...

Pemalite said:

In some instances the GTX 1070 pulls ahead of the Xbox One X, and sometimes rather significantly. (Remember, I also own the Xbox One X.) Often the Xbox One X is matching my old Radeon RX 580 in most games... No way would I be willing to say it's matching a 1070 across the board though... Especially when the Xbox One X is generally sacrificing effects for resolution/framerate.

Generally speaking, you're going to need a GTX 1070 to get the same experience, as the X1X is pretty definitively ahead of the GTX 1060 at the same settings, and by extension the RX 580 as well ...

Pemalite said:

I would place the Playstation 4 at more than 4x faster. It has far more functional units at its disposal; granted, Maxwell is also a far more efficient architecture... The Playstation 4 also has clockspeed and bandwidth on its side.

I am surprised the Switch gets as close as it does to be honest.

From a GPU compute perspective the PS4 is roughly ~4.7x faster, and the same with texture sampling depending on formats, but its geometry performance is just a little over 2x faster than the Switch, so it's not a total slam dunk in theoretical performance since developers need to use some features like async compute to mask the relatively low geometry performance ...
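The ~4.7x compute figure can be checked with a back-of-envelope calculation, assuming the commonly cited (unofficial) clocks and core counts and counting one FMA as two FLOPs:

```python
# Back-of-envelope check of the ~4.7x compute gap. The clocks and
# core counts are the commonly cited figures, not official spec sheets.
ps4_gflops    = 18 * 64 * 2 * 0.800   # 18 GCN CUs x 64 lanes @ 800 MHz
switch_gflops = 256 * 2 * 0.768       # 256 Maxwell cores @ 768 MHz (docked)

print(round(ps4_gflops / switch_gflops, 1))  # ~4.7
```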

The Switch gets as 'close' (it still can't run many AAA games) as it does since NV's driver/shader compiler team desires to take responsibility for performance, so it doesn't matter what platform you develop on for Nvidia hardware when their entire software stack is more productive ...

For AMD, on the PC side, they can't change practices as easily, so I can only imagine their envy of Sony being able to shove a whole new gfx API down every developer's throat ...

The leak comes from the YouTuber AdoredTV, and I deemed it fake long ago; here's the full lineup:

Funny thing about this lineup: it matches the next-gen consoles very nicely. Take the R5 3500G, disable 2 CUs for better yields, clock it at 1.8GHz and you get 4.1 teraflops, which is a very good fit for Xbox Lockhart. Navi 12 with 4 CUs disabled, clocked at 1.8GHz, adds up to 8.3TF and matches the rumored PS5. And then the last one, Navi 10 at 48 CUs clocked at 1.95GHz, gives 12TF, which is a nice fit for Xbox Anaconda.
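Those teraflop figures follow from the standard GCN throughput formula (64 shader lanes per CU, one FMA = 2 FLOPs per lane per clock); a quick sketch, assuming those lane counts carry over to Navi:

```python
# Rough FP32 throughput check for the CU/clock figures above.
# Assumes 64 shader lanes per CU and one FMA (2 FLOPs) per lane
# per clock, as on GCN.
LANES_PER_CU = 64
FLOPS_PER_LANE_PER_CLOCK = 2  # one fused multiply-add = 2 operations

def teraflops(cus: int, clock_ghz: float) -> float:
    return cus * LANES_PER_CU * FLOPS_PER_LANE_PER_CLOCK * clock_ghz / 1000

# "Lockhart": 20 CU part with 2 CUs disabled, at 1.8 GHz
print(round(teraflops(18, 1.8), 1))   # ~4.1
# "PS5": 40 CU part with 4 CUs disabled, at 1.8 GHz
print(round(teraflops(36, 1.8), 1))   # ~8.3
# "Anaconda": 48 CUs at 1.95 GHz
print(round(teraflops(48, 1.95), 1))  # ~12.0
```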

Now the clock-speed is based on the leaked Gonzalo, which indicates a CPU clocked at 3.2GHz and a GPU clocked at 1.8GHz for the PS5. Since consoles usually have lower clock-speeds than desktop parts, I assume Navi can hit 2-2.1GHz without much issue.

About Navi hitting GeForce 2080 Ti performance, I don't think it's impossible. Vega 7 is close to the GeForce 2080, and assuming Navi can hit 2-2.1GHz, then a 64 CU GPU with maybe a 5% per-TF increase from the architecture comes very close to the GeForce 2080 Ti, about 10% under, which is what this leak suggests. And a 64 CU Navi GPU should have a die size around 300mm²+, so even the price is not out of whack, with GDDR6 of course.

Even though I still think this is a fake leak, the rumored next-gen consoles give it some credibility.

It doesn't, with neither the lineup nor the price. It just doesn't hold up at all if you look closely enough:

Navi 12 having so many different core counts. GCN4 had Polaris 10 with 36CU, Polaris 11 with 16CU and Polaris 12 with 8CU; add to that Polaris 22 with 24 CU in the RX Vega M, which is paired to an Intel CPU. In other words, every different CU count on the chip also resulted in a different Polaris version name. Having Navi 12 range from 15 to 40 CU is thus patently wrong.

R3 3300G/R5 3500G. So much wrong with those. First, AMD already retired the R3/5/7/9 naming scheme with Polaris; there's no reason to bring it back, especially not if the top end isn't going to be R7/R9, but still RX. Unless that's meant to stand for Ryzen 3/5, of course. Then, the CU counts are impossibly high. There's no way they could be fed through DDR4 without choking on the bandwidth. Even with DDR4-3200, efficiently feeding more than 12 CU is next to impossible. Having so many CU would just bloat the chip size, making them more expensive for AMD to produce.

Those prices are unbelievably, impossibly low. While it's clear that AMD will want to undercut NVidia's prices to gain market share, they wouldn't undercut them by such a massive amount. I mean, the RTX 2080 is over 1000€, and the proposed RX 3080 would already come close at less than a quarter of the price? No can do. They would be even cheaper than their own predecessors, which are already at bargain bin prices due to the end of the cryptomining boom and thus high stocks that need to be cleared out. Not only would AMD not make money with those prices, but they would also ensure that the rest of the Polaris and Vega cards become instantly unsellable. In other words, AMD would lose money with those prices - and with it the goodwill of their board partners who build the actual graphics cards.

The TDP values: AMD was trailing behind NVidia by a lot, and with these, they would surpass NVidia again, and not just marginally so. The recently released 1650 for instance trails an RX 570 by over 20% if locked to 75W, and an RX 3060 is supposed to be on par with an RX 580? More power than a Vega 64 LC for less than half the TDP? That's simply not realistic.

The VRAM sizes: an RX 3060 having around RX 580 power but only half the memory? Really don't think so. And the Vega 64 could already have used more than 8GB, so the 3080 being stuck with it while being more powerful, while not impossible, would still be a major disappointment.
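The DDR4 bandwidth objection can be put in rough numbers. The 12 CU ceiling above is the poster's estimate; the RX 560 comparison below is my own illustration, assuming its usual 128-bit 7 Gbps GDDR5 configuration:

```python
# Dual-channel DDR4 bandwidth vs. what a discrete GPU's CUs normally get.
def ddr4_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    # each DDR4 channel is 64 bits (8 bytes) wide
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(ddr4_bandwidth_gbs(3200))  # 51.2 GB/s for dual-channel DDR4-3200

# Compare: an RX 560 feeds its 16 CUs with ~112 GB/s of GDDR5,
# i.e. ~7 GB/s per CU. At that rate, 51.2 GB/s covers only about
# 7 CUs -- and an APU's CPU cores share that same bandwidth too.
print(round(ddr4_bandwidth_gbs(3200) / (112 / 16), 1))  # ~7.3
```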

Well it does matter to a degree. Always has. If all the 8th gen consoles had Rapid Packed Math, for example, then developers would use it, but sadly that isn't the case, as it wasn't bolted onto Graphics Core Next until after the 8th gen base consoles launched.

Then the argument should be about features instead of the architecture. There's no reason why we couldn't opt in to hardware extensions for the same effect ...

Sure.

fatslob-:O said:

Higher performance per SM came at an expense of 40%+ larger die area compared to it's predecessor so Nvidia is not as flawless as you seem to believe in their execution of efficiency ...

The reason for the blow-up in die size is pretty self-explanatory: lots of functional units spent on specific tasks. It's actually a similar design paradigm to the one the Geforce FX took.

But even with the 40%+ larger die area, nVidia is still beating AMD hands down... And I am not pretending that's a good thing either.

fatslob-:O said: As far as ray tracing is concerned, there's no reason to believe that either AMD or Intel couldn't one-up whatever Turing has, because there's still potential to improve it with new extensions such as traversal shaders, more efficient acceleration structures, and beam tracing! It's far from guaranteed that Turing is built for the future of ray tracing when the yet-to-be-released new consoles, with a possibly superior feature set, could very well obsolete the way games design their ray tracing around Turing hardware ...

I agree. Never said anything to the contrary... However we simply aren't there yet so basically everything is speculation.

In saying that... Intel's Xe GPU hardware will have GPU-accelerated ray tracing support; how that will look, and whether it will take the approach Turing has, remains to be seen.

fatslob-:O said:

Turing invested just as much elsewhere, such as tensor cores, texture space shading, mesh shaders, independent thread scheduling, variable rate shading, and some GCN features (barycentric coordinates, flexible memory model, scalar ops) that can directly enhance rasterization as well, so it's just mainstream perception that overhypes its focus on ray tracing ...

There are other ways to bloat Nvidia's architectures in the future with features from consoles they still haven't adopted, like global ordered append and shader-specified stencil values ...

I have already expressed my opinion on all of this. I would personally prefer if the individual compute units were made more flexible, so they could continue to lend themselves to traditional rasterization, rather than dedicating hardware to ray tracing. But I digress.

At the end of the day, Turing is simply better than Vega or Polaris. It's not the leap many expected after the resounding success that was Pascal, but it is what it is. Whether nVidia's gamble is the right one remains to be seen, but it's hard not to be impressed: even though die sizes have bloated outwards while performance only marginally increased, it still resoundingly beats AMD.

And this comes from someone who has historically only bought AMD GPUs and will likely continue to do so. Even my notebook is AMD.

fatslob-:O said:

Pemalite said:

In some instances the GTX 1070 pulls ahead of the Xbox One X, and sometimes rather significantly. (Remember, I also own the Xbox One X.) Often the Xbox One X is matching my old Radeon RX 580 in most games... No way would I be willing to say it's matching a 1070 across the board though... Especially when the Xbox One X is generally sacrificing effects for resolution/framerate.

Generally speaking, you're going to need a GTX 1070 to get the same experience, as the X1X is pretty definitively ahead of the GTX 1060 at the same settings, and by extension the RX 580 as well ...

Doesn't generally happen. The Xbox One X really isn't doing much that a Radeon RX 580/590 can't do... Granted, it's generally able to hit higher resolutions than those parts... Likely thanks to its higher theoretical bandwidth (even if it's on a crossbar!) and lower overheads. However... It does so at the expense of image quality, with most games sitting around a medium quality preset.

I would rather take an RX 580 and run most games at 1440P with the settings dialed up than take the dynamic-resolution implementation most Xbox One X games use with medium quality settings. Games simply look better.

Still not convinced an Xbox One X is equivalent to a 1070. Just haven't seen it push the same levels of visuals at high resolutions as that part.

In fact... In Gears of War 4, Forza 7, Fortnite, Witcher 3, Final Fantasy XV, Dishonored 2, Resident Evil 2 and so on, a Geforce 1060 6GB is turning in similar (and sometimes superior) results to the Xbox One X.

A Geforce 1070 would be a step up again. Obviously some games will run better on one platform than another... I mean, Final Fantasy runs better on the Playstation 4 Pro than the Xbox One X... But this is the general trend with multiplats: 1060 6GB > Xbox One X > Playstation 4 Pro > Playstation 4 > Xbox One > Nintendo Switch.

fatslob-:O said:

From a GPU compute perspective the PS4 is roughly ~4.7x faster, and the same with texture sampling depending on formats, but its geometry performance is just a little over 2x faster than the Switch, so it's not a total slam dunk in theoretical performance since developers need to use some features like async compute to mask the relatively low geometry performance ...

The Switch gets as 'close' (it still can't run many AAA games) as it does since NV's driver/shader compiler team desires to take responsibility for performance, so it doesn't matter what platform you develop on for Nvidia hardware when their entire software stack is more productive ...

For AMD, on the PC side, they can't change practices as easily, so I can only imagine their envy of Sony being able to shove a whole new gfx API down every developer's throat ...

Never argued anything to the contrary to be honest.

The Switch does have some pros and cons. It's well known that Maxwell is generally more efficient than anything Graphics Core Next provides in gaming workloads outside of asynchronous compute, but considering that the Xbox One and Playstation 4 generally have more hardware overall, it's really a moot point.

It doesn't, with neither the lineup nor the price. It just doesn't hold up at all if you look closely enough:

Navi 12 having so many different core counts. GCN4 had Polaris 10 with 36CU, Polaris 11 with 16CU and Polaris 12 with 8CU; add to that Polaris 22 with 24 CU in the RX Vega M, which is paired to an Intel CPU. In other words, every different CU count on the chip also resulted in a different Polaris version name. Having Navi 12 range from 15 to 40 CU is thus patently wrong.

R3 3300G/R5 3500G. So much wrong with those. First, AMD already retired the R3/5/7/9 naming scheme with Polaris; there's no reason to bring it back, especially not if the top end isn't going to be R7/R9, but still RX. Unless that's meant to stand for Ryzen 3/5, of course. Then, the CU counts are impossibly high. There's no way they could be fed through DDR4 without choking on the bandwidth. Even with DDR4-3200, efficiently feeding more than 12 CU is next to impossible. Having so many CU would just bloat the chip size, making them more expensive for AMD to produce.

Those prices are unbelievably, impossibly low. While it's clear that AMD will want to undercut NVidia's prices to gain market share, they wouldn't undercut them by such a massive amount. I mean, the RTX 2080 is over 1000€, and the proposed RX 3080 would already come close at less than a quarter of the price? No can do. They would be even cheaper than their own predecessors, which are already at bargain bin prices due to the end of the cryptomining boom and thus high stocks that need to be cleared out. Not only would AMD not make money with those prices, but they would also ensure that the rest of the Polaris and Vega cards become instantly unsellable. In other words, AMD would lose money with those prices - and with it the goodwill of their board partners who build the actual graphics cards.

The TDP values: AMD was trailing behind NVidia by a lot, and with these, they would surpass NVidia again, and not just marginally so. The recently released 1650 for instance trails an RX 570 by over 20% if locked to 75W, and an RX 3060 is supposed to be on par with an RX 580? More power than a Vega 64 LC for less than half the TDP? That's simply not realistic.

The VRAM sizes: an RX 3060 having around RX 580 power but only half the memory? Really don't think so. And the Vega 64 could already have used more than 8GB, so the 3080 being stuck with it while being more powerful, while not impossible, would still be a major disappointment.

So no, nothing realistic about the leak at all.

Yep, also we know that the 3000 series APUs will be a Zen+/Vega refresh and not Zen 2/Navi, which is another indication the leak is made up. I've watched all those Navi clips from AdoredTV; this "leaker" seems to know a lot of AMD's technical stuff, but didn't know that Zen 2 would be a chiplet design and had no clue about the Vega 7 gaming GPU. That's why I gave up on this leak early.

I myself think there will only be a low-end Navi 11 GPU with 20 or fewer CUs and a midrange Navi 10 GPU with 40 CUs that will have less than GeForce 2060 performance. Navi 12 and Navi 20 are just made up by people who have managed to trick websites/youtubers into thinking they are legit.

"Donald Trump is the greatest president that god has ever created" - Trumpstyle

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

Currently Playing:

It doesn't, with neither the lineup nor the price. It just doesn't hold up at all if you look closely enough:

Navi 12 having so many different core counts. GCN4 had Polaris 10 with 36CU, Polaris 11 with 16CU and Polaris 12 with 8CU; add to that Polaris 22 with 24 CU in the RX Vega M, which is paired to an Intel CPU. In other words, every different CU count on the chip also resulted in a different Polaris version name. Having Navi 12 range from 15 to 40 CU is thus patently wrong.

R3 3300G/R5 3500G. So much wrong with those. First, AMD already retired the R3/5/7/9 naming scheme with Polaris; there's no reason to bring it back, especially not if the top end isn't going to be R7/R9, but still RX. Unless that's meant to stand for Ryzen 3/5, of course. Then, the CU counts are impossibly high. There's no way they could be fed through DDR4 without choking on the bandwidth. Even with DDR4-3200, efficiently feeding more than 12 CU is next to impossible. Having so many CU would just bloat the chip size, making them more expensive for AMD to produce.

Those prices are unbelievably, impossibly low. While it's clear that AMD will want to undercut NVidia's prices to gain market share, they wouldn't undercut them by such a massive amount. I mean, the RTX 2080 is over 1000€, and the proposed RX 3080 would already come close at less than a quarter of the price? No can do. They would be even cheaper than their own predecessors, which are already at bargain bin prices due to the end of the cryptomining boom and thus high stocks that need to be cleared out. Not only would AMD not make money with those prices, but they would also ensure that the rest of the Polaris and Vega cards become instantly unsellable. In other words, AMD would lose money with those prices - and with it the goodwill of their board partners who build the actual graphics cards.

The TDP values: AMD was trailing behind NVidia by a lot, and with these, they would surpass NVidia again, and not just marginally so. The recently released 1650 for instance trails an RX 570 by over 20% if locked to 75W, and an RX 3060 is supposed to be on par with an RX 580? More power than a Vega 64 LC for less than half the TDP? That's simply not realistic.

The VRAM sizes: an RX 3060 having around RX 580 power but only half the memory? Really don't think so. And the Vega 64 could already have used more than 8GB, so the 3080 being stuck with it while being more powerful, while not impossible, would still be a major disappointment.

So no, nothing realistic about the leak at all.

Yep, also we know that the 3000 series APUs will be a Zen+/Vega refresh and not Zen 2/Navi, which is another indication the leak is made up. I've watched all those Navi clips from AdoredTV; this "leaker" seems to know a lot of AMD's technical stuff, but didn't know that Zen 2 would be a chiplet design and had no clue about the Vega 7 gaming GPU. That's why I gave up on this leak early.

I myself think there will only be a low-end Navi 11 GPU with 20 or fewer CUs and a midrange Navi 10 GPU with 40 CUs that will have less than GeForce 2060 performance. Navi 12 and Navi 20 are just made up by people who have managed to trick websites/youtubers into thinking they are legit.

Well, I expect Navi to be at least around 2060 in performance (between 2060 and 2070 is my expectation, or around Vega 56, if you will), but yes, no high-end GPU or other wonder APU in sight.

The leak comes from the YouTuber AdoredTV, and I deemed it fake long ago; here's the full lineup:

Funny thing about this lineup: it matches the next-gen consoles very nicely. Take the R5 3500G, disable 2 CUs for better yields, clock it at 1.8GHz and you get 4.1 teraflops, which is a very good fit for Xbox Lockhart. Navi 12 with 4 CUs disabled, clocked at 1.8GHz, adds up to 8.3TF and matches the rumored PS5. And then the last one, Navi 10 at 48 CUs clocked at 1.95GHz, gives 12TF, which is a nice fit for Xbox Anaconda.

Now the clock-speed is based on the leaked Gonzalo, which indicates a CPU clocked at 3.2GHz and a GPU clocked at 1.8GHz for the PS5. Since consoles usually have lower clock-speeds than desktop parts, I assume Navi can hit 2-2.1GHz without much issue.

About Navi hitting GeForce 2080 Ti performance, I don't think it's impossible. Vega 7 is close to the GeForce 2080, and assuming Navi can hit 2-2.1GHz, then a 64 CU GPU with maybe a 5% per-TF increase from the architecture comes very close to the GeForce 2080 Ti, about 10% under, which is what this leak suggests. And a 64 CU Navi GPU should have a die size around 300mm²+, so even the price is not out of whack, with GDDR6 of course.

Even though I still think this is a fake leak, the rumored next-gen consoles give it some credibility.

So you really believe Sony will launch a new gen that is not even 2 times faster than its latest console launch, the PS4 Pro? But for some reason you believe the Anaconda will be 2x the X1X, making it almost 1.5x the power of the PS5, assuming it's all coming at the same time. What do you expect the prices to be for them?

Honestly, a new gen console coming in at only 1.3x the power of the previous gen's top console would be just ridiculous. Especially if it's weaker than Stadia.


The leak comes from the YouTuber AdoredTV, and I deemed it fake long ago; here's the full lineup:

Funny thing about this lineup: it matches the next-gen consoles very nicely. Take the R5 3500G, disable 2 CUs for better yields, clock it at 1.8GHz and you get 4.1 teraflops, which is a very good fit for Xbox Lockhart. Navi 12 with 4 CUs disabled, clocked at 1.8GHz, adds up to 8.3TF and matches the rumored PS5. And then the last one, Navi 10 at 48 CUs clocked at 1.95GHz, gives 12TF, which is a nice fit for Xbox Anaconda.

Now the clock-speed is based on the leaked Gonzalo, which indicates a CPU clocked at 3.2GHz and a GPU clocked at 1.8GHz for the PS5. Since consoles usually have lower clock-speeds than desktop parts, I assume Navi can hit 2-2.1GHz without much issue.

About Navi hitting GeForce 2080 Ti performance, I don't think it's impossible. Vega 7 is close to the GeForce 2080, and assuming Navi can hit 2-2.1GHz, then a 64 CU GPU with maybe a 5% per-TF increase from the architecture comes very close to the GeForce 2080 Ti, about 10% under, which is what this leak suggests. And a 64 CU Navi GPU should have a die size around 300mm²+, so even the price is not out of whack, with GDDR6 of course.

Even though I still think this is a fake leak, the rumored next-gen consoles give it some credibility.

So you really believe Sony will launch a new gen that is not even 2 times faster than its latest console launch, the PS4 Pro? But for some reason you believe the Anaconda will be 2x the X1X, making it almost 1.5x the power of the PS5, assuming it's all coming at the same time. What do you expect the prices to be for them?

Honestly, a new gen console coming in at only 1.3x the power of the previous gen's top console would be just ridiculous. Especially if it's weaker than Stadia.

Adored has a new vid up, and his more recent info states that while the SKUs are supposedly the same, I believe the performance levels and power usage are slightly worse. The top SKU was Radeon VII +10% performance, but at somewhat higher than a 225W TDP. I don't believe the pricing was too much different overall either. The new info seems more likely than what we had before, but it's tough to say overall. It still doesn't feel like a typical Radeon launch lineup, but they need to shake things up, and maybe that's what they're trying to do this time around. Hard to say.

*I was second-guessing myself afterwards, so I watched again and realized I definitely made a couple of mistakes above. I should have said the number of SKUs is the same but the new SKU layout is different. The top tier isn't more power hungry either; it's a few of the lower-tiered models. The top tier is the only one that's gone up in price, basically to $500.

You can see below as I'm posting a shot of the new leaked specs. Should've done that in the first place.


The reason for the blow-up in die size is pretty self-explanatory: lots of functional units spent on specific tasks. It's actually a similar design paradigm to the one the Geforce FX took.

But even with the 40%+ larger die area, nVidia is still beating AMD hands down... And I am not pretending that's a good thing either.

Never did pretend that it was a good thing; I was only trying to present a counterpoint to your perception that Nvidia somehow has a near-perfect record on efficiency ...

Pemalite said:

The reason for the blow-up in die size is pretty self-explanatory: lots of functional units spent on specific tasks.

It's actually a similar design paradigm to the one the Geforce FX took.

But even with the 40%+ larger die area, nVidia is still beating AMD hands down... And I am not pretending that's a good thing either.

I never argued that it was a good thing, but I assume I got my point across by now ...

Pemalite said:

I agree. Never said anything to the contrary... However we simply aren't there yet so basically everything is speculation.

In saying that... Intel's Xe GPU hardware will have GPU-accelerated ray tracing support; how that will look, and whether it will take the approach Turing has, remains to be seen.

That wasn't my impression, so I'm not sure if you realize this explicitly, but when betting on a new technology to be standardized, there's always going to be a risk of an 'overengineered' solution ending up inferior on a technical performance basis, because like it or not there are going to be trade-offs depending on each competitor's strategy ...

Let's take a more sympathetic approach to AMD for a moment and not disregard their achievements so far for every downside they have, because at the end of the day they still managed to pivot Nvidia a little bit towards their direction, so by no means is AMD worse off technologically speaking after the release of Turing ... (AMD were arguably far worse off against Pascal, because unlike with Turing, where they can be similarly competitive on a performance/area basis, they couldn't compete against Pascal on ANY metric.)

Meh, Xe won't be interesting to talk about at all until it gets closer to release, if it ever releases at all given the current situation at Intel ...

Pemalite said:

I have already expressed my opinion on all of this. I would personally prefer if the individual compute units were made more flexible, so they could continue to lend themselves to traditional rasterization, rather than dedicating hardware to ray tracing. But I digress.

At the end of the day, Turing is simply better than Vega or Polaris. It's not the leap many expected after the resounding success that was Pascal, but it is what it is. Whether nVidia's gamble is the right one remains to be seen, but it's hard not to be impressed: even though die sizes have bloated outwards while performance only marginally increased, it still resoundingly beats AMD.

And this comes from someone who has historically only bought AMD GPUs and will likely continue to do so. Even my notebook is AMD.

More compute units aren't sustainable if we want a ray-traced future, as a look at Volta shows. I don't deny that Turing still has an advantage compared to AMD's offerings; however, it would be prudent not to assume that Nvidia will retain this advantage forever when ultimately they can't solely control the direction the entire industry is headed ...

Pemalite said:

Doesn't generally happen. The Xbox One X really isn't doing much that a Radeon RX 580/590 can't do... Granted, it's generally able to hit higher resolutions than those parts... Likely thanks to its higher theoretical bandwidth (even if it's on a crossbar!) and lower overheads. However... It does so at the expense of image quality, with most games sitting around a medium quality preset.

I would rather take an RX 580 and run most games at 1440P with the settings dialed up than take the dynamic-resolution implementation most Xbox One X games use with medium quality settings. Games simply look better.

Still not convinced an Xbox One X is equivalent to a 1070. Just haven't seen it push the same levels of visuals at high resolutions as that part.

In fact... In Gears of War 4, Forza 7, Fortnite, Witcher 3, Final Fantasy XV, Dishonored 2, Resident Evil 2 and so on, a Geforce 1060 6GB is turning in similar (and sometimes superior) results to the Xbox One X.

*snip*

A Geforce 1070 would be a step up again. Obviously some games will run better on one platform than another... I mean, Final Fantasy runs better on the Playstation 4 Pro than the Xbox One X... But this is the general trend with multiplats: 1060 6GB > Xbox One X > Playstation 4 Pro > Playstation 4 > Xbox One > Nintendo Switch.

The source at hand doesn't seem all that rigorous in its analysis compared to Digital Foundry, since he doesn't present a frame rate counter, omits information, and sometimes changes settings, which raises a big red flag for me. Let's use higher quality sources of data instead for better insight ...

Going by DF's video on SWBF2, you practically need a 1080 Ti (maybe you could get away with a 1080?) to hold 4K60FPS on ultra settings to do better than an X1X, which I imagine runs the high preset with a dynamic res between 75-100% of 4K. An X1X soundly NUKES a 1060 out of orbit with the 4K MEDIUM preset settings and nearly DOUBLES the frame rate according to Digital Trends ...

In a DF article, it states that FFXV on the X1X runs at the equivalent of the 'average' preset on PC. Once again, DT was nice enough to provide information about the performance of other presets, and if we take 1440p medium settings as our reference point, a 1060 nets ~40FPS, which makes an X1X at least neck and neck with it once you factor in the extra headroom used for dynamic resolution ...

In DF's side-by-side comparison of Wolfenstein II, the X1X runs at a lower bound of 1656p at a near-maximum preset equivalent on PC. A 1060 was nowhere near the X1X's performance profile on its best day, running at the lower resolution of 1440p on the max preset while still falling far short of the 60FPS target according to Guru3D. A 1080 is practically necessary to remove any doubt of delivering a lower-than-X1X experience, because an X1X is nearly twice as fast as a 1060 in the pessimistic case ...

I don't know why you picked Forza 7 for comparison when it's one of the more favourable titles for the X1X against a 1060. It pretty much matches PC at maximum settings while maintaining a perfect 4K/60FPS with a better-than-4x MSAA solution, while a 1060 can't even come close to maintaining a locked 60FPS at max settings on the PC side, going by Guru3D's reporting from another source ... (a 1070 looks bad as well when we look at the 99th percentile)

For The Witcher 3, given that the base consoles previously delivered a preset between PC's medium and high settings, I imagine DF would place the X1X resoundingly within high settings. From Techspot's data, seeing what a disaster The Witcher 3 at high settings and 4K is on a 980, I think we can safely say it won't end all that well for a 1060 even with dynamic res ...

With Fortnite, the game ran a little under 1800p on the X1X, while a 1060 ran moderately better at the lower resolution of 1440p according to Tom's Hardware. Both run the same EPIC preset, so they're practically neck and neck in this case as well ...
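The same pixel arithmetic (again my own figures, assuming standard 16:9 frames and taking the X1X at a flat 1800p) suggests the Fortnite matchup is less even on workload than the frame rates alone imply:

```python
# Pixel-count ratio between the two Fortnite resolutions mentioned above.
# Assumption: standard 16:9 frame sizes for both machines.

res_1800p = 3200 * 1800   # ~1800p on the X1X
res_1440p = 2560 * 1440   # 1440p on the 1060

ratio = res_1800p / res_1440p
print(f"1800p is {ratio:.2f}x the pixels of 1440p")
```

So at similar frame rates on the same EPIC preset, the X1X would be shading roughly 56% more pixels per frame than the 1060.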

There's not enough quality data collected about Dishonored 2 to really compare the X1X against PC ...

In Ubisoft's Tom Clancy's The Division, the X1X was running dynamic 4K in a range of 88-100%, and past analysis reveals it's on par with a 980 Ti! (The X1X had slightly higher settings, but the 980 Ti ran full native 4K.)

In Far Cry 5, even Alex from DF said he couldn't maintain X1X settings on either a 1060 or a 580 ...

Even in the pessimistic scenario, you give the 1060 waaay too much credit when an X1X is more than a match for it. Is a 1070 ahead of an X1X? Sure, I might give you that, since it happens often enough, but in no way would I place an X1X below a 1060, since that doesn't seem to happen much, if ever, when we take good sources of data into account ... (the future looks even darker for the 1060 with DX12-only titles)

Pemalite said:

Never argued anything to the contrary to be honest.

The Switch does have some pros and cons. It's well known that Maxwell is generally more efficient than anything Graphics Core Next provides in gaming workloads outside of asynchronous compute, but considering that the Xbox One and PlayStation 4 generally have more hardware overall, it's really a moot point.

The Switch is a neat kit of hardware alright, but it's too bad that it may very well be obsolete from an architectural standpoint soon, which puts a spanner in Nintendo's potential plans for backwards compatibility ...

Whatever quantity you arbitrarily decide for Sony would be at least double what MS would contract, so economies of scale and batch purchasing would still apply. Also, why would the contract establish a specific design instead of quantities of the chip powering the PS5 and its revisions down the line? Or do you think Sony would change from AMD to Nvidia mid-gen? Contracting a cadence of shipments over a two-year timespan doesn't seem unlikely either.

I never argued a timed contract of 2 years. I argued 30 million. If MS orders 15, which is likely, Sony wouldn't order 30 because of reasons I've already stated, though not directly responded to. Call my number arbitrary all you like. We know nothing about anything. Rumours are simply... rumours. This whole discussion is arbitrary.

"Also why would the contract establish a specific design instead of quantities of the chip powering PS5 and revisions along the line?"

Because signing off on a product that hasn't even finished R&D is... stupid. Even if they're involved in the process. No crystal balls are involved.

Do I think they're going to change to nVidia? Why even ask a question like that? Throwing some bait out? Idk... Anyway... I'm not the most informed on the topic but even I know such a drastic change in hardware would end them.

For what reason would Sony's order not be about double MS's when they sell about double?

So since you know they won't change supplier, they can sign an agreement with volumes even if the specifics of the hardware aren't defined and immutable.

Well, look at the launch of the current gen. The Xbox One sold well! Really well! It was the third-highest console launch ever at the time (if memory serves), in spite of Microsoft absolutely butchering their reveal, Sony's more capable, cheaper machine, and all the goodwill in the world.

Sony could have killed a puppy live on YouTube and still been applauded. In spite of this, the PS4 didn't double the Xbox One during the launch window.

Sony isn't going to have that luxury next time. While they're very likely to sell more next generation, it isn't written in stone. AMD would be fools to back one horse over the other when they already have both in their pocket.