The GeForce Turing merry-go-round is in full flow once again. Nvidia has officially registered the ‘Turing’ trademark, providing our heaviest hint yet that this will indeed be the new graphics card architecture for Nvidia’s next-gen gaming chips.

Secondary to this, and perhaps more interestingly, Nvidia also applied for a pair of additional trademarks for ‘Quadro RTX’ and ‘GeForce RTX’. That’s no typo: we could, in fact, be set to see Nvidia abandon the GeForce GTX range entirely in favour of GeForce RTX. This would no doubt be in reference to its real-time ray tracing capabilities and would also serve as a neat cut-off from the previous-gen chips.

Where this steps up a gear is with YouTube channel AdoredTV, who claims to have been contacted by an Nvidia source that he cannot disclose. Take it with a pinch of salt, but the source allegedly provided details on the nomenclature for Nvidia’s upcoming GPUs, suggesting the first GPUs will be the Titan RTX and the GeForce RTX 2080, followed by a GeForce RTX 2070 later in the year.

In terms of performance, the same source suggests the Titan RTX will run 15% faster than the current Titan V, and 50% faster than the GeForce GTX 1080 Ti. It will allegedly launch for $3000. Woe betide our wallets.

For those who are marginally thriftier, the GeForce RTX 2080 will offer performance that’s around 8% faster than the current champ, the GeForce GTX 1080 Ti, and 50% faster than the GeForce GTX 1080. It’ll carry a price tag of $500-700, which seems a little optimistic.

At the moment, aside from the trademarks, this is total conjecture. We’re going almost entirely off the word of AdoredTV’s trusted source. We’ll know exactly how trustworthy the channel and the source are come August 20th, when Nvidia is hosting its GeForce Gaming Celebration. It’s expected this is the day we finally find out what GeForce Turing is, once and for all.

What do you think, could we be set to see a series rebrand from Nvidia? Would the RTX name be a good fit for Turing's ray-traced lighting capabilities? Let us know in the comments section below!

The GTX 1080 and 980, and according to this rumor the RTX 2080, are all mid-range chips. The fact that they are priced so high is because Nvidia has been operating on a 100%-and-above profit margin since 2016, with the margin being 134% this year. Also, people are fooled by the name ending in 80: up to and including the GTX 580, those were indeed high-end chips, along with the GTX 780.

Yes, but I'm talking about the GTX XX80 non-Ti. The GTX 980 Ti is what the GTX 980 should have been, and the GTX 980 should have been a GTX 970 in terms of prices at the time. The 980 Ti should have cost $550 at launch and the GTX 980 should have cost $350 for Nvidia to work on a good, solid profit margin, but nope… The GTX 970 could easily have been $300, as that is practically pure profit considering that GTX 970s are partially failed 980s that would be thrown away otherwise.

And that's because people are ignorant… if people gave a shiz, a single shiz, things could go back to pre-2011 GPU pricing, with GPUs selling for what they are worth, at a profit margin that the companies NEED and NOT at a profit margin that the companies WANT…

Also, my advice if you really want to learn something more than shallow facts: read books. There are great books on hardware architecture and design, transistor logic, integrated circuits, the cost of building them and so on. I've read quite a few myself while self-educating (the only way of educating oneself) to be a hardware architect and designer, but then realized that to work in that field I'd have to go to the US or China… ouch… otherwise I'd be designing boring-arse micro-chips for vending machines and such :/

So now I code and I read 2-4 books a season, except this summer, because I got tired of math and university entrance exams…

Nah. AMD is operating on a 50% profit margin and was the first to overprice its GPUs, even when there was competition. The HD 7970 came out in late 2011 costing $550, when the HD 6970, a bigger-die GPU, cost $350 just a year before that in 2010.

you're comparing a 10% smaller die with 30% smaller transistors to its predecessor. if they were both made on the same fab process, the 7970 would be larger, with a higher density. that's a high-end gpu by any math.

Exactly, because the HD 7970 is on 28nm it has a smaller die, and die size is what matters… -_- AMD and Nvidia pay for wafers and NOT for transistors, thus they pay for the die size of a chip. A 300mm^2 28nm chip with a 10% failure rate costs as much as a 300mm^2 32nm chip with a 10% failure rate, considering that wafer prices since 2009-2010 scaled with inflation until 2018, and even now they still haven't seen the price increase they were said to be getting.
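To make that argument concrete, here is a minimal back-of-the-envelope sketch of the wafer-cost reasoning above. The wafer price, diameter and yield figures are illustrative assumptions (not numbers from this thread); the point it demonstrates is that, at a fixed wafer price, cost per chip tracks die area and yield rather than the process node.

```python
import math

# Back-of-the-envelope die-cost model: vendors buy wafers, so per-chip cost is
# driven by die area and yield, not transistor count. All numbers below
# (300mm wafer, $5,000 wafer cost, 90% yield) are illustrative assumptions.

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: wafer area / die area, minus an edge-loss term."""
    usable = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(usable - edge_loss)

def cost_per_good_die(wafer_cost_usd: float, wafer_diameter_mm: float,
                      die_area_mm2: float, yield_fraction: float) -> float:
    good_dies = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_fraction
    return wafer_cost_usd / good_dies

# A 300mm^2 die with a 10% failure rate comes out at the same per-chip cost
# whether it is a 32nm or a 28nm design, as long as the wafer price is the same.
for node in ("32nm", "28nm"):
    cost = cost_per_good_die(wafer_cost_usd=5000, wafer_diameter_mm=300,
                             die_area_mm2=300, yield_fraction=0.90)
    print(f"{node}: ~${cost:.0f} per good 300mm^2 die")
```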

so what you are saying is that with a smaller node, they should increase the transistor count to keep the die size big? a similar failure rate but a higher number of dies per wafer does not automatically mean lower cost for consumers, although that's what we want. what a smaller node does do is give more options for the total number of gpus. a non-failed die does not mean it's a complete die, so you get different versions of the same die; your example had 5 different versions. since you want to talk about fabrication, you should know right away that a process change means higher failure rates until the fabrication matures. in fact, that is always said when there are new nodes.

you think that because they get more dies per wafer and each wafer costs the same, each die should be cheaper. but amd worked on improving the architecture over time, which costs money, so new dies have to recoup that cost. a company works with an operating cost. companies don't sit still while gpus are being sold and then cram for the new gpu at the last minute. if you focus on one thing, your argument fails because you don't have the full picture. your argument on die size will always fail.

Nvidia's GPU department operates at a 134% profit margin this year; they have been operating at 100% and above since Pascal launched, especially with the Founders Edition BS…

Look, if they keep the transistor count the same across process nodes then the performance will stay the same; they barely improve the architecture from generation to generation. Hell, AMD has barely improved anything since GCN, and that was technically a downgrade from their previous architecture, but it was much easier to develop for and much cheaper to research and design, and since Nvidia was SIMD and AMD was VLIW, they changed to SIMD too.

And wafer costs are reported, and the price of wafers has been consistent, since R&D and production costs have barely gone up since 2009 and have basically scaled with inflation. A rise in cost was expected this year, but no word on it so far.

And there aren't always high failure rates when the process node is new. Pascal had a less than 20% failure rate from the get-go… Fermi, over the course of its two generations (GTX 400 and GTX 500), had failure rates between 30% and 40%…

And talking about AMD's R&D budget, they admitted that since GCN 1.0 in 2011 they have been cutting it, and it is now at its lowest in a decade, but will be ramped up again. They thought that by 2020 everything would be integrated and dedicated GPUs wouldn't be relevant for most consumers… they were wrong…

Nvidia spent little R&D on Pascal, as they were prioritizing the "Tensor" architecture, Volta and Turing… Pascal is super similar to Maxwell, except that they split the SMMs in two, leaving fewer CUDA cores per SM, and added FP64 cores… -_-

Pascal, by all means, was a low-budget architecture, with low failure rates from the get-go and small dies sold at big prices, resulting in huge profit margins: 100% and more, and 134% this year…

And the prices have always covered all the costs of the GPUs: R&D, chip cost, PCB costs and every other related or unrelated expense, plus profit… -_- Like, which company doesn't account for R&D costs?

So to sum it all up, Pascal and Polaris were extremely cheap (relative to previous architectures) to research and develop, had low failure rates, and were built on wafers whose price has been scaling with inflation… and both AMD and especially Nvidia sold them at huge profit margins… -_-

And on top of that they have been selling the HD 7000 and GTX 600, the R9 300 and GTX 900, and now the GTX 1000 and RX 400/500, at high profit margins. The GTX 700 and R9 200 series were actually quite well priced: the R9 290, a 438mm^2 GPU, was sold at a $390 MSRP, while something like the HD 7970, a 352mm^2 GPU, was sold at $550 just a year earlier, both on the same architecture… The GTX 680, a 294mm^2 GPU, was sold in 2012 for a $500 MSRP, while the GTX 780, a 651mm^2 GPU, cost $650 for a 2.25x bigger die, which makes sense, again on the same architecture.
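For illustration, here is a quick price-per-die-area comparison using the die sizes and MSRPs exactly as quoted in the comment above (these are the commenter's figures, not verified specs), as a sketch of the argument rather than an authoritative costing:

```python
# Rough $/mm^2 comparison using the die sizes (mm^2) and MSRPs ($) quoted in
# the comment above; the figures are the commenter's own, not verified specs.
gpus = {
    "R9 290":  (438, 390),
    "HD 7970": (352, 550),
    "GTX 680": (294, 500),
    "GTX 780": (651, 650),
}

for name, (die_mm2, msrp_usd) in gpus.items():
    print(f"{name:8s} {msrp_usd / die_mm2:.2f} $/mm^2")
```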

The R&D from Kepler to Kepler was, guess what? Next to none; they were developing Maxwell in that time. The R&D from GCN 1.0 to GCN 1.1 was next to none too, and they were working on cutting the GPU R&D budget at the time… -_-

you have a lot of wikipedia knowledge but nothing else. when you can show white papers and facts, and not wiki "facts" and opinion, then you might be credible. but you are getting simple facts wrong (651mm^2 is wrong, since it's actually 561mm^2, but you used that and then used your math to vouch for it, so it's not a simple fat finger/oversight). you are taking the fact that nvidia has had a huge increase in profit to mean that the chips they make are being overpriced. but you aren't including anything else they make, or that the gtx gpus have stuff disabled and thus the tesla/quadro are full-optioned chips for their specific model. let's not forget that each company does stuff other than make consumer gpus.

nvidia also does research on cpus, and has done so for many years, along with a whole division based around software building, excluding drivers. amd is a company that was fighting two companies while bleeding money, and all you care about is that their process shrink meant the big chips became small chips and thus required a requisite drop in price too. business isn't your strong point: businesses take their strong points and make money to feed the areas that aren't there to make money but provide additions to the hardware that is bought. tesla/quadro have certified drivers that aren't allowed the freedom that the gtx driver gets.

software costs money while the hardware profits feed it. there is research beyond the next gpu. how do you think amd is also getting zen 2 and 3 going? it takes time and people, both of which cost money. if the gpu side is working at 100%+ profit and the cpu side is bleeding money, then why would they cut prices? if the company wants to expand and they have the ability to do so because the market allows it, then who are you to decry it? blame the game, not the player.

I made a typo; my math is for 561mm^2… And you are saying that until now they had lower profit margins and now suddenly they got a big boom in Quadro sales, when that's not the case, as professional GPU sales have been on the decline in recent years… Nvidia's research on CPUs is minimal; they haven't even produced an ISA… or done anything substantial on existing ISAs. And AMD operates at a lower than 100% profit margin; that's just Nvidia. AMD is at about 40-50% for its GPU division since the RX 400/500 series.

And the profit margin for Nvidia's GPU division includes the software developers and all software-related costs; it also includes all R&D costs and all production costs… not just the GPU production itself… -_-

And when they release official documents, then I will give you official documents. I love how people love to dismiss anything that is not handwritten about their favorite companies…

And yes, true, business isn't a strong point of mine YET, but as far as I know, a 134% profit margin is not a good thing for consumers, and I'm a consumer… -_-

None of Nvidia's GPUs are worth it… Nvidia's GPU division operates at a 134% profit margin this year and has been operating at 100%+ since 2016… For reference, Apple operates on a 40% profit margin; that means Nvidia is more than 3x (three times) more overpriced than Apple, and Apple is considered by the populace to be the most overpriced thing there is in technology.
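A minimal sketch of the arithmetic behind the "3x more overpriced" claim, assuming the quoted percentages are markup over cost (the comment does not define the term) and using an illustrative $100 cost base:

```python
# Treat the quoted "profit margin" figures as markup over cost (an assumption);
# the $100 cost base is purely illustrative.
cost = 100.0
nvidia_markup = 1.34   # 134%, as quoted in the comment
apple_markup = 0.40    # 40%, as quoted in the comment

nvidia_price = cost * (1 + nvidia_markup)   # $234 on a $100 cost base
apple_price = cost * (1 + apple_markup)     # $140 on a $100 cost base

# The "3x more overpriced" claim compares the markups directly:
print(nvidia_markup / apple_markup)   # ~3.35
```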

I am a very famous Youporner myself, 1.5 billion subs. My father is boss at Nvidia and he told me it will be the UGTX 1K80Ti (the Ultra TI comes later) for 100 bucks. It will feature 50K CUDA cores and 1 TB HBM 2. Ready for 16K240

Yes you have my word on that!!! I 100% do that not for subs or any money or fame … And yes Gaben is my Grandfather

GD, you suck for spreading rumours all over the place. The amount of misinformation today's society spreads is HUGE! If done properly, you could probably sell humanity on the idea that an alien invasion is on its way…

If the performance increase is true, I'll probably wait for the 2080, then. I'll wait to see how the AIB cards are (Asus, MSI, etc.). I've thought about seeing how much the price drops for the 1080 Ti, but hopefully the 2080 isn't too ridiculously priced. I just need a little extra performance to help me get some more frames @1440p. Other than that, the 1070 has been a wonderful GPU, probably the best I've ever owned. But I got it when I had a 1080p monitor, and now that I'm @1440p, I need that little bit of extra performance. The 1070 was perfect for 1080p.

I mean that the 1070 was overkill for a lot of games @1080p, and I probably wouldn't have considered upgrading at all if I'd kept my old monitor. Of course, now that I've tasted a higher resolution, I can't go back ;P as that's always the case. And the 1070 is great for 1440p in many games, but the more demanding ones really took a hit to the framerate. G-Sync helps, but I like to keep it above 60FPS, and the 1070 struggles with some settings maxed. So, I'm excited to see what's coming up, though I'm sure the price will make my wallet weep.