A recap on Nvidia's ray tracing behemoth

Highly anticipated: After what feels like an eternity of rumors, speculation, and blind hope, Nvidia has all but officially announced that its next graphics card, the GeForce RTX 2080, will be unveiled at Gamescom on August 20. In the run-up to the hotly anticipated event, here’s a recap of what we know—along with some of the latest rumors surrounding the GPU.

Update (8/20): Tune in to watch Nvidia's founder and CEO, Jensen Huang, kick off a special event in Cologne, Germany. Gamescom 2018 runs there from August 21-25, but before the show begins, Nvidia will be hosting an event where we expect the new GeForce RTX 2080 GPU to be unveiled. The keynote will be livestreamed on Twitch and Facebook. We also have it here (watch above), so you can follow the announcement while catching up on all your tech news.

The original post with a recap on Nvidia Turing follows below:

Nvidia revealed its new Turing architecture at SIGGRAPH last week, but gamers wanted to know what this meant for the company’s GeForce line. Nvidia obliged by releasing a trailer containing enough hidden clues to keep Sherlock Holmes busy: we see mention of the usernames “Mac-20” and “Eight Tee” (20-80, see), RoyTex (RTX), Not_11, and Zenith20 (20 series). There are also coordinates for a location in Cologne (Gamescom’s home), another 2080 hint in the way the date appears at the end, and a user called AlanaT (Alan Turing).

So, while there’s a small chance that Nvidia will keep using the GTX name alongside the RTX cards, it appears that RTX is replacing the long-used GTX branding.

We still don’t have any official performance information on the RTX 2080, but what we do know from the SIGGRAPH reveal is that ray tracing will be a star feature—the RTX name stands for real-time ray tracing.

A simple explanation of ray tracing is that it’s a rendering process that traces the path of light rays through a scene to create realistic shading, reflections, and depth of field. Doing this in real time while maintaining an acceptable frame rate requires a massive amount of power, which is where the Turing architecture’s RT cores help. As you can see in the video above, the results can be spectacular.
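To make the idea concrete, here’s a deliberately tiny sketch of the core loop—this is a toy illustration of the technique, not Nvidia’s implementation or anything RT cores actually run. For each pixel, a ray is fired from the camera, tested against a single sphere, and the hit point is shaded with simple diffuse (Lambertian) lighting. All scene values (sphere position, light direction, image size) are made up for the example.

```python
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def normalize(v):
    n = math.sqrt(dot(v, v))
    return (v[0]/n, v[1]/n, v[2]/n)

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None."""
    oc = (origin[0]-center[0], origin[1]-center[1], origin[2]-center[2])
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius*radius
    disc = b*b - 4*c          # direction is unit length, so a == 1
    if disc < 0:
        return None           # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width=16, height=16):
    """Trace one ray per pixel; return rows of brightness values 0-9."""
    center, radius = (0.0, 0.0, -3.0), 1.0   # one sphere in front of the camera
    light = normalize((1.0, 1.0, 1.0))       # direction toward the light
    rows = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map the pixel onto an image plane at z = -1.
            x = (2*i/(width-1)) - 1
            y = 1 - (2*j/(height-1))
            ray = normalize((x, y, -1.0))
            t = hit_sphere((0.0, 0.0, 0.0), ray, center, radius)
            if t is None:
                row.append(0)                # background: black
            else:
                p = (ray[0]*t, ray[1]*t, ray[2]*t)          # hit point
                n = normalize((p[0]-center[0], p[1]-center[1], p[2]-center[2]))
                row.append(int(max(dot(n, light), 0.0) * 9)) # diffuse shade
        rows.append(row)
    return rows
```

A real-time renderer does this millions of times per frame, with rays that also bounce off surfaces to gather reflections and shadows—which is exactly the workload Turing’s dedicated RT cores are built to accelerate.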

The all-important price is another unknown element of the RTX 2080. A poster on Baidu recently claimed that it would be as low as $649. While that is a nice thought, it seems pretty unlikely to be so (comparatively) cheap. The same person also claims the card will have 3,072 CUDA cores running at 1,920MHz with a boost clock of 2,500MHz, though a base clock of around 1.7GHz with a 1.95GHz boost is more likely. The GTX 1080, for comparison, comes with 2,560 CUDA cores and a boost clock of 1,733MHz.

We do know that Turing GPUs are being built on the 12nm FinFET process and will use GDDR6 memory. The RTX 2080 is expected to come with between 8GB and 16GB of 14Gbps GDDR6 and have a TDP of 170W to 200W. Peak FP32 compute performance, meanwhile, is thought to be around 12.2 TFLOPS, putting it slightly above the GTX 1080 Ti’s 11.3 TFLOPS of FP32 compute.
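For the curious, these TFLOPS figures follow from a standard rule of thumb rather than benchmarks: each CUDA core can retire one fused multiply-add (two floating-point operations) per clock, so peak FP32 throughput is cores × boost clock × 2. A quick sketch, using the published GTX 1080 Ti figures as a sanity check and the rumored RTX 2080 numbers from above:

```python
def peak_fp32_tflops(cuda_cores, boost_ghz):
    # One fused multiply-add (2 FLOPs) per core per clock:
    # TFLOPS = cores x boost clock (GHz) x 2 / 1000
    return cuda_cores * boost_ghz * 2 / 1000

# GTX 1080 Ti: 3,584 cores at a 1,582MHz boost
print(round(peak_fp32_tflops(3584, 1.582), 1))  # ~11.3, matching the known figure

# Rumored RTX 2080: 3,072 cores at a ~1.95GHz boost
print(round(peak_fp32_tflops(3072, 1.95), 1))   # ~12.0, close to the ~12.2 estimate
```

Note this is a theoretical peak; real-world performance depends on memory bandwidth, architecture efficiency, and the workload itself.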

Thankfully, we only have a few days left before Nvidia reveals all about the RTX 2080.