9 things you need to know about the Nvidia RTX 2080 Ti and RTX 2080

September 14, 2018

Earlier this month Nvidia announced an impressive lineup of new Turing-based GeForce graphics cards in the form of the RTX 2080 Ti, RTX 2080 and RTX 2070. While we got some specs and even gaming time with the RTX 2080 Ti, there was still plenty we didn't know about these next-gen graphics cards – that is, until now.

Nvidia is finally allowing us to tell you almost everything about its new architecture: how Tensor and RT Cores fit alongside CUDA cores, new shading methodologies, yet another version of anti-aliasing, and how you can use the new NVLink connector to SLI multiple graphics cards together.

We're not going into the full breadth of what Turing means – you can visit our sister site Tom's Hardware for that. Instead, we're only going to focus on the 9 most important things that will change the game for you.

Where else could we start but Turing's headlining feature, ray tracing – a light-rendering process that's designed to mimic real-life lighting. In the same way a ray of light can strike an object to illuminate it and other objects around it, ray tracing simulates this phenomenon in a digital environment for more realistic and complex lighting.

Back in 1979, it took 1.2 hours to render a single 512 x 512 ray-traced image; now we can do ray tracing in real time. To achieve this, Nvidia RTX essentially simplifies the math.
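At its core, a ray tracer repeatedly asks whether a ray strikes an object. As a rough illustration of that single building block (a textbook sketch, not Nvidia's implementation), here's the classic ray-sphere intersection test in Python:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the first sphere hit, or None.

    Solves the quadratic |o + t*d - c|^2 = r^2 for the smallest t >= 0.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# A ray fired down the z-axis at a unit sphere 5 units away
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # hits at t = 4.0
```

Real-time ray tracing means running billions of tests like this one per second, which is why cutting down the number of them matters so much.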

Take a quick look at that bunny above and see how it's made of a seemingly countless number of triangles. Well, ray tracing requires calculating how light will hit and react to each of those triangles, and that requires a ton of calculations – which even the world's best graphics cards would struggle to accomplish. And that's just one object in a digital environment.

What Nvidia RTX does, instead of looking at thousands of triangles individually, is look at fewer triangles grouped into larger containers. This way, ray tracing can be applied to larger groups of surfaces at once – a structure called the "Bounding Volume Hierarchy."
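To illustrate the idea (a heavily simplified, one-level sketch – a real BVH is a tree of nested boxes), here's how one cheap bounding-box test can cull a whole group of triangles before any expensive per-triangle work happens:

```python
def aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does the ray enter the axis-aligned bounding box?"""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if not (lo <= o <= hi):
                return False  # parallel to this slab and outside it
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far and t_far >= 0

def count_triangle_tests(origin, direction, groups):
    """Each group is (box_min, box_max, triangles). Count how many
    triangles would still need a full intersection test."""
    tested = 0
    for box_min, box_max, triangles in groups:
        if not aabb_hit(origin, direction, box_min, box_max):
            continue  # the whole group is skipped with one cheap test
        tested += len(triangles)  # only these need per-triangle work
    return tested
```

A group the ray never approaches costs one box test instead of thousands of triangle tests – that's the entire trick.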

Now that we know the secret of ray tracing, we have to break it to you that not everything in games can be ray traced – at least not yet.

Instead, Nvidia applies ray tracing on some surfaces and objects in game, while a majority of the lighting and shadows is still created by industry standard methods.

So, for example you might see ray traced metal, windows and water in Battlefield V, but the illumination on basically every other surface is provided by traditional global illumination.

While this fabled light-rendering technology might be in the name of Nvidia's new graphics cards, the actual RT Cores found in each GPU operate almost completely separately from the billions of other transistors.

In fact, Nvidia has told us that RT Cores will be used almost exclusively for ray tracing.

What happens when the games you're playing or media you're watching involves no ray tracing? They basically just sit there and don't draw power.

Nvidia has told us that RT Cores might possibly also be utilized for audio processing and calculating physics, but we haven’t seen any game developers jump up and announce this yet.

While RT Cores might sit on the sidelines for a majority of games (at least for now), Tensor Cores play a much bigger role in the performance of Nvidia’s new Turing graphics cards.

In fact, they look to do as many calculations as the floating point and integer pipelines traditionally found in GPU architectures.

If there's anything AI is good for, it's deep learning, which in turn is spectacular at working with images and specifically anti-aliasing, which is all but necessary in today's world of high-resolution gaming. Anti-aliasing is essential because it removes the jagged edges and staircase effect caused by square pixels.

Going more into the technical side of things, anti-aliasing involves identifying two similar pixels and determining whether to combine them for a more uniform image. Until now, GPUs have typically handled the process by working through the same values in sequence; Tensor Cores, however, can take a complete array of values and compute them all at once.
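The difference between the two styles can be sketched with a toy one-row "image" (an analogy only, not Nvidia's actual pipeline – the edge-blend here is just a simple neighbour average):

```python
# A scan line with a hard 0 -> 255 edge (one row of a jagged boundary).
row = [0, 0, 255, 255]

# Sequential style: visit one pixel at a time, one small step per pixel.
blended_seq = []
for x in range(len(row)):
    nxt = row[x + 1] if x + 1 < len(row) else row[x]
    blended_seq.append((row[x] + nxt) / 2)

# Whole-array style: the same blend expressed as a single pass over
# every pixel pair at once -- the shape of work tensor hardware favors.
blended_all = [(a + b) / 2 for a, b in zip(row, row[1:] + row[-1:])]

assert blended_seq == blended_all
print(blended_all)  # [0.0, 127.5, 255.0, 255.0] -- the hard edge is softened
```

Both produce the same result; the point is that the second form processes the whole array in one operation instead of one value at a time.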

Nvidia claims that with this method, Turing is eight times faster at processing anti-aliasing than Pascal.

Not only are Tensor Cores good at anti-aliasing, they’re also spectacular at increasing resolution, aka super sampling.

In fact, Nvidia is introducing a new version of the graphics option known as Deep Learning Super Sampling. Nvidia calls it a breakthrough in high-quality motion image generation, as the technique super-samples and anti-aliases at the same time.

At the RTX launch event we saw a demo of Epic's Infiltrator, in which a DLSS 2X version of the experience running on an Nvidia RTX 2080 Ti was able to match the quality of a 64X super-sampled render with temporal anti-aliasing generated by an Nvidia GTX 1080.

Now, that’s not exactly a fair fight – the Nvidia GTX 1080 Ti would have been a better matchup – but it’s impressive nonetheless. 64X super-sampling typically requires rendering each pixel 64 times, shifting each sample slightly, to produce a larger, smoother image.
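The brute-force mechanics behind that kind of super-sampling can be sketched like this (a conceptual toy with a made-up `shade` function – nothing to do with the DLSS network itself):

```python
import random

def shade(x, y):
    """Toy scene: bright to the left of x = 0.5, dark to the right."""
    return 255.0 if x < 0.5 else 0.0

def supersample_pixel(px, py, samples=64, seed=0):
    """Average `samples` slightly shifted (jittered) shades per pixel."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        # Jitter the sample point somewhere within this pixel's footprint.
        total += shade(px + rng.random(), py + rng.random())
    return total / samples

# A pixel straddling the edge averages out to a mid-grey value,
# while pixels far from the edge stay pure bright or pure dark.
print(supersample_pixel(0, 0))
```

Doing this 64 times per pixel is exactly the cost DLSS aims to sidestep by having a trained network infer the smoothed result instead.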

Meanwhile, Nvidia tells us that DLSS 2X can create sharper images than TAA can, which we can attest to from our demo experience.

Now we’ve only been talking about bits and pieces of the Turing architecture, but here it is in all its glory.

The Turing architecture, or specifically the TU102 version, includes 18.6 billion transistors, which is a significant increase from the 15 billion transistors found on Pascal.

Part of this comes from Nvidia moving from a 16nm process on Pascal to a 12nm one with Turing, but credit also goes to those new Tensor and RT Cores.

Speaking of which, the TU102 lists 72 RT Cores and 576 Tensor Cores, but these won’t be the same counts you’ll find on all Nvidia Turing GPUs. For example, Nvidia has detailed that the RTX 2080 Ti will feature only 68 RT Cores.

This new architecture also comes with a load of expanded capabilities, such as now being able to support 8K 60fps HDR displays. What’s more, HDR on Nvidia’s cards is now handled natively, whereas Pascal cards had to rope in extra processing that caused a noticeable dip in performance.

In terms of connecting to those 8K 60fps HDR displays, users will find the usual three DisplayPorts on most cards plus a new USB-C port. This is specifically a USB-C 3.1 Gen 2 port that supports UHD video and is able to pipe out 27 watts of power – which will come in handy for powering VR headsets with just one cable.

Also gone is the old-school high-bandwidth bridge previously used to connect multiple Nvidia graphics cards in SLI. Instead you’ll find a new NVLink connector that supports 50GB/s of dedicated bandwidth per GPU. The Nvidia RTX 2080 Ti can deliver double the bandwidth (100GB/s total) because it has two NVLink connectors.

Unfortunately, it seems the maximum number of GPUs you can SLI will remain at just two. GeForce RTX NVLink Bridges from Nvidia will start at $79 (about £60, AU$110).

Instead of users having to walk their GPU up through increasing clock speeds with pass/fail tests in a time-consuming process, NV Scanner introduces an automatic one. This solution walks the GPU along a very precise micro-voltage curve to maximize performance.

At the same time, it runs a mathematical test (comprised of a test algorithm and an Nvidia-created workload) to look for errors while overclocking.
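The overall shape of such an automated scan (purely simulated here – this is not NV Scanner's actual API, and the stability check is a stand-in) might look like:

```python
def stability_test(clock_offset_mhz):
    """Stand-in for the test workload: returns True if the GPU produced
    error-free results at this offset. Simulated: this pretend card
    tops out at a +160MHz offset."""
    return clock_offset_mhz <= 160

def scan_for_max_clock(step_mhz=15, limit_mhz=300):
    """Walk the clock offset upward until the workload starts erroring,
    then report the last known-stable offset."""
    stable = 0
    offset = step_mhz
    while offset <= limit_mhz:
        if not stability_test(offset):
            break  # errors detected: stop and keep the previous offset
        stable = offset
        offset += step_mhz
    return stable

print(scan_for_max_clock())  # highest error-free offset found: 150
```

The automation simply replaces the tedious manual loop of "bump the clock, run a benchmark, check for artifacts" that overclockers have always done by hand.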

Nvidia is working with partners to incorporate the API into popular overclocking applications like EVGA Precision X1 and MSI Afterburner.

The Founders Edition versions of the Pascal GPUs were already a step away from Nvidia’s traditional reference cards, and now the company’s self-produced versions of these RTX graphics cards are even more remarkable.

We can’t not talk about Nvidia introducing dual axial fans; not only is this the first time Nvidia has stepped away from a blower-style cooler for its own graphics cards, it also makes the company even more competitive with its board partners (i.e. Asus, Gigabyte, etc).

That said, Nvidia expects its new Founders Edition cards to be much quieter with this new dual-fan system. According to the company, the RTX 2080 will be five times quieter when overclocked, at only 29dBA, as opposed to the GTX 1080, which reached 36dBA under the same conditions.

The RTX 2080 Ti will also be the first Founders Edition card with a 90MHz factory overclock, and Nvidia is touting that users will be able to push an additional 60W to overclock the RTX 2080 Founders Edition.

That’s about all we can tell you about the Nvidia GeForce RTX 2080 Ti and RTX 2080, but be sure to stick around TechRadar as our review on both cards will be coming next week.