DLSS has a lot of potential in high-end graphics workloads

Nvidia's DLSS technology is an impressive innovation, combining the power of deep learning with the company's custom Tensor cores to deliver high-resolution images at higher framerates, and with less aliasing, than their natively rendered counterparts.

While DLSS' image quality has been hotly debated online, it cannot be denied that the technology delivers an incredible performance boost over native-resolution rendering. Many analysts have also noted that DLSS' final output is often better than a native-resolution image combined with TAA, delivering a sharper result with more perceivable image detail in many cases. While DLSS upscales lower-resolution images to achieve its high-resolution output, the results Nvidia has showcased so far are impressive.

DLSS has arrived in 3DMark in the form of the application's "Nvidia DLSS Feature Test", which users have already customised to see how DLSS impacts the performance of UL Benchmarks' 3DMark Port Royal test scene at a full 4K resolution. By default, 3DMark Port Royal runs at 1440p, with 4K more than doubling the pixel count of the already demanding benchmark.
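As a rough sketch of why the 4K test is so much heavier, the pixel counts alone tell the story (simple arithmetic, not figures taken from the benchmark itself):

```python
# Pixel-count arithmetic behind the 1440p -> 4K jump.
qhd = 2560 * 1440   # 1440p: 3,686,400 pixels
uhd = 3840 * 2160   # 4K:    8,294,400 pixels

scale = uhd / qhd   # 2.25x the pixels to shade, all else being equal
print(scale)
```

All else is rarely equal in practice, but per-pixel shading work scales broadly with resolution, which is why the 4K run is such a heavy load.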

In this 4K stress test, Nvidia's DLSS technology has been found to deliver a performance boost of almost 94%, a larger uplift than DLSS provides in the benchmark's native 1440p mode.

This highlights one of the major benefits of DLSS, as its upscaling not only reduces the amount of number crunching that's required to create a final image, but also significantly decreases the memory bandwidth requirements of the benchmark. In this case, that allows the RTX 2080 Ti to deliver a performance uplift that's larger than the reduction in GPU core compute alone would suggest.
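To see why the bandwidth side matters, here's a back-of-the-envelope sketch of per-frame framebuffer traffic. The bytes-per-pixel and render-target figures are illustrative assumptions, not measured values from Port Royal:

```python
# Rough framebuffer traffic per frame: rendering internally at 1440p instead
# of 4K cuts pixel traffic by the same factor as the shading work.
BYTES_PER_PIXEL = 8    # assumed: a 16-bit-per-channel RGBA render target
RENDER_TARGETS = 3     # assumed: a handful of G-buffer/intermediate targets

def frame_bytes(width, height):
    """Approximate bytes written per frame across the assumed render targets."""
    return width * height * BYTES_PER_PIXEL * RENDER_TARGETS

native_4k = frame_bytes(3840, 2160)       # ~199 MB per frame
internal_1440p = frame_bytes(2560, 1440)  # ~88 MB per frame
print(native_4k / internal_1440p)         # 2.25 -> bandwidth scales down too
```

The absolute numbers depend entirely on the engine, but the ratio is the point: a lower internal resolution shrinks per-frame memory traffic by the same factor as the pixel count.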

Similar results can be seen in the standard 3DMark DLSS feature test, with Nvidia's RTX 2060 offering the largest boost in overall performance within the benchmark. This is likely due to the GPU's similar compute performance to the RTX 2070 and its significantly reduced memory bandwidth, a result of its 25% narrower memory bus.
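That bus-width gap translates directly into peak bandwidth. Using the publicly listed 14 Gbps GDDR6 data rate shared by both cards, a quick sketch:

```python
# Peak memory bandwidth = (bus width in bytes) x effective data rate.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

rtx_2070 = bandwidth_gbs(256, 14)   # 448.0 GB/s on a 256-bit bus
rtx_2060 = bandwidth_gbs(192, 14)   # 336.0 GB/s on a 192-bit bus
print(1 - rtx_2060 / rtx_2070)      # 0.25 -> the 25% deficit
```

With compute throughput close to the RTX 2070's but a quarter less bandwidth to feed it, the RTX 2060 stands to gain the most when DLSS lightens the memory load.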

Even with the bandwidth offered by GDDR6 memory, high-end graphics performance can still be limited by VRAM performance. That said, any game developer worth their salt will build their games around the memory performance offered by the hardware of the time, limiting VRAM bottlenecks to a relatively small number of games at ultra-high resolutions. This is why memory bandwidth-related issues are more often than not limited to benchmarks and tech demos, at least on flagship-level graphics cards like the RTX 2080 Ti.

While Nvidia's DLSS technology requires a lot of deep learning training on Nvidia's side and integration effort from game developers, it offers a lot of potential to deliver graphically impressive real-time experiences at higher performance levels than are possible with traditional rendering techniques.

DLSS may not be implemented in many games at the time of writing, but there is a clear reason why developers have expressed more interest in DLSS than in DXR ray tracing, at least since Nvidia's RTX 20 series was announced back at Gamescom 2018.
