GeForce 20 series

The GeForce 20 series is a family of graphics processing units developed by Nvidia. It was announced at Gamescom on August 20, 2018,[4] and started shipping on September 20, 2018.[5] Serving as the successor to the GeForce 10 series,[6] the 20 series marked the introduction of Nvidia's Turing microarchitecture and the first generation of RTX cards, the first in the industry to implement real-time hardware ray tracing in a consumer product. On July 2, 2019, the GeForce RTX Super line of cards was announced, comprising higher-spec versions of the 2060, 2070 and 2080. In a departure from Nvidia's usual strategy, the 20 series does not have an entry-level range, leaving the 16 series to cover that segment of the market.

The RTX 20 series is based on the Turing microarchitecture and features real-time hardware ray tracing.[7] The cards are manufactured on an optimized 16 nm node from TSMC, named 12 nm FinFET NVIDIA (FFN).[8] This real-time ray tracing is accelerated by the use of new RT cores, which are designed to process quadtrees and spherical hierarchies, and speed up collision tests with individual triangles.
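The kind of ray–triangle collision test that the RT cores accelerate in hardware can be illustrated in software. The following is a minimal sketch of the standard Möller–Trumbore intersection algorithm; the function name and values are illustrative and do not represent Nvidia's hardware implementation:

```python
def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore: return the distance t along the ray to the
    triangle, or None if the ray misses. Points are (x, y, z) tuples."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:            # ray is parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:      # outside the first barycentric bound
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:  # outside the triangle
        return None
    t = f * dot(e2, q)          # distance along the ray to the hit point
    return t if t > eps else None
```

A ray cast from (0, 0, −1) along +z toward a triangle in the z = 0 plane returns t = 1.0; a ray offset far to the side returns None. A GPU performs billions of such tests per frame, which is why hardware acceleration (and the tree structures that prune most candidate triangles) matters.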

The ray tracing performed by the RT cores can be used to produce effects such as reflections, refractions, shadows, depth of field, light scattering and caustics, replacing traditional raster techniques such as cube maps and depth maps. Instead of replacing rasterization entirely, however, ray tracing is offered in a hybrid model, in which the information gathered from ray tracing can be used to augment the rasterized shading for more photo-realistic results. The limited number of games that supported ray tracing at the time of the cards' launch led to some controversy.[citation needed]

The second-generation Tensor cores (succeeding Volta's) work in cooperation with the RT cores, and their AI features serve two main purposes: first, de-noising a partially ray-traced image by filling in the gaps between cast rays; second, DLSS (deep learning super sampling), a new method to replace anti-aliasing that generates detail to upscale the rendered image to a higher resolution.[9] The Tensor cores apply the results of deep learning techniques run on supercomputers: a problem to be solved, such as increasing the resolution of images, is analyzed on a supercomputer, which is taught by example what results are desired and determines a method to achieve them; that method is then executed on the consumer's Tensor cores. These methods are delivered to consumers as part of the cards' drivers.
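For contrast with DLSS's learned reconstruction, a conventional non-learned upscaler only replicates or interpolates existing pixels and cannot add new detail. A minimal nearest-neighbour sketch (illustrative only, not related to any Nvidia code):

```python
def upscale_nearest(img, factor):
    """Scale a 2D image (a list of rows of pixel values) by an integer
    factor, duplicating each pixel into a factor-by-factor block."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]  # stretch horizontally
        out.extend(list(wide) for _ in range(factor))   # repeat vertically
    return out
```

Upscaling a 2x2 image by a factor of 2 yields a 4x4 image containing only copies of the original four values; a learned upscaler instead infers plausible high-resolution content from patterns seen during training.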

Turing also features dedicated integer (INT) cores for concurrent execution of integer and floating-point operations.[12]

Nvidia segregates the Turing GPU dies into A and non-A variants, indicated by an "A" suffix appended to (or omitted from) the hundreds part of the GPU code name. Non-A variants are not allowed to be factory overclocked, whilst A variants are.[13]

The GeForce 20 series was launched with GDDR6 memory chips from Micron Technology. However, due to reported faults with launch models, Nvidia switched to using GDDR6 memory chips from Samsung Electronics by November 2018.[14]

With the GeForce 20 series, Nvidia introduced the RTX development platform. RTX uses Microsoft's DXR, Nvidia's OptiX, and Vulkan for access to ray tracing.[15] The ray tracing technology used in the RTX Turing GPUs was in development at Nvidia for 10 years.[16]

All of the cards in the series are PCIe 3.0 x16 cards, manufactured on TSMC's 12 nm FinFET process, and use GDDR6 memory (initially Micron chips at launch, then Samsung chips from November 2018).[14]

^Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.

^Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.
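The two footnote formulas above can be expressed directly in code. The sketch below is illustrative: the ROP, TMU and base-clock figures in the example roughly match the RTX 2080's published reference specification, while the per-rasterizer and per-SM fragment throughputs are placeholder assumptions, not confirmed values:

```python
def pixel_fillrate(rops, rasterizers, frags_per_rasterizer,
                   sms, frags_per_sm, base_clock_ghz):
    """Pixel fillrate in gigapixels/s: the lowest of the three
    per-clock pixel throughputs, multiplied by the base clock."""
    bottleneck = min(rops,
                     rasterizers * frags_per_rasterizer,
                     sms * frags_per_sm)
    return bottleneck * base_clock_ghz

def texture_fillrate(tmus, base_clock_ghz):
    """Texture fillrate in gigatexels/s: TMUs times the base clock."""
    return tmus * base_clock_ghz
```

For example, with 64 ROPs and a 1.515 GHz base clock the ROP term is the bottleneck, giving roughly 97 GP/s, while 184 TMUs at the same clock give about 278.8 GT/s.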