Nvidia’s GeForce GTX 1070 graphics card gets undressed and dissected, its core, the Pascal GP104 GPU, exposed and beautifully photographed. These are the first ever public die shots of the 314mm² chip. If you’re a hardware enthusiast, grab a drink, sit back, relax and prepare to drool.

First things first, all credit goes to Fritzchens Fritz, who has gone to astounding lengths to get these amazing die shots. He has a wonderful collection of pristine-quality die shots of a wide range of GPUs, the latest of which happens to be GP104, Nvidia’s mid-sized Pascal GPU powering the GeForce GTX 1080 and its little brother the GTX 1070.

The couple of photos you see above are of a GTX 1070 that has been torn down. The GP104 package was first removed from the board and the die then separated from its substrate.

The building block of the Maxwell architecture has been stripped down and redesigned to create Pascal. This building block, which Nvidia dubs the Streaming Multiprocessor, or SM for short, is the engine that drives the graphics and compute horsepower of every Pascal chip.

With Maxwell, Pascal’s predecessor powering the GTX 900 series, Nvidia introduced the Maxwell Streaming Multiprocessor, or SMM. The SMM built on the strengths of Nvidia’s Kepler SM – introduced with the GTX 600 and 700 series – which Nvidia dubs the SMX. It also did away with many unnecessary complexities, which enabled the engine to deliver more throughput and higher clock speeds. The Pascal SM is, in its own right, an evolution of the Maxwell SM: a smarter, more streamlined engine.

Inside a fully unlocked GP104 there are four Graphics Processing Clusters, or GPCs for short. Each GPC consists of five Streaming Multiprocessors, or SMs – each SM contains 128 CUDA cores and eight Texture Mapping Units, AKA TMUs. Each GPC also includes two render back-ends made up of eight Render Output Units, ROPs, each. In total this adds up to 2560 CUDA cores, 160 TMUs and 64 ROPs inside GP104. Finally the engine is connected via eight 32-bit GDDR5X memory segments – a 256-bit memory controller in total – to 8GB of GDDR5X memory.
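The hierarchy above can be sanity-checked with a little arithmetic. The sketch below simply derives the chip-wide totals from the per-GPC figures quoted in this article (a fully unlocked GP104, i.e. the GTX 1080 configuration):

```python
# GP104 unit counts as described above (fully unlocked part)
GPCS = 4                 # Graphics Processing Clusters
SMS_PER_GPC = 5          # Streaming Multiprocessors per GPC
CUDA_PER_SM = 128        # FP32 CUDA cores per SM
TMUS_PER_SM = 8          # texture units per SM (160 total across 20 SMs)
ROPS_PER_GPC = 2 * 8     # two render back-ends of eight ROPs each
MEM_SEGMENTS = 8         # GDDR5X memory segments
SEGMENT_WIDTH = 32       # bits per segment

cuda_cores = GPCS * SMS_PER_GPC * CUDA_PER_SM   # 2560 CUDA cores
tmus = GPCS * SMS_PER_GPC * TMUS_PER_SM         # 160 TMUs
rops = GPCS * ROPS_PER_GPC                      # 64 ROPs
bus_width = MEM_SEGMENTS * SEGMENT_WIDTH        # 256-bit memory bus

print(cuda_cores, tmus, rops, bus_width)        # 2560 160 64 256
```

Every total in the paragraph above falls straight out of the per-GPC configuration, which is exactly what you would expect from a chip laid out as four near-identical quadrants.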

Each GP104 streaming multiprocessor includes 128 FP32 CUDA cores, the same as Maxwell. Within each streaming multiprocessor there are four 32-CUDA-core partitions, eight dispatch units, four warp schedulers and a fairly large instruction buffer – twice as large as Maxwell’s.
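As a quick check on those per-SM figures, the sketch below models one SM from its partitions. One assumption is made here that is not stated explicitly in the text: each 32-core partition carries one warp scheduler with two dispatch units, mirroring the processing-block arrangement of Maxwell's SMM.

```python
# One GP104 SM modeled as four 32-core partitions (as described above).
# Assumption: one warp scheduler with two dispatch units per partition,
# as in Maxwell's SMM processing blocks.
PARTITIONS = 4
CORES_PER_PARTITION = 32
SCHEDULERS_PER_PARTITION = 1
DISPATCH_PER_SCHEDULER = 2

sm_cores = PARTITIONS * CORES_PER_PARTITION            # 128 FP32 cores
sm_schedulers = PARTITIONS * SCHEDULERS_PER_PARTITION  # 4 warp schedulers
sm_dispatch = sm_schedulers * DISPATCH_PER_SCHEDULER   # 8 dispatch units

print(sm_cores, sm_schedulers, sm_dispatch)            # 128 4 8
```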

GP104, Bare

There it is, GP104 in all its glory. It looks nothing like the block diagram, and for good reason. The GP104 diagram published by Nvidia is no more than a simplistic visual representation of the architecture. The physical implementation of that architecture, however, is far more complex. It’s no surprise that designing a modern high-performance graphics chip can cost well into the hundreds of millions of dollars.

The architecture itself, putting pen to paper, is not the hardest or most expensive part of developing a chip like GP104. The physical implementation, putting light to sand so to speak, is where things can get really challenging. The bigger the chip, the bigger the challenge.

The physical layout of the GP104 GPU is grouped into four main GPC engine divisions, each taking roughly one quarter of the chip and housing five SMs. Feeding the beast are the eight 32-bit GDDR5X memory segments that make up a ring around the periphery of the die.

The SM arrangement is almost identical to what we’ve seen with the much larger 3840 CUDA core GP100 GPU that powers the Tesla P100 accelerator, except that inside GP100 each SM contains exactly half the number of CUDA cores, dispatch units and warp schedulers vs GP104 – but in turn there are twice as many SMs per GPC.

So the primary layout difference between GP104 and GP100 is that in GP104 Nvidia is grouping the 64 CUDA core SMs in pairs of 128 CUDA cores each, and in turn naming the larger 128-core unit an SM instead. This is all while maintaining the exact same ratio of dispatch units, warp schedulers and instruction buffers per CUDA core that we’ve seen with GP100.
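That regrouping is easy to express numerically. The sketch below uses the 3840-core Tesla P100 configuration for GP100 (its six-GPC, ten-SM-per-GPC layout is an assumption drawn from Nvidia's published P100 configuration, not from the text above) and shows both chips arriving at the same number of cores per GPC:

```python
# SM grouping in GP100 vs GP104, per the figures discussed above.
# GP100's 6 GPCs x 10 SMs layout is assumed from the Tesla P100 config.
chips = {
    "GP100": {"gpcs": 6, "sms_per_gpc": 10, "cores_per_sm": 64},
    "GP104": {"gpcs": 4, "sms_per_gpc": 5,  "cores_per_sm": 128},
}

cores_per_gpc = {n: c["sms_per_gpc"] * c["cores_per_sm"] for n, c in chips.items()}
total_cores = {n: c["gpcs"] * cores_per_gpc[n] for n, c in chips.items()}

print(cores_per_gpc)  # {'GP100': 640, 'GP104': 640} - identical per GPC
print(total_cores)    # {'GP100': 3840, 'GP104': 2560}
```

Both chips pack 640 CUDA cores per GPC; GP104 simply fuses two 64-core groups into one 128-core SM, halving the SM count per GPC while keeping the per-core resource ratios unchanged.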

The GP104 Pascal Streaming Multiprocessor

So think of it as Nvidia simply pairing 64 CUDA core groups together into a single SM. This decision is likely influenced by the significant reduction of FP64, double precision, CUDA cores per SM in GP104 vs GP100. GP104 contains only one FP64 CUDA core for every 32 FP32 CUDA cores, while GP100 has one FP64 CUDA core for every two FP32 CUDA cores – 16 times the FP64 ratio of GP104.
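The FP64 gulf those ratios imply is worth spelling out. A minimal sketch, using only the core counts and ratios quoted above:

```python
# FP64 provisioning implied by the ratios above.
GP104_FP32, GP104_RATIO = 2560, 32   # one FP64 core per 32 FP32 cores
GP100_FP32, GP100_RATIO = 3840, 2    # one FP64 core per 2 FP32 cores

gp104_fp64 = GP104_FP32 // GP104_RATIO   # 80 FP64 cores on the whole chip
gp100_fp64 = GP100_FP32 // GP100_RATIO   # 1920 FP64 cores

# Per FP32 core, GP100 carries 32 / 2 = 16x the FP64 hardware of GP104.
print(gp104_fp64, gp100_fp64, GP104_RATIO // GP100_RATIO)  # 80 1920 16
```

With only 80 FP64 units spread across 20 SMs, GP104 has far less double-precision hardware to place per SM, which helps explain why the denser 128-core SM grouping is practical there.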

Additionally, each GP104 SM has twice as many registers as its Maxwell counterpart. This in turn means that not only can Pascal accommodate more threads than Maxwell, but each thread has access to more registers and thus a lot more throughput. Finally, each warp scheduler can dispatch two instructions per clock.

Nvidia’s Senior Architect Lars Nyland admitted that the 16nm FinFET process played an important role in realizing the team’s power efficiency goals for Pascal, but maintains that numerous architectural improvements – including new on-chip voltage signaling techniques – aided in further reducing the energy footprint of the architecture.