NVIDIA GTX 690 Review

Introduction:

Just over a month ago NVIDIA proved it was on the right track when it released the GTX 680, which featured an all-new GPU microarchitecture called Kepler. The GTX 680 proved to be the fastest single-GPU card on the block and addressed issues that seemed to dog the Fermi-based offerings, including power consumption and heat. With the drop to 28nm, the GK104-based GTX 680 delivered excellent power consumption and thermal performance to match its impressive gaming performance. It soundly beat AMD's HD 7970 in just about every test it was run through, including Surround (Eyefinity) resolutions of 5760x1080. Impressive, to say the least. Now, a month later, NVIDIA has announced the successor to the GTX 680 with the introduction of the GTX 690. This balls-out card features not one but two full-powered 28nm GK104 GPU cores with 3072 CUDA cores and 4GB of 6Gbps GDDR5 memory. NVIDIA clearly learned some valuable lessons from the GTX 590, which had to be downclocked to fit its power envelope, before putting out a card such as this as its flagship offering. If it is as good as GTX 680 SLI, it will prove to be a success.

NVIDIA brought new technologies to the party when it released the GTX 680, including Adaptive VSync, FXAA, and a dynamic clock control called GPU Boost. To go with those we now get Temporal Anti-Aliasing (TXAA) for improved visual quality at a lower performance cost. Built like a tank, the GTX 690 should prove to be the ultimate gaming card for those who have to have the best multi-screen or high-resolution gaming experience. Let's see what makes it tick!

Closer Look:

About a week ago I received a package from NVIDIA with a strange item in it: a pry bar with the inscription "For use in case of Zombies or....." and the NVIDIA logo to the side. A pretty cryptic thing, to be sure. The obvious implication was that the card would arrive in some kind of special packaging, and indeed it did: a crate with the words "Caution Weapons Grade Gaming Power" burned into the wood, which needed the pry bar sent last week (or at least a big screwdriver) to open. Having seen the industrial design of the GTX 690 during NVIDIA's announcement this past Saturday, it makes sense that an industrial-strength package was used to drive the point home. That it does. Inside the crate, under several layers of foam, was the object many (myself included) have been anxious to see.

The GTX 690 uses the same 28nm Kepler SMX architecture introduced on the GK104-based GTX 680; there are simply double the components on the GTX 690. Each GK104 GPU consists of four GPCs (Graphics Processing Clusters), each containing two SMX units of 192 CUDA cores apiece, for a total of 1536 CUDA cores per GPU core, or 3072 on the GTX 690 as a whole. That is six times what is available on the GTX 580, for a point of reference. To manage power consumption more effectively, the traditional method of running the shader clock at twice the core clock was abandoned; the clocks now run at a 1:1 ratio. Each GPC has a single raster engine, and the GPCs dynamically share the GPU's L2 cache. Each GPU core features 128 texture units and 32 ROPs. A new feature with GK104 is hardware- and software-based GPU Boost technology, which dynamically raises the clock speed of the GPU cores when there is available TDP headroom, much like the latest CPUs from Intel and AMD. The base clock speed for the cores on the GTX 690 is 915MHz, with a GPU Boost clock of around 1019MHz. The GTX 690's memory subsystem is the same as that used on the GTX 680: four 64-bit memory controllers (256-bit total) per GPU core, together handling 2GB of GDDR5 memory per GPU running at 1500MHz (6000MHz effective).
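The headline numbers above can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below (in Python, using only the per-GPU figures quoted in this review) reproduces the core count and the per-GPU memory bandwidth that follows from a 256-bit bus at a 6000MHz effective data rate:

```python
# Spec figures quoted in the review for one GK104 die.
GPCS_PER_GPU = 4            # Graphics Processing Clusters
SMX_PER_GPC = 2             # SMX units per GPC
CORES_PER_SMX = 192         # CUDA cores per SMX
GPUS = 2                    # two GK104 dies on the GTX 690

cores_per_gpu = GPCS_PER_GPU * SMX_PER_GPC * CORES_PER_SMX
total_cores = cores_per_gpu * GPUS

BUS_WIDTH_BITS = 256        # four 64-bit controllers per GPU
EFFECTIVE_RATE_MTS = 6000   # 1500MHz GDDR5, quad-pumped

# bits -> bytes, then mega-transfers/s -> GB/s
bandwidth_per_gpu_gbs = (BUS_WIDTH_BITS / 8) * EFFECTIVE_RATE_MTS / 1000

print(cores_per_gpu)          # 1536
print(total_cores)            # 3072
print(bandwidth_per_gpu_gbs)  # 192.0 (GB/s per GPU)
```

The arithmetic lands exactly on the 1536-per-GPU and 3072-total CUDA core counts quoted above, and on 192GB/s of memory bandwidth per GPU.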

The architecture has been discussed endlessly online, so let's see what went into making the GTX 690 arguably the highest-performing video card on the planet.