GeForce GTX Titan Preview

This morning heralds the release of the world's fastest GPU, NVIDIA's GK110, featured in the GeForce GTX Titan. The card is so special it doesn't have a number to conveniently slot it into the GeForce stack next to its compute cousins; instead it is named in homage to the Titan supercomputer at Oak Ridge National Laboratory in Tennessee, for which the GK110 was designed. The card will be available for purchase in North America beginning the week of February 25th, with full performance reviews debuting around the web later this week. What you're seeing today is the first unveiling of a card so special it's being launched thrice: once for the money, twice for the performance, thrice for the buying. Availability is said to be sufficient for demand, but rumors persist that there are only 10,000 units worldwide - a mere $10M in sales for NVIDIA and its region-specific partners, EVGA and ASUS in the case of North America.

The first unveiling of GK110 happened with the announcement of the completion of the ORNL Titan supercomputer, where we learned there were two Tesla GK110 products. Like the K20X, the GTX Titan features 14 SMXs for a total of 2688 CUDA cores. Clock speeds are up versus the K20X, as is the 250W TDP, and the card introduces the new GPU Boost 2.0. The card leverages the same design language as the GeForce GTX 690, aiming to inspire confidence in quality and premium performance. The custom vapor-chamber cooler and fin array, paired with a new, deeper fan, dissipate the thermal load so well that NVIDIA claims the GTX Titan is quieter than the GTX 680, especially in multi-GPU configurations. The card is PCI Express 3.0 and features two DVI outputs, one HDMI output and one DisplayPort 1.2 output, and requires one 8-pin and one 6-pin PCI Express power input. Two SLI bridge connectors allow up to 3-way SLI.
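As a quick sanity check on that core count - assuming the standard Kepler figure of 192 CUDA cores per SMX - the arithmetic works out:

```python
# Each Kepler SMX contains 192 CUDA cores.
CORES_PER_SMX = 192

# Both the Tesla K20X and the GTX Titan ship with 14 SMXs enabled
# (the full GK110 die has 15).
smx_count = 14

cuda_cores = smx_count * CORES_PER_SMX
print(cuda_cores)  # 2688
```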

NVIDIA states that this card exists to give people the opportunity to own the world's fastest gaming PC, and to bring massive performance to form factors that previously couldn't accommodate it. The first claim refers to the SLI and Tri-SLI capabilities of the GTX Titan, which appear to surpass any other combination of graphics cards available today. The second refers to the svelte 10.5" length and rather run-of-the-mill 250W TDP. At this size and cooling requirement, Titan can go where HD 7900s and GTX 690s can't (without extensive modding skills), such as portable LAN boxes and living-room-friendly HTPC gaming builds. Finally, this card gives CUDA developers high-performance double precision support without the full cost of a Tesla system, lowering the cost-of-entry barrier for developer workstations and one-man shows.

The MSRP - but not necessarily the actual retail price - of the GeForce GTX Titan is $999, matching that of the GeForce GTX 690. Both the GTX 690 and Titan will sell in the market together; NVIDIA feels they are complementary products rather than competitors, due to the different ways they generate their horsepower. If you're a single-screen gamer, the 690 is the better choice, but if you're hankering after Surround or Ultra HD gaming performance - and perhaps 3D - then Titan is going to be your buddy. NVIDIA listened to feedback from ultra-enthusiast consumers and has made a special point of permitting higher voltage adjustment on Titan cards, with new controls in GPU Boost 2.0. Display overclocking - higher-than-normal refresh rates - will also be permitted through tools, if you want 80Hz vsync or whatever your panel is capable of.

GPU Boost 2.0 moves away from estimated power monitoring to temperature monitoring, in an effort to be more efficient by sustaining boost states for longer. By using GPU temperature as the control metric, NVIDIA claims it can unlock personalized performance control while also enabling more performance. With control over fan speed and the target GPU temperature, the end user gets more say over how Titan boosts, even though every card ships with the same base and boost clocks. These options, along with voltage controls, are available through third-party tools like EVGA Precision and ASUS GPU Tweak, and the final maximum voltage permitted is still controlled by the add-in board partner - it's not completely unlocked.
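The shift from a power target to a temperature target can be sketched as a simple control loop. This is purely illustrative - the controller, step size, and clock ceiling below are invented for the sketch, not NVIDIA's actual GPU Boost 2.0 algorithm (only the 837MHz base clock is a published Titan spec):

```python
# Illustrative sketch of a temperature-target boost loop.
# NOT NVIDIA's actual algorithm; the step size and top boost bin
# are hypothetical values chosen for the example.

BASE_CLOCK_MHZ = 837    # published GTX Titan base clock
MAX_BOOST_MHZ = 993     # hypothetical top boost bin
STEP_MHZ = 13           # hypothetical boost-bin granularity

def next_clock(current_mhz, gpu_temp_c, target_temp_c=80):
    """Step the clock up while below the temperature target,
    and back off toward base clock when running above it."""
    if gpu_temp_c < target_temp_c and current_mhz + STEP_MHZ <= MAX_BOOST_MHZ:
        return current_mhz + STEP_MHZ
    if gpu_temp_c > target_temp_c and current_mhz - STEP_MHZ >= BASE_CLOCK_MHZ:
        return current_mhz - STEP_MHZ
    return current_mhz

# A cool GPU keeps climbing bins; a hot one steps back toward base.
print(next_clock(876, 65))  # 889
print(next_clock(876, 85))  # 863
```

Raising the user-adjustable temperature target in a tool like EVGA Precision would, in this model, simply let the loop keep climbing bins for longer before it backs off.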

So we like the performance profile for extreme-resolution and image-quality enthusiasts, we like the design and size, and we like the power profile. The new GPU Boost 2.0 sounds interesting and fun to tweak - which is exactly what enthusiasts at this level want - but without hands-on time we can't tell you whether it's persnickety or simple. Big Kepler shows that AMD's no-big-GPU strategy hasn't changed at all: the biggest GPU they made was a competitor for the second biggest from NVIDIA, even if NVIDIA couldn't publicly ship a full 15-SMX product. In a way, that's a success for AMD - they managed to move their top GPUs from the ~$250 price point up to ~$500 USD.

The $999 price point of Titan acknowledges this: you can't sustainably make and sell the biggest, fastest GPU in the world at those price levels; the GPU price war is over and everybody is protecting margins. But however well it fits into the current competitive landscape, it redefines things in a bad way for consumers. In the space of one generation we've gone from the big GPU being a $500-$600 USD product to a $1000 one - serious inflation even in this economy. It makes the ridiculous pricing of the ASUS ARES II look well positioned and invites the next round of mid-range GPUs to be priced higher again.

You'll have to wait for the review sites fortunate enough to receive samples to publish performance testing later this week, but there's enough information now to see a clear message from NVIDIA: we're going to win the high end, no matter what. Coming less than a week after AMD 'clarified' their 2013 strategy, it's likely that AMD's response is going to be covered in mint jelly: official New Zealand cards, perhaps leveraging improved Boost technology and binning to get an edge on big Kepler. The monstrous ASUS ARES II will likely still stand as the world's fastest graphics card, but the single-GPU crown goes to NVIDIA.