2013 started off on a rather dull note for the PC graphics industry: NVIDIA launched its game-console platform "Project: Shield," while AMD rebranded its eons-old GPUs as the Radeon HD 8000M series. That could all change in late February, however, with the arrival of a new high-end single-GPU graphics card based on NVIDIA's GK110 silicon, the same big chip that goes into the company's Tesla K20 compute accelerator.

NVIDIA may have drawn some flak for stretching its "GTX" brand too far into the mainstream and entry-level segments, and wants its GK110-based card to stand out. It is reported that NVIDIA will carve out a new brand extension: GeForce Titan. Incidentally, the current fastest supercomputer in the world bears that name (Cray Titan, located at Oak Ridge National Laboratory). The GK110 silicon physically packs 15 SMX units, totaling 2,880 CUDA cores, and features a 384-bit wide GDDR5 memory interface.
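The quoted core count follows directly from the SMX layout, and the memory figures allow a back-of-envelope bandwidth estimate. Note that the 192-cores-per-SMX figure and the 6.0 Gbps GDDR5 data rate below are assumptions (typical Kepler-era values), not stated in the article:

```python
# Back-of-envelope check of the GK110 figures quoted above.
# CORES_PER_SMX and the GDDR5 data rate are assumed typical values,
# not taken from the article.

CORES_PER_SMX = 192   # Kepler SMX layout (assumption)
SMX_UNITS = 15        # from the article

cuda_cores = SMX_UNITS * CORES_PER_SMX
print(cuda_cores)  # 2880, matching the article

BUS_WIDTH_BITS = 384        # from the article
GDDR5_GBPS_PER_PIN = 6.0    # assumed effective data rate per pin

bandwidth_gbs = BUS_WIDTH_BITS / 8 * GDDR5_GBPS_PER_PIN
print(bandwidth_gbs)  # 288.0 GB/s under these assumptions
```

Actual memory bandwidth would depend on the shipping memory clock, which had not been disclosed at the time.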

by: jihadjoe
The issue I have with that 85% of a 690 comment is that there's no unit of measurement.

Is it 85% in game performance? Then that pretty much says right there how the GK110 card will perform.

Is it 85% of the render/shader performance? In this case the actual in-game performance might be even better than the 690, because there aren't any SLI scaling issues to deal with.

I'm expecting this to play out much like 560Ti vs 580, which is really where GK104 probably stands against GK110.

It's whichever inflates the results, so in this case 85% of game performance, or maybe even worse: 85% of compute capability, which GK104 has crippled!

Still, it's safe to say it's 25-30% faster than the GTX 680; otherwise it's pointless to release. But it's common knowledge by now that with the bigger-die, higher-end models you lose the efficiency advantage of the mid-range or lower-end models of the same architecture, as these are tuned for highest performance rather than performance per watt. With a high core count and a low clock speed that might actually be different; it's just a matter of finding the best die-size/performance/efficiency ratio. From there, based on cost, they can prioritize a smaller die at higher clocks (less expensive, but more leakage) or a bigger die at lower clocks (more expensive, less leakage). I just wish there were any reviews looking into how power consumption scales with clock speed on the GCN and Kepler architectures.

Personally, I'm hoping to get a pair of GTX 660s soon, before the newer-"gen" stuff kicks in and I start having enthusiast's remorse or something. Likewise, I feel going for Socket 2011 is a bad idea with Haswell so close, and Steamroller not far behind either... but the news of a possible 8-core iteration (hence an upgrade path) calmed that down.

After that, I might sell off one of the 660s (2014+), get a high-end single-GPU Radeon HD 9000 (hopefully a generation that will be more high-res friendly) and keep the other 660 for PhysX (LOL), 3D movies and so on, since I have a BenQ XL2420TX, which is much more NVIDIA 3D-friendly than AMD.

All in all, I'd get, for the time being, very close to GeForce Titan performance for a lot less...

by: NeoXF
Personally, I'm hoping to get a pair of GTX 660s soon, before the newer-"gen" stuff kicks in and I start having enthusiast's remorse or something. Likewise, I feel going for Socket 2011 is a bad idea with Haswell so close, and Steamroller not far behind either... but the news of a possible 8-core iteration (hence an upgrade path) calmed that down.

After that, I might sell off one of the 660s (2014+), get a high-end single-GPU Radeon HD 9000 (hopefully a generation that will be more high-res friendly) and keep the other 660 for PhysX (LOL), 3D movies and so on, since I have a BenQ XL2420TX, which is much more NVIDIA 3D-friendly than AMD.

All in all, I'd get, for the time being, very close to GeForce Titan performance for a lot less...

Um no, the only time a dual-GPU solution was better than the top-end single GPU was during the GTX 400 series, where the 480 ran super hot and couldn't overclock, so two GTX 460s totaled more cores at much higher clocks (my GTX 460 was factory overclocked to 800 MHz).
In this case, however, two GTX 660s won't even beat Titan in theory, let alone in the practical world with SLI, lol.

by: sergionography
Um no, the only time a dual-GPU solution was better than the top-end single GPU was during the GTX 400 series, where the 480 ran super hot and couldn't overclock, so two GTX 460s totaled more cores at much higher clocks (my GTX 460 was factory overclocked to 800 MHz).
In this case, however, two GTX 660s won't even beat Titan in theory, let alone in the practical world with SLI, lol.

I don't remember stating anywhere that they'd be faster than this so-called "Titan".
Do your research: two GTX 660s are QUITE a bit faster than a GTX 680, while being cheaper (where I live) too.

Doesn't look all that "Titan"-like compared to the GTX 690. They should have gone with a slightly more aggressive look; gunmetal, maybe, to make it look more rugged than the 690. The blower design makes it look a bit too plain. The GTX 690 was okay since it had a big fan in the middle. The blower design leaves a lot of window to see the dust accumulate in the heatsink. Maybe an etched design or something to spice it up.

Expected more of a cooler than this myself. I do think it is beautifully engineered, but these coolers need a way for the user to clean out the dust better, as it will build up right at the front of the cooler.

by: AsRock
Expected more of a cooler than this myself. I do think it is beautifully engineered, but these coolers need a way for the user to clean out the dust better, as it will build up right at the front of the cooler.