Nvidia’s GTX Titan brings supercomputing performance to consumers


Introducing GPU Boost 2.0

One of the major features Nvidia is introducing with Titan is a second generation of its GPU Boost technology. GPU Boost is Nvidia’s automatic overclocking mechanism: it raises a card’s clock speed on the fly, provided the board stays within its TDP limit. Like Intel’s Turbo Boost, GPU Boost increases performance when there’s sufficient headroom to do so.

GPU Boost (GB) can also work in reverse: if board TDP rises past a certain point, the card lowers its own operating frequency and voltages to reduce power consumption. The problem with GB 1.0 is that its metric, board TDP, is an imperfect stand-in; power draw doesn’t directly tell you how much thermal margin the silicon actually has. GB 2.0 shifts the relevant metric from board TDP to GPU temperature. According to Nvidia, measuring temperature instead of TDP gives the company better visibility into whether a card can afford to throttle up to higher frequencies or not.


Measuring by temperature instead of TDP gives Nvidia better visibility on what maximum voltages should be allowed, and slightly increases the GPU’s operating range. For those customers who want even more control, the company is also going to allow deliberate over-volting — but warns that this will impact board longevity.

In Nvidia’s comparison chart, the yellow line is GPU Boost 2.0 and the white line is GPU Boost 1.0. It also demonstrates why GPU temperature is a better metric for controlling overclocking: GB 1.0’s maximum frequency was lower than 2.0’s because, lacking temperature data, its voltage/frequency throttles had to kick in conservatively to prevent board damage.

Again, customers who want more control over their GPU thermals will have it. The Titan targets 80 degrees C by default, but users can change the number.

Set the thermal target to 90 degrees C using a tool like EVGA’s Precision, and the GPU will adjust its frequency headroom accordingly.
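The thermal-target behavior can be sketched as a simple control loop. This is purely illustrative: the function name, step size, and clock ceiling below are invented for demonstration (the base and boost figures approximate Titan’s published 837MHz/876MHz clocks), and Nvidia’s real GPU Boost 2.0 algorithm is proprietary and considerably more sophisticated.

```python
# Illustrative sketch of a temperature-target boost loop. All names and
# step values here are assumptions; Nvidia's actual algorithm is not public.

def adjust_clock(current_mhz, gpu_temp_c, target_c=80,
                 step_mhz=13, base_mhz=837, max_mhz=993):
    """Step the clock up while below the thermal target, back off above it."""
    if gpu_temp_c < target_c:
        return min(current_mhz + step_mhz, max_mhz)   # headroom: boost
    if gpu_temp_c > target_c:
        return max(current_mhz - step_mhz, base_mhz)  # too hot: throttle
    return current_mhz                                # on target: hold

print(adjust_clock(876, 71))  # 889: below the 80 C target, clock steps up
print(adjust_clock(876, 85))  # 863: above the target, clock steps down
```

Raising the thermal target to 90 degrees C, as in the Precision example above, simply widens the region in which the "boost" branch applies.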

The Titan is the first GPU to feature GPU Boost 2.0, but it won’t be the last — Nvidia plans to bring this technology to market with its next generation of mainstream GeForce products.

Titan, GTX 690, and the $1,000 video card market

Titan doesn’t replace the GTX 690 — it complements it. We can’t show you performance figures yet, but you’re going to be impressed. So why did Nvidia build a second $1,000 GPU when such cards ship in very low volumes?

The technical answer to that question has to do with the nature of running multiple video cards simultaneously (as in Nvidia’s SLI). The more GPUs in a system, the harder it is to keep frame delivery times consistent. This leads to the problem known as microstuttering: short but perceptible hiccups in what should be a smooth stream of frames.
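Frame-time consistency is measurable. A hypothetical sketch (the frame times below are invented, not measurements of any real card) shows how a multi-GPU setup using alternate-frame rendering can report the same average frame rate as a single card while pacing frames far less evenly:

```python
# Quantifying microstutter as frame-to-frame jitter. The sample frame
# times are made up for illustration only.

def frame_time_stats(frame_times_ms):
    avg = sum(frame_times_ms) / len(frame_times_ms)
    # mean absolute deviation from the average frame time
    jitter = sum(abs(t - avg) for t in frame_times_ms) / len(frame_times_ms)
    return avg, jitter

smooth = [16.7] * 8         # single GPU: a frame every ~16.7 ms (60 fps)
stutter = [8.0, 25.4] * 4   # alternating short/long frames under AFR

print(frame_time_stats(smooth))   # even pacing: jitter is zero
print(frame_time_stats(stutter))  # same ~16.7 ms average, ~8.7 ms of jitter
```

Both sequences report the same frame rate; only the jitter figure exposes the stutter, which is why frame-time analysis matters more than raw fps when reviewing multi-GPU setups.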

Games already have to be optimized to take advantage of SLI, and the more GPUs you have, the more difficult that process becomes. As a result, multi-GPU scaling above two GPUs is dicey. Two GTX 680s or a GTX 690 might be twice as fast as a single card, but two GTX 690s (or four GTX 680s) won’t scale nearly as well.

Two Titans, on the other hand — well, that’s just standard SLI. And for gamers who don’t want to utilize SLI but are willing to pay top dollar for single-card performance, Titan is a huge step forward.

The other reason Nvidia is launching Titan is simple: Because it can. Because, when you’ve got an incredible piece of graphics silicon, people want to show it off. And Titan is well worth showing.



Wow, looks like a great card. It’ll be interesting to see the performance benchmarks, and also the noise levels, power consumption and details of the rest of the Titan line.

Joel Hruska

There is no “Titan line” at this juncture. There’s one card coming.

VirtualMark

Will they be coming later in the year? I was hoping for some new mobile chips.

Joel Hruska

VirtualMark,

That’s a tricky question. Probably not.

The GK110 contains a lot of silicon that gamers don’t *need.* In mobile, where die size and power consumption are at a premium, it doesn’t necessarily make sense for NV to push this chip into that space.

Now, I think it’s fair to say that NV might do some new mobile parts using some of what they’ve learned with GK110, but I’d be surprised to see them launch a top-to-bottom set of SKUs.

My old 8800GTX is comparable, and the thing ran like a beast until a few months ago, and now I have a 7970. In fact, it bothers me that the author of this article didn’t mention that the 7970 has actually beaten out the GTX680 with the recent driver release.

Joel Hruska

Aus379,

You absolutely wouldn’t. The card runs very cool (I can’t say how much exactly until Thursday).

I will visit the competitive situation against the 7970 when I have numbers I can show to back up the analysis.

Aus379

Do you think that this card might actually be DOWNCLOCKED for the reference card?

And comparing this to the 7970 is ridiculous.
Try with the 8970, or whatever high end single core AMD comes out with next.
Still, the raw FLOPS output of the 7970 might be comparable to the Titan.

Incorrect: the low-end 8xxxM parts are rebrands (Nvidia does the same with its M GPUs). There will be a new line this year, but they delayed it because the 7000 series is still very good, so they’ll spend more time on the 8000 series to make it better.

They have extended the functionality and freedom a lot over the last few years but they are still limited.

https://twitter.com/xarinatan Alexander ypema

Yes, but that’s the fundamental difference between CPUs and GPUs. GPUs are really good at massively parallel workloads like number crunching and hashing (Bitcoin mining, anyone?), while CPUs are much better at everything else. I don’t think you can smoothly run an OS on a GPU, for example. Might be a Linux port for it, though, heh.

Joel Hruska

Alexander,

There was a project some years back that got an OS running on an old Radeon, though it wasn’t bootable.

I know Xeon Phi can run an operating system, and I believe that, in theory, an Nvidia or Radeon card could as well. But it can’t run Windows, and it’s more of a virtualized OS than anything else: you’re still dependent on the main CPU for things like talking to other PCIe devices.

With modern frame buffers at 1GB and up, you’ve got plenty of room onboard the card to run something, but I don’t know whether it would be possible to save your data to a drive.

https://twitter.com/xarinatan Alexander ypema

Exactly. CPUs have a ton more functionality beyond simple arithmetic, and that’s why I don’t think an OS would run smoothly. Perhaps you could emulate a Linux environment on top of it, since Linux has been ported to pretty much everything these days. For now, though, I think the most useful way to look at GPGPU is as a sort of 486-era math coprocessor :P Except much more powerful, of course.

chojin999

Xeon Phi cores are almost full x86 cores with added vector units. It’s quite a difference.

Joel Hruska

Xeon Phi cores *are* full x86 cores, for all intents and purposes. They connect to a different interface, obviously, but they’re x86 compatible with a few minor changes (I believe Intel dumped a few instructions — CMOV, a couple others).

chojin999

The fact is that GPU manufacturers use the word “cores” as a marketing trick, claiming DSP-style capabilities matching CPUs in order to sell more cards at a higher price. But these GPUs are really limited compared to real DSPs, which can be a lot faster with their true cores and high-precision vector units.

Basil_Nolan

I’ll be running Doom 3 on this one!

Eric Belfort Mattos

Couldn’t it be that the main reason for Nvidia to launch this model is to achieve economies of scale, in order to increase its competitiveness and profitability in the supercomputing segment?

e92m3

At no point did the 680 provide greater performance per dollar. Also, when properly utilized, GCN was significantly more powerful, ending up with almost exactly the same performance-per-watt as the 680. Then, of course, compute performance comes into the picture, and Nvidia’s cards are thrashed in each and every metric. Seems that you and most other reviewers are incapable of understanding that the MORE POWERFUL compute device will always draw more power when the process nodes are this similar.

The Titan barely provides any more double-precision compute than a mildly overclocked 7950… Titan is pure rubbish to rip off stupid children.

Funny to look back at the old articles on this website. What a biased and poorly researched bit of trash.
