This site may earn affiliate commissions from the links on this page. Terms of use.

Companies often play coy with announcements at major press events, but Nvidia’s latest tactic with its new GTX Titan X may deserve some kind of award. The company’s CEO, Jen-Hsun Huang, made a surprise appearance at an Unreal Engine conference and handed Unreal chief developer Tim Sweeney an autographed GPU. The new Titan X will carry an estimated eight billion transistors and a massive 12GB of RAM, but Nvidia isn’t saying more until its own GTC event.

According to Sweeney, Titan X was developed partly in response to the VR headsets and technology now on wide display. More plausibly, it’s a chip in the same vein as the original GTX Titan and GTX 780: an HPC and scientific computing GPU that Nvidia is bringing over to the consumer business.

Previous rumors have suggested that the GM200 has a 384-bit memory bus (up from 256 bits on the GTX 900 family) with 12GB of RAM (so that checks out), 96 ROPs, and either 192 or 256 texture mapping units (TMUs). The total core count on the die is rumored to be 3072, up from the current 2048. Treat all of these figures with a grain of salt: they could be inaccurate, or they could reflect the entire chip (Nvidia might fuse off sections of the die to improve yields).
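As a rough back-of-the-envelope check on those bus-width rumors, here's what they imply for peak memory bandwidth, assuming GM200 keeps the same 7 Gbps effective GDDR5 data rate as the GTX 980 (an assumption on my part, not a confirmed spec):

```python
# Peak GDDR5 bandwidth estimate: bus width (bits) x data rate (Gbps) / 8.
# The 7 Gbps effective rate is assumed, matching the GTX 980's memory clock.
def gddr5_bandwidth_gbs(bus_width_bits, data_rate_gbps=7.0):
    """Peak memory bandwidth in GB/s for a given bus width."""
    return bus_width_bits * data_rate_gbps / 8  # bits per transfer -> bytes

print(gddr5_bandwidth_gbs(256))  # GTX 980's 256-bit bus: 224.0 GB/s
print(gddr5_bandwidth_gbs(384))  # rumored 384-bit GM200 bus: 336.0 GB/s
```

If those numbers hold, the wider bus alone would buy a 50 percent bandwidth increase over the GTX 980, before any change in memory clocks.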

Nvidia may be going with a different color scheme, but the cooler design looks quite similar to that of the original Titan, shown above.

It’ll be interesting to see how Nvidia prices this card, given the lack of competition from AMD. The GTX 780 and 780 Ti both took sharp price cuts after AMD launched the R9 290 and 290X, but the dual-GPU Titan Z was much more expensive than either the R9 295X2 or even two Titan Blacks in SLI. Even evaluated as luxury products, the later cards didn’t quite have the panache of their progenitor.

If Nvidia does trim the die, it’ll have to contend with concerns over how the GPU’s memory controller and L2 cache are impacted. The full implementation is rumored to have 24 SMM blocks, but if the chip ships with fewer, it could end up with the same bifurcated path to memory as the GTX 970. While I’ve maintained that the GTX 970 remains an excellent card in the vast majority of use cases, a vocal minority of owners have been extremely unhappy over what they allege amounts to false advertising.
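For readers who missed the GTX 970 episode: the last 0.5GB of the card's 4GB sits behind a much slower path than the first 3.5GB. A minimal sketch of the effect, using the widely reported (not official) segment sizes and bandwidth figures:

```python
# Widely reported GTX 970 memory segments (illustrative, not official specs):
# 3.5 GB on the fast path (~196 GB/s) and 0.5 GB on a slow path (~28 GB/s).
FAST_GB, FAST_BW_GBS = 3.5, 196.0
SLOW_GB, SLOW_BW_GBS = 0.5, 28.0

def worst_case_bandwidth(allocated_gb):
    """Bandwidth of the slowest segment an allocation of this size touches."""
    if allocated_gb <= FAST_GB:
        return FAST_BW_GBS
    if allocated_gb <= FAST_GB + SLOW_GB:
        return SLOW_BW_GBS  # allocation spills into the slow 0.5 GB segment
    raise ValueError("allocation exceeds the card's 4 GB of VRAM")

print(worst_case_bandwidth(3.0))  # fits in the fast segment: 196.0
print(worst_case_bandwidth(3.8))  # touches the slow segment: 28.0
```

A cut-down GM200 could in principle inherit a similar split, which is why enthusiasts will be watching the SMM count and memory-controller configuration closely.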

We’ll have to wait a little while longer to find out if Titan X will sweep the field, but any uprated GTX 980 is going to be a potent solution. As of this writing, AMD has made no comments regarding any competitive moves it might make in response.

AMD said they have no plans to fight ATM. They’re more into mobile/APU stuff until at least 2016.

Trenter

When did AMD say that? I think you’re pulling words out of your butt.

Joel Hruska

AMD’s CPU division is certainly focused on mobile, but I’ve seen no corresponding announcement from the GPU guys. Then again, we haven’t seen any truly competitive announcements either. Give it time. ;)

e92m3

Latest I’ve heard is release in June. Straight rumor at this point, however.
Seems like a strategy based on two things: clearing existing product from the channel, and getting to see the true metrics of the ‘full’ (or pretty close) Maxwell die. There have been several indications that something along the lines of a 380X (or whatever the name ends up being) is pretty much ready. I’m expecting AMD to hold back the full implementation this time until they can see Nvidia’s full hand, likely until after a ‘Ti’ or similar-yet-still-different ‘black edition’ response.
No idea when this next ‘Titan’ will be released; certain people keep confusing official announcement with release. You’re certainly not suffering from such confusion, but many are. In the end, I don’t expect any information from AMD at all until the next ‘Titan’ is out and fully understood.

Trenter

I think AMD is playing the waiting game and looking to lock in pricing after the Titan launch. I think AMD is all in on PC graphics. Rumor is they built the 390X with the largest die in their history, and the Titan X is also the largest from Nvidia, I believe. Both companies are sending 28nm off with a bang; it’s gonna be a sight to behold.

Yeah. Right. I think I’ll stick to the gaming cards, not some GPU with unlocked DP and an insane price.

Butts MacGruber

The Titan Z is already like $3,000.

Kevin Phan

This will most likely serve the same purpose as the Black and Z: high-end animation and computational rendering on workstations, not in gamers’ hands. Even if Nvidia made a super-high-end card for gamers, it isn’t stupid enough to price it that high. But the thing is… it’s not for gamers. GASP!!!!! IT’S NOT FOR YOU, DON’T YOU UNDERSTAND?!?!?!?!?!?!?!

Chris MacDonald

Spot on.

I do rendering of massive scenes at work and have been closely following GPU rendering, and it’s finally getting tantalisingly close. The problem has been RAM. My scenes regularly eat up 16GB+ of RAM, and running dual-GPU cards sadly doesn’t double the amount of VRAM available, as all assets have to be loaded onto each GPU for rendering.

Once they break the 16GB mark, I’m pretty sure I’ll be going to my directors to ask them for a few cards like this.
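To make the VRAM point above concrete: GPU renderers mirror the whole scene into every card’s memory, so multiple GPUs add rendering speed but not capacity. A toy illustration (the numbers are examples only):

```python
# GPU path tracers duplicate the full scene in each card's VRAM,
# so adding GPUs speeds up rendering but does not pool memory.
def scene_fits(scene_gb, vram_per_gpu_gb):
    """True if the scene fits in a single GPU's VRAM (mirrored, not pooled)."""
    return scene_gb <= vram_per_gpu_gb

# A 16 GB scene on 6 GB cards: 12 GB of "total" VRAM across two cards,
# but only 6 GB is usable per copy of the scene.
print(scene_fits(16, 6))   # False, no matter how many 6 GB cards you add
print(scene_fits(16, 16))  # True once a single card reaches 16 GB
```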

aniket mittal

The Titan X’s box has a line written on it: INSPIRED BY GAMERS, BUILT BY NVIDIA. So I think the statement you’re making is crap. Nvidia pulled that shit off with the earlier Titans, saying they were for people who are into rendering scenes and for devs, but this time it’s written on the card. So I think we can safely say that this card is for gamers. Link: http://www.maximumpc.com/nvidia_geforce_gtx_titan_x_we_touched_it_2015#slide-1

Missino

Maybe it’s so people can actually play Assassin’s Creed Unity at more than 10 fps.

Kyle

Currently using Intel CPUs and a GTX 970. I’m definitely rooting for AMD to catch up though. As soon as their performance catches up, I’m jumping over to their side.

Performance takes priority over my preferred brand, unfortunately.

Zunalter

Don’t you mean ‘fortunately’, as in:

Fortunately, I am not such a brainwashed fanboy that I would not run the best card for the money in order to wave some flag with ‘my’ team on it?

Rarer and rarer breed.

Kyle

Haha, pretty much. I said unfortunately, because I would like to stimulate the competition, but I won’t sacrifice performance over it. I also don’t wish to encourage competition unless it’s through innovation.

If our objective is progress, and faster technological advancement, then it’s foolish to support the company with the inferior product. It’s illogical, wouldn’t you agree?

Zunalter

Absolutely, the only thing you do is hurt yourself, and in large enough numbers, the advancement of the market at large. The only caveat being that in this market it seems that different applications are going to have different definitions of “inferior”.

Dignitas ZeNeo

My GTX 980, which I’d been waiting on for months and months, literally arrived today, and the Titan X is announced the same day -_-

Myles

Might as well throw my 970 in the garbage and get a second mortgage on my house to afford this thing!

Zunalter

Oh man, I can’t wait until they reveal how much of that 12GB isn’t feasibly accessible! I mean, if a $350 card has that ‘feature’ that so obviously makes the experience better, I can’t imagine them leaving it out of their ultra-high-end card.

Phobos

But does it really have 12GB or just 11.5GB?

Missino

Nah, it actually has 10 gigs, with a low-bandwidth 2 gigs.

mike dar

Still.. no certainty it’s 12 gigs on a single die. I think the CEO is looking at the high-VRAM buyers, who have little access, and prepping them to hold their bucks just a littlllleeee longer, lol.
It seems that Nvidia is relying on DX12 to solve some problems for them. I personally don’t care how the full “12 gigs” gets accessed, as long as it does, for modeling and video production.
We also need to hear about how the new Titan plays with other cards in different PCIe slots where the render-er UIs are concerned. I suspect there shouldn’t be a problem, but who knows; crippling product in a limited manufacturer sector is common.
One wonders how long before a 980 “Ti” comes to fruition as well, hopefully single-die with more than 6GB.

Here’s to the new Titan providing more than 6GB, a single die, and a 384-bit memory bus, not 512 bits across two dies.
Push the slider all the way over!

e92m3

16GB single-die workstation cards have been on the market for a long while now, with 384-bit memory buses that actually provide full bandwidth to all of the RAM, not some nonsense aggregate score.
It’s very apparent that you don’t understand how the aggregated metrics of the 970 can severely impact workloads that use significant quantities of VRAM. Maybe that’s because none of your workloads are actually all that applicable. Tell me more about these ‘models’ you’re creating, as well as this ‘video production’… You’re clearly unaware of how your hardware is actually being utilized.
How about the 970 downclocking its memory (on top of the already severely limited bandwidth) when running any mildly significant CUDA workload? No comment? Didn’t know about it? Oh dear.

Are you talking about display ports when you say ‘render-er UIs’? Display ports aside, any GPU-based rendering engine (not talking about OGL or DX viewports) that offers multi-GPU support will be able to handle dissimilar GPUs, including this one. Nvidia doesn’t have the balls to piss off everyone who uses GPU rendering, including the devs of these engines; the professional market is far too important to them. There may be some bugs initially, as can always be expected, particularly since such functionality is hardly ever tested by anyone but end users. There’s simply no valid excuse for basic OCL/CUDA functionality to be broken permanently, as that’s all we’re really discussing here.

Technically, OpenCL-based rendering engines could even handle AMD and Nvidia cards in the same machine with some additional work. Of course, the more additional features, the more difficult it would become. Still, it COULD work.

Chris Theis

LOL. I was thinking the same thing. I have no idea what a render-er user interface has to do with anything. I have a bunch of cards I use for Octane Render. VRAM and CUDA cores are all I really care about. I have noticed that with larger CAD files (>3GB) it can take some time to load them into VRAM for each frame to render.

mike dar

What does my comment have to do with the 970? Does it give you some justification for complaining? My comment was on the new Titan, so go play with yourself and your ego.
FWIW, I’m using models approaching a gig, so resources matter; the Titan X will help.
Now pizzoff with you, ‘your discussion,’ and your attitude.

Still looking forward to the Titan regardless of the simpletons.

Jason E Perkins

After that whole fiasco with the 970, I’m not surprised they’re gun shy about talking hardware specs.

TheSquareRootOfThree

You can buy a vehicle for $1,300, if that is the price… the card is not worth $1,300. None of these high-end cards are worth that much.

e92m3

That is incredibly true. Most of the people who actually need significant quantities of VRAM would be far better served by a cluster of ‘previous’-generation models.
Add in the lack of ECC and you’re left with a card that isn’t actually suited to anything in particular, doesn’t offer comparable overall compute, and has no tangible double-precision usage scenario.

12GB is not a groundbreaking development by any means.

TheSquareRootOfThree

I don’t check my posts for replies often, as I just like to read the articles and see different people’s viewpoints, but I agree with you.

Kevin Phan

You dingalings realize this most likely won’t be priced for gamers, right? No game is going to use 12GB of VRAM this year. THIS beast is for rendering animations and for workstations that require lots of calculations, like the Black and Z. The only reason it was used for GDC is that VR takes a MASSIVE amount of processing and the X fit the bill.

Guest

Not this again. The only people who buy Titans are gamers. Reality check: professionals use Quadros, not Titans. The only people who buy these are idiot fanboys.

Guest

Nvidia’s pointless X!

The Titan X makes no sense; it has no market. I defended the Titan Z, as it still had its uses, but the Titan X does not.

Maxwell has no proper FP64 support, so CUDA/compute users are still better off with the Titan Z, and gamers are better off with GTX 980s in SLI. I don’t see games ever using 12GB in the time that Maxwell is relevant. Tell me, where does the Titan X fit in? I suppose someone might want to hook up three 4K monitors, but you’d probably want 4-way SLI to get a respectable frame rate, and three 4K monitors plus four Titan X GPUs sounds like a very expensive setup.

I’ll take a new Shield, but the Titan X looks bleak… maybe if they’re priced really smartly? Say, if I could get two Titan X GPUs for $1,600, that would make for an interesting 4K gaming rig; any more than that and I’m probably just going to get 980s instead. Something tells me they’re likely $1,000+ apiece instead of $800, where they might make some sense.

Guest

Don’t be scammed again, people. You know they’re just going to launch a 980 Ti for $750 later.

~|~

AMD is too bad for this! ;)

iawrench

They call it ultra high end… I’m sure it isn’t that difficult to fit some more transistors on the board. They just need someone to draw up the circuit plan and have it litho-printed… then the software engineers can program it to work correctly.

And then… they charge you up the ass for a few extra transistors, as if that took that much more work to design. Seriously, though… this pisses me off. Monopolies piss me off.

