I know, and I'm very sad about that. I often prefer Nvidia's GPUs, but if AMD offers me something that I can turn into a superior product via overclocking while Nvidia cripples that capability, as happened with the 7970 and 680, I'll take AMD's offering every time.

What I'm saying is that your status as a reviewer gives no inherent credibility to your dismissal of tech rumors/stories (sorry to break it to you...). That might be true if the stories were from people clueless about tech, or if everyone who is well informed about GPUs agreed with you, but that's not the case. When you get corroborating evidence from many reliable and semi-reliable tech sources, there's something to it.

I never said it did. I said that you must assume that what I post IS speculation only, since, given what I do, I cannot post any real info about unreleased products.

And likewise, the same applies to any tech site.

That is all. GK110 is unreleased; nobody except Nvidia employees and those who work at Nvidia board partners know anything about it, and none of them can comment due to NDA.

So anything, anything at all, about it...is questionable.

Heck, it might not even actually exist; it might only be an idea.

Post a pic of a GK100 chip, full specs and everything else official from Nvidia, and I'll stop my speculation.

Otherwise, if you don't like my posts, that's just too bad. The report button is to the left, if you like.

You can say all you like that it was planned; you have no proof, and neither do I. And neither of us could post it if we did. So I can think and post what I like, and so can you. It's no big deal...only you are making it a big deal that I do not agree with this news.

Argument from ignorance. You ARE claiming both that GK100 never existed and that it didn't exist because it cannot be made, based on the fact that we cannot provide proof to disprove your theory. You are the only one claiming anything, and you're using this argument-from-ignorance fallacy to back it up.

The rest of us are just saying that it is entirely possible and probable that GK100 existed and was simply delayed or slightly redesigned into GK110, in a move similar to GF100 -> GF110. The evidence, although rumors, is out there and has been there for a long time. Rumors about chips don't always end up being entirely true, but there's always some truth to them. GK100 was mentioned many times. GK110 DOES exist. 2+2=4.

All in all, Nvidia has already shipped cards based on the 7.1 billion-transistor GK110 chip, so the notion that such a chip cannot be made is obviously false.

I've just never seen someone so ready to cavalierly dismiss a multitude of tech rumors based on their own idea of what is or is not possible from a manufacturing perspective...

To each his own I guess.

Click to expand...

Nah, actually, I'm claiming this since I know all the specs of GK110 already. I even have a die shot. And yeah, like you said, it is now for sale.

You can find the info just as easily, too.

And because of this, I do think Nvidia knew long before AMD's 7970 release that GK110 was not possible (which is when that news of the GTX 680 being a mid-range chip appeared), and as such it wasn't meant to be the GTX 680, ever. Is GK110 the ultimate Kepler design? Sure, but it was NEVER intended to be released as the GTX 680. It was always meant as a Tesla GPGPU card.

Likewise, AMD knew that Steamroller and Excavator were coming, and that they are the "big daddy" of the Bulldozer design, but that doesn't mean that Bulldozer or Piledriver are mid-range chips.

Nah, actually, I'm claiming this since I know all the specs of GK110 already. I even have a die shot.

You can find it just as easily, too.

And because of this, I do think Nvidia knew long before AMD's release that GK110 was not possible, and as such it wasn't meant to be.

Likewise, AMD knew that Steamroller and Excavator were coming, and that they are the "big daddy" of the Bulldozer design, but that doesn't mean that Bulldozer or Piledriver are mid-range chips.

Click to expand...

Everybody knows the specs and has seen die shots, and has for a long time already. That means nothing to the discussion at hand. Specs and die shots say nothing about whether it is feasible or not (it IS; it's already been created AND shipped to customers), and they certainly say nothing regarding the intentions of Nvidia.

If GK100/110 was so unfeasible as a gaming card that it was never meant to be one, they would have designed a new chip to fill in that massive ~250 mm^2 gap that exists between GK104 and GK110, instead of using GK110 as the refreshed high-end card. If GK110 were an HPC chip, it wouldn't have so many gaming features wasting space either.

Click to expand...

I dunno. You know, the one thing that Nvidia is really good at is getting the most out of every R&D dollar, and designing another chip kinda goes against that mantra.

I mean, it's like dropping the hot clock. They knew they had to.

Keeping within the 300 W power envelope with the full-blown Kepler design was obviously not possible, as Fermi proved, IMHO.

Jen-Hsun said "the interconnecting mesh was the problem" for Fermi. That mesh...is cache.

HPC is more money. WAY MORE MONEY. So for THAT market, yes, a customized chip makes sense.

See, Fermi was the original here. GF100 is the original, NOT GK100 or GK110.

If Nvidia had started with Kepler as a new core design, then I would have sided with you guys, for sure, but really, to me, Kepler is a bunch of customized Fermi designs, customized in such a way as to deliver the best product possible for the lowest cost, for each market.

You may think the Steamroller analogy is wrong here, but to me, that is EXACTLY what Kepler is. And you know what, Nvidia says the same thing, too.

The hot clock to me, and the lack of DP functionality, say it all. The hot clock lets you use less die space, but requires more power. DP functionality also requires more power, because it requires more cache. Dropping 128 bits of memory control...again, to save on power...

If the current GTX 680 was meant to be a mid-range chip, then after doing all that to save on power, damn, Nvidia really does suck hard.

Sure, and if you click the check box to enable OC, or break the seal on the switch, and then it bricks? I would love to have heard how GK104 could do with the dynamic nanny turned off... While you may believe it might find 1.4 GHz, would it live on for any duration?

I speculate it wouldn't, or Nvidia wouldn't have put such restrictions in place if there weren't good reasons. Will they still have it that way for the next generation? Yes, almost assuredly, but at that point better TDP and improved clock and thermal profiles will mean there's no gain over operating at an exaggerated fixed clock. I think for mainstream parts both sides will continue to refine boost-type control. It provides them the best of both worlds: lower claimed power usage, with the highest FPS return.
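A caricature of that boost-type control: a governor that steps the clock up from base only while estimated power and temperature stay under their limits. This is an illustrative toy model, not either vendor's actual algorithm, and every number in it is made up.

```python
def boost_clock(base_mhz, max_boost_mhz,
                power_w, power_limit_w,
                temp_c, temp_limit_c,
                step_mhz=13):
    """Toy boost governor: step the clock up from base toward the boost
    ceiling while both the power and thermal budgets have headroom."""
    clock = base_mhz
    while clock + step_mhz <= max_boost_mhz:
        # Crude assumption for this sketch: power scales linearly with clock.
        projected_power = power_w * (clock + step_mhz) / clock
        if projected_power > power_limit_w or temp_c >= temp_limit_c:
            break
        power_w, clock = projected_power, clock + step_mhz
    return clock

# Plenty of headroom: the clock walks up to the boost ceiling.
print(boost_clock(1006, 1110, power_w=150, power_limit_w=195,
                  temp_c=70, temp_limit_c=95))
# Already at the power limit: the clock stays at base.
print(boost_clock(1006, 1110, power_w=195, power_limit_w=195,
                  temp_c=70, temp_limit_c=95))
```

Either way the vendor gets to advertise the lower base-clock power figure while shipping whatever FPS the budget allows, which is the "best of both worlds" point above.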

I love how everyone is saying AMD will have a hard time competing. Did everyone forget that the yawn that is the 7970 GHz Edition still beat out the GTX 680, and that this gen, for the most part, each company is equal at the typical price points?

The 8970 is expected to be 40% faster than the 7970.

The GTX 780 is expected to be 40-55% faster than the 680.

Add in overclocking on both and we end up with the exact same situation as this generation. So in reality it just plain doesn't matter, lol. Performance is all I care about, along with who gets product onto store shelves and from there into my hands. It doesn't matter who's fastest if it takes six months for stock to catch up.

Click to expand...
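For what it's worth, the arithmetic behind the quoted rumor is simple to sketch. Normalizing the GTX 680 to 1.00 and treating the 7970 GHz Edition as roughly equal to it (an assumption, and the uplift figures themselves are pure rumor):

```python
# Hypothetical relative-performance arithmetic using the rumored uplifts.
gtx680 = hd7970_ghz = 1.00

hd8970 = hd7970_ghz * 1.40       # rumored +40%
gtx780_low = gtx680 * 1.40       # rumored +40%...
gtx780_high = gtx680 * 1.55      # ...up to +55%

# The rumored next-gen parts land roughly on top of each other again,
# which is the "exact same situation as this generation" point above.
print(f"8970: {hd8970:.2f}  780: {gtx780_low:.2f}-{gtx780_high:.2f}")
```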

It's always entertaining when fanboys get butthurt over people's opinions. The fact is, a stock reference 7970 is slower than a stock reference 680; I know it hurts you to accept this fact, but it is true. As for the GHz Edition, compare it to a 680 that is factory overclocked and the result is the same. My 680 Classified walks all over any 7970, so talk less and check facts more.

Guys, we should only look to cadaveca now for tech rumors, this guy obviously knows what's up and we can't trust dozens of other knowledgeable people/sites. They all just make stuff up and obviously only release info to get page views.

HPC is more money. WAY MORE MONEY. So for THAT market, yes, a customized chip makes sense.

See, Fermi was the original here. GF100 is the original, NOT GK100 or GK110.

If Nvidia had started with Kepler as a new core design, then I would have sided with you guys, for sure, but really, to me, Kepler is a bunch of customized Fermi designs, customized in such a way as to deliver the best product possible for the lowest cost, for each market.

You may think the Steamroller analogy is wrong here, but to me, that is EXACTLY what Kepler is. And you know what, Nvidia says the same thing, too.

The hot clock to me, and the lack of DP functionality, say it all. The hot clock lets you use less die space, but requires more power. DP functionality also requires more power, because it requires more cache.

Click to expand...

As for Kepler being Fermi: sure, and then Fermi is the Tesla arch (GT200).

If we go by similarities, as in "they look the same to me with a few tweaks," we can go back to the G80 days. Same on AMD's side. But you know what? They have very little in common. Abandoning hot clocks is not a trivial thing. Tripling the number of SPs on a similar transistor budget is not trivial either, and it denotes exactly the opposite of what you're saying. Fermi and Kepler schematics may look the same, but they aren't the same at all.

As to the rest: it makes little sense to think that GK104 is the only thing they had planned. In previous generations they created 500 mm^2 chips that were 60-80% faster than their previous gen, and AMD was close, 15-20% behind. But on this gen they said: "You know what? What the heck. Let's create a 300 mm^2 chip that is only 25% faster than our previous gen. Let's make the smallest (by far) jump in performance that we've ever had; let's just leave all that potential there. Later we'll make GK110 a 550 mm^2 chip, so we know we can do it, and it's going to be a refresh part so it IS going to be a gaming card, but for now, let's not make a 450 mm^2 chip, or a 350 mm^2 one. No, no sir, a 294 mm^2 one, with a 256-bit interface that will clearly be the bottleneck even at 6000 MHz. Let's just let AMD rip us a new one..."

EDIT: If GK110 had not been fabbed and shipped to customers already, you'd have the start of a point. But since it has already been shipped, it means that it's physically possible to create a 7.1 billion-transistor chip and make it economically viable (the process hasn't changed much in 6 months). So like I said, something in the middle, like a 5 billion-transistor and/or 400 mm^2 chip, would be entirely possible, and Nvidia would have gone with that, because AMD's trend has been upwards in regards to die size, and there's no way in hell Nvidia would have tried to compete with a 294 mm^2 chip when they knew 100% that AMD had a bigger chip AND they have historically been more competent at fitting more in less area. Nvidia can be a lot of things, but they are not stupid and would not commit suicide.
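The die-size arithmetic in that argument is easy to check. A quick sketch using the commonly reported figures (GK104 ≈ 294 mm^2 / 3.54 billion transistors, GK110 ≈ 550 mm^2 / 7.1 billion; treat all of these as approximate):

```python
# Approximate, commonly reported die sizes and transistor counts.
gk104_area_mm2, gk104_transistors = 294.0, 3.54e9
gk110_area_mm2, gk110_transistors = 550.0, 7.1e9

# The "massive gap" between the two dies:
gap_mm2 = gk110_area_mm2 - gk104_area_mm2
print(f"die-area gap: {gap_mm2:.0f} mm^2")  # ~256 mm^2, i.e. the "~250 mm^2 gap"

# Transistor density is similar on both dies, as expected for the same
# 28 nm process, so a mid-sized ~400 mm^2 / ~5 billion-transistor chip
# would sit comfortably between them.
for name, tr, area in [("GK104", gk104_transistors, gk104_area_mm2),
                       ("GK110", gk110_transistors, gk110_area_mm2)]:
    print(f"{name}: {tr / area / 1e6:.1f} Mtransistors/mm^2")
```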

Guys, we should only look to cadaveca now for tech rumors, this guy obviously knows what's up and we can't trust dozens of other knowledgeable people/sites. They all just make stuff up and obviously only release info to get page views.

Click to expand...

Yep.

The fact that you can't ignore that bit says something.

What, I cannot speculate myself?

And when you can't attack my points, you go after my character? lulz.

As if I want to be the source of rumours. Yes, I want to be a gossip queen.

Like, do you get that? I'm not the one who posted the news. BTA didn't either; he just brought it here for us to discuss...

These same sites you trust get it wrong just as often as they get it right. Oh yeah, Bulldozer is awesome, smokes Intel outright...yeah, that worked out...

The HD 7990 from AMD in August...but it was PowerColor...

Rumours are usually only part-truths, so counting them all as fact...is not my prerogative.

If we go by similarities, as in "they look the same to me with a few tweaks," we can go back to the G80 days. Same on AMD's side. But you know what? They have very little in common. Abandoning hot clocks is not a trivial thing. Tripling the number of SPs on a similar transistor budget is not trivial either, and it denotes exactly the opposite of what you're saying. Fermi and Kepler schematics may look the same, but they aren't the same at all.

As to the rest: it makes little sense to think that GK104 is the only thing they had planned. In previous generations they created 500 mm^2 chips that were 60-80% faster than their previous gen, and AMD was close, 15-20% behind. But on this gen they said: "You know what? What the heck. Let's create a 300 mm^2 chip that is only 25% faster than our previous gen. Let's make the smallest (by far) jump in performance that we've ever had; let's just leave all that potential there. Later we'll make GK110 a 550 mm^2 chip, so we know we can do it, and it's going to be a refresh part so it IS going to be a gaming card, but for now, let's not make a 450 mm^2 chip, or a 350 mm^2 one. No, no sir, a 294 mm^2 one, with a 256-bit interface that will clearly be the bottleneck even at 6000 MHz. Let's just let AMD rip us a new one..."

Click to expand...

Well, that's just it. This is complicated stuff.

I am not saying at all that GK104 was the only thing; it isn't. But GK110 was never meant to be a GTX part. Kepler is where GeForce and Tesla become truly separate products.

And yeah, it probably did work exactly like that: 300 mm^2 being the best they could get IN THAT SPACE, since die size dictates how many chips they can get per wafer. You know, designs do work like that, so they can optimize wafer usage...right?
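The chips-per-wafer point can be illustrated with a rough gross-die estimate for a 300 mm wafer. This is only a back-of-the-envelope formula (wafer area over die area, minus an edge-loss term); real counts depend on die aspect ratio, scribe lines, and defect yield:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough gross dies per wafer: wafer area divided by die area, minus
    an edge-loss term proportional to the wafer circumference."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical comparison at commonly reported die sizes: the smaller
# GK104-sized die yields roughly twice as many candidates per wafer.
print(dies_per_wafer(294))  # GK104-sized die
print(dies_per_wafer(550))  # GK110-sized die
```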

Oh gosh. Before you say that one more time, can you please explain, at least once, why it has so many units that are completely useless in a Tesla card?

Looking at the whitepaper, anyone who knows a damn about GPUs can see that GK110 has been designed to be a fast GPU as much as it's been designed to be a fast HPC chip. Even GF100/110 was castrated in that regard compared to GF104, and G80 and G9x had the same kind of castration, but in Kepler, the family where "GeForce and Tesla become truly separate products," they chose to maintain all those unnecessary TMUs, tessellators, and geometry engines.

- If GK104 was at least close to 400 mm^2, your argument would hold some water. At 294 mm^2, it does not.
- If GK104 was 384 bits, your argument would hold water. At 256 bits, it does not.
- If GK110 didn't exist and had not been released 6 months after GK104 did...
- If GK110 had no gaming features and wasn't used as the high-end refresh card...
- If GK104 had been named GK100... you get it.

Oh gosh. Before you say that one more time, can you please explain, at least once, why it has so many units that are completely useless in a Tesla card?

Click to expand...

Because all those things are needed for medical imaging. HPC products still need 3D video capability too. Medical imaging is a vast market, worth billions. 3D is not gaming. That's where you're missing some things.

And no, I do not agree with the assessment that GK110 was intended to be a "fast GPU". The needed die size says that is not really possible.

But since it's for HPC, where precision is prioritized over speed, that's OK, and lower clocks, but greater functionality, make sense.

However, for the desktop market, where speed wins overall, the functionality side isn't so much needed, so it was stripped out. This makes for two distinct product lines, with staggered releases, and hence not competing with each other.

I mean, likewise, what do all those HPC features have to do with a gaming product?

Because all those things are needed for medical imaging. HPC products still need 3D video capability too. Medical imaging is a vast market, worth billions. 3D is not gaming. That's where you're missing some things.

Click to expand...

Medical imaging is not HPC. Maybe you should have been more clear. That being said, Nvidia has announced a GK110-based Tesla, but no Quadro:

And an HPC chip has never been profitable on its own, and I don't think it is right now either.

Click to expand...

I bet nVidia would disagree.

For me, medical imaging is part of the HPC market. Precise imaging isn't needed just for medical uses either; anything that needs an accurate picture, from oil and gas exploration to military uses, falls under the same umbrella. Both Tesla and Quadro cards are meant to be used together, building an infrastructure that can scale to customer demands, called Maximus. If you need more rendering power, say for movie production, you've got it; if you need more compute, for stock market simulation, that's there too. So I fail to agree that you've posted much that supports your stance there. Nvidia doesn't build single GPUs...they build compute infrastructure.

Welcome to 2012.

With this second generation of Maximus, compute work is assigned to run on the new NVIDIA Tesla K20 GPU computing accelerator, freeing up the new NVIDIA Quadro K5000 GPU to handle graphics functions. Maximus unified technology transparently and automatically assigns visualization and simulation or rendering work to the right processor.

Click to expand...

Did you read that press release?

I mean, that whole press release is Nvidia claiming it IS profitable, or they wouldn't be marketing towards it.

In fact, that press release kinda proves my whole original point, now doesn't it? GK104 for imaging (3D, Quadro, and GeForce), GK110 for compute (Tesla).