I love how everyone is saying AMD will have a hard time competing. Did everyone forget that the 7970 GHz Edition still beat out the GTX 680, and that this gen, for the most part, each company is equal at the typical price points?

8970 is expected to be 40% faster than the 7970

GTX 780 is expected to be 40-55% faster than the 680

Add in overclocking on both and we end up with the exact same situation as this generation. So in reality it just plain doesn't matter, lol. Performance is all I care about, along with who gets product onto store shelves and from there into my hands. It doesn't matter who's fastest if it takes 6 months for stock to catch up.

Click to expand...

It doesn't matter. If you look at the most recent performance numbers they trade blows. That's how it has been for the last 2-3 generations. The only reason the 680 truly looks like the better card is that it consumes a lot less power than the 7970 for the same performance range.

Unless I'm seriously misunderstanding you, you're arguing that GK110 is/was scrapped due to inherent problems with its die size, even though we're sitting here reading and commenting on an article proposing that GK110 will be released just fine for the 7xx series cards.

That makes no sense.

Much more likely is that their yields on this chip sucked last year, so they bumped GK104 up a tier from 660 Ti to 680 while putting GK110 on the back burner until they got the yield issues fixed.

We'll never know with 100% certainty, but I think it makes better sense of the available data that the original GTX 6xx lineup was to include both GK110 (680/670?) and GK104 (660 Ti/660).

Click to expand...

That's right, 'no one can say for sure', but I am inclined to believe your case. Also, at least on paper, i.e. in the 'plans', Nvidia could possibly have included GK1x0 (100 or 110) in the GTX 6xx line-up, but since it's a big chip (500+ mm²) and TSMC's 28nm process was in its nascent stages, Nvidia might have changed plans anticipating poor yields (they might have done a pre-production study too).

Note how the AMD chip has nearly 33% more transistors, but is barely physically larger than GTX 680.

If nVidia could have fit more functionality into the same space, they would have.

...

Click to expand...

You seem to make a valid point, sir, but I am not convinced just by looking at the pics (they are zoomed at slightly different levels, judging by the match-stick size). Moreover, based on a quick calculation:
Tahiti ≈ 365 mm²
GK104 ≈ 295 mm²

Difference in size ≈ 24%
Difference in # of transistors ≈ 33%

Looking at the above numbers, Tahiti does pack more transistors, but the die sizes are not that close either.
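A quick sanity check of those percentages, using only the approximate figures quoted in this thread (a sketch, not official specs; the 33% transistor-count difference is taken from the earlier "nearly 33% more transistors" claim):

```python
# Rough sanity check of the die-size and transistor-count comparison above.
# Figures are the approximate ones quoted in this thread, not official specs.
tahiti_mm2 = 365.0
gk104_mm2 = 295.0
transistor_diff = 0.33  # "nearly 33% more transistors" (claimed earlier)

size_diff = tahiti_mm2 / gk104_mm2 - 1          # should land near the 24% above
density_ratio = (1 + transistor_diff) / (1 + size_diff)

print(f"Tahiti is {size_diff:.0%} larger but packs {transistor_diff:.0%} more transistors,")
print(f"so roughly {density_ratio:.2f}x the transistor density of GK104.")
```

Taking both claims at face value, Tahiti's density advantage works out to only about 7-8%, which fits the point that the two dies are not nearly the same size.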

You seem to make a valid point, sir, but I am not convinced just by looking at the pics (they are zoomed at slightly different levels, judging by the match-stick size). Moreover, based on a quick calculation:
Tahiti ≈ 365 mm²
GK104 ≈ 295 mm²

Difference in size ≈ 24%
Difference in # of transistors ≈ 33%

Looking at the above numbers, Tahiti does pack more transistors, but the die sizes are not that close either.

Click to expand...

Yeah, and Tahiti has a 384-bit bus, so it really needs to be physically bigger, for more connections to the PCB for the added RAM chips.

See, to me, a mid-range chip is under 250 mm², like the GTX 660 and HD 7870. All these claims of the GTX 680 being mid-range do not make sense.

Yeah, and Tahiti has a 384-bit bus, so it really needs to be physically bigger, for more connections to the PCB for the added RAM chips.

See, to me, a mid-range chip is under 250 mm², like the GTX 660 and HD 7870. All these claims of the GTX 680 being mid-range do not make sense.

Click to expand...

That's because you a priori exclude the possibility of GK110's existence/plausibility based on size. If GK110 exists as stated, then it defines what the high-end chip is, and GK104 is comfortably midrange in comparison.

What I am denying is the ability to cool a chip that large, yes.

I'm not denying it might have been planned... but reality says, since chips take like 2 years to design, that they knew since day one it wasn't going to happen. They knew LONG before those "claims" came out that the GTX 680 was the chip we got.

GK110 or GK100 or whatever... was NEVER meant to be the GTX 680. Nor was it meant to compete with the current HD 7970.

Logic would dictate that the best GK110s are destined to end up as Tesla/Quadro parts, which would leave the GeForce parts as either salvage and/or high-leakage parts. In either event I wouldn't expect the GTX 780 to be widely available, which is why the pricing is a head-scratcher. GTX 780 (GK110) @ $550 (or more, depending on % of the full die)

Click to expand...

That's going to be the question... has TSMC got their process to the point where it makes parts that are viable for gaming enthusiasts and not out of bounds on power? I think, with gleanings from Tesla and a more tailored second-gen Boost mapping, this article is saying they can, but I would say it won't be $550. Something tells me these will all be like GTX 690s, with Nvidia only putting out a singular design and construction as a factory release.

I just don't believe these performance-increase claims. They're going to double the transistor count but won't get even 50% more speed. GK110 will be highly optimized for DP compute.
GK110 will definitely shine in some selected benchmarks, but there will be a lot of die area that won't be touched by even the latest games.
Big Kepler just makes no sense as a gaming card: huge die size, huge power consumption and a huge price tag. GK114 might be worth waiting for...

Click to expand...

Nvidia can minimize the shortcomings and extol the virtues to attain a card that enthusiasts will exculpate, just to exclaim its presence in the market, proclaiming how great thou art! (for $600 and a 280W TDP)

They were conservative in order to get better yields. Essentially, most chips can do 1050, yes, but not all at the proper voltage or TDP level. They also have to harvest chips for the 7950: lower clocks meant more usable chips, and more usable chips means greater volume to put on store shelves.

Regardless, the refresh will probably see Nvidia take the lead, but not by a whole lot; they have more room to play with when it comes to TDP than AMD does right now.

Click to expand...

I think it was always a TSMC issue that caused both companies' woes, but yes, once Nvidia got good silicon, GK104 surprised them with what could be wrung out of it, though they had to use Boost to ensure the chips wouldn't commit hara-kiri. This time around Boost will get more aggressive and more tolerant of heat and power, so that's where the gains will really come from, but it will effectively quell any OC'ing.

See, to me, a mid-range chip is under 250 mm², like the GTX 660 and HD 7870. All these claims of the GTX 680 being mid-range do not make sense.

Click to expand...

If you expect mid-range chips to have sub-250 mm² die sizes, then GF104 (GTX 460) and even GF114 were well over 300 mm². As for me, I am going by the 'naming' convention of the Fermi gen: it had GF100 and GF110 as the high-end chips, so the same could be said for Kepler (knowing that GK110 exists)...

Anyways, with due respect, I wish to end it here; to each his own (it's all speculation)..

It may not make sense to you, but it makes all the sense in the world. You are arguing against history. Are you going to suggest that GF104 was not a midrange chip? It was 332 mm², significantly bigger than GK104 and definitely bigger than your 250 mm² figure.

All Nvidia high-end chips (GPU + HPC) of past generations have been close to or bigger than 500 mm²: G80 was 484 mm², GT200 576 mm², GF100 520 mm².

Time for a reality check, man. GK100/110 IS the high-end chip, a chip that Nvidia decided was not economically feasible these past months, when TSMC supply was so constrained and yields (for everybody) were not good. End of story. It really is. There's no problem with it other than that, and the fact that being bigger it will have lower yields and fewer dies per wafer, which is nothing Nvidia hasn't done before or is afraid of. GK106 took long to release too. Was that because it was not possible? No, it was because it was economically less "interesting" than GK104, and so was GK110. If they could win with a 294 mm² chip there was absolutely no reason to release the big one and take lower margins, as they had to with the first Fermi "generation". HPC moves slower and relies on designs like the Oak Ridge supercomputer, which would not have been ready back then, so all the more reason to delay.

Oh, I never meant to say that my expectations are the same as what the industry sets, but yes, if a 28nm, and let me repeat... a 28nm chip is over 250 mm², then no, I would not consider it a mid-range chip. If you need more space than that (and neither AMD nor Nvidia did), then you've got some serious engineering issues, for sure.

Of course bigger processes took up more space.

Silly.

I never said GK100 or GK110 is NOT the high-end chip... it sure is... but it was NEVER meant to be the GTX 680.

TSMC had yield issues. That is comical. Yeah, blame the infant technology.

Of course it was horrible. nVidia KNEW it would be, as did AMD...and they dealt with it, as they have with every process.

I like that you repeatedly, a priori, dismiss dozens and dozens of reputable stories/rumors from the past year for no real reason other than your own theories.

Click to expand...

Stories and rumours. Yep.

Except, of course, as a reviewer, I do have a bit more info than the average joe, although, not as much as many other reviewers do, I'm sure.

See, the difference between me and other reviewers is that I do this for fun, as a hobby... and not for cash.

I'm not posting news for hits, because that garners money for the site with ads...

TPU isn't built upon that, at all.

This is speculation, after all, not fact, so yeah, I offer a different perspective... So?

At the end of the day, it's me playing with the hardware NOW that you guys want to buy IN THE FUTURE. I don't really care who has the faster chips, who is cheaper, or what you buy... this stuff just shows up on my doorstep, at ZERO cost.

I'm just not afraid to be wrong. In the future, we can say "look, this was right, and this wasn't"...and I won't care if I'm wrong. You might...but I won't.

Pulling rank as a reviewer doesn't mean rumors/stories are untrustworthy just because you don't believe them and/or they don't fit your ideas of what is or is not going on. Maybe if we were talking about some isolated or crazy things, but not when we're talking about widespread info.

Pulling rank as a reviewer doesn't mean rumors/stories are untrustworthy just because you don't believe them and/or they don't fit your ideas of what is or is not going on. Maybe if we were talking about some isolated or crazy things, but not when we're talking about widespread info.

Click to expand...

If I had actual info about an unreleased product, I wouldn't be able to talk about it.

That's where me being a reviewer is important.

Who cares that I review stuff. It's not important, really. Like, really... big deal... I get to play with broken stuff 9/10 times, when it's pre-release. I've said it before, I'd much rather have stuff later, but I guess some OEMs value my feedback prior to launch. That's like the whole "ES is better for OC" BS.

The fact that I do that for them, for free... well... it's not as big of a deal as most seem to think it is. I actually think it's kind of the opposite...

At the same time though, those that DO have info about unreleased products, like myself, also cannot say much, except what they are allowed, or their info cannot be real.

THAT is a fact I learned as a reviewer, that many seem to not know. That is just how it works. Either this info is force-fed, or it's fake.

If I had actual info about an unreleased product, I wouldn't be able to talk about it.

That's where me being a reviewer is important.

Who cares that I review stuff. It's not important, really. Like, big deal... I get to play with broken stuff 9/10 times, when it's pre-release. I've said it before, I'd much rather have stuff later, but I guess some OEMs value my feedback prior to launch.

The fact that I do that for them, for free... well... it's not as big of a deal as most seem to think it is.

At the same time though, those that DO have info about unreleased products, like myself, also cannot say much, except what they are allowed, or their info cannot be real.

THAT is a fact I learned as a reviewer, that many seem to not know. That is just how it works.

Click to expand...

Well, that's how they improve it later, but honestly it's also for PR once the NDA is lifted, bro.

I never said GK100 or GK110 is NOT the high-end chip... it sure is... but it was NEVER meant to be the GTX 680.

Click to expand...

Explain why GK110 wastes so much space on 240 texture mapping units, ROPs, tessellators and whatnot, if it was never meant for a high-end gaming card?

TSMC had yield issues. That is comical. Yeah, blame the infant technology. Of course it was horrible. nVidia KNEW it would be, as did AMD... and they dealt with it, as they have with every process.

Click to expand...

Of course they dealt with it. They released the mid-range chip as the high-end card knowing that it would be able to compete with AMD's fastest chip.

No one's blaming the "infant tech". Both AMD and Nvidia design their chips according to TSMC's guidance on the process. They have to, since they have to design the chips long before TSMC is ready for production. They design around that guidance and weigh the feasibility and profitability based on it. Guidance is one thing and reality is often very different. Of course AMD, having been a fabbed chip maker in the past, knows better than Nvidia how to deal with that. But we are not discussing that, so to the point: trying to deny that volume and yield issues are TSMC's problem is stupid. The guidance for the process and reality didn't match, and everyone has suffered from it, be it Nvidia, Qualcomm or AMD, even if AMD has not been as vocal. Each company has very different things to address in their conference calls, and trying to extract any conclusions from whether they talk about TSMC issues or not is, again, stupid. AMD is in far more trouble and has much more to excuse than having to explain why profit margins on the GPU business are slightly lower than expected.

So imagine we are Nvidia. 28nm is not as good as it was "promised" to be. We get close to Kepler release dates. Volume is not good, and yields are not good either, if not worse than at 40nm, as Jen-Hsun Huang said. Nvidia had 2 options: repeat GF100, or release GK104 as the high end. The answer is simple. On a wafer you can have 201 GK104 die candidates, versus ~100 GK110 candidates. Knowing that GK104 would be close to Tahiti performance or beat it, it's an easy choice*: GK104 at $500. There was no price point at which GK110 would have been more profitable, no matter how much faster than the HD 7970 it could have been. With the severely low 28nm volume, they would never have been able to sell enough GK110 cards to be more profitable than they have been with GK104, even if they had achieved 100% market share.

* More so when you know that the next node will not be ready until 2-3 years later. You'll have to do a refresh, and you'll have to make it appealing, i.e. faster, so by doing what they did they can kill two birds with one stone.
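For what it's worth, those die-candidate counts line up with the classic gross-dies-per-wafer approximation. A minimal sketch, assuming a 300 mm wafer and, hypothetically, a ~550 mm² GK110-class die (the thread only says "500+ mm²"):

```python
import math

def dies_per_wafer(die_mm2: float, wafer_diam_mm: float = 300.0) -> int:
    """Gross dies per wafer: wafer area over die area, minus an edge-loss
    correction for partial dies at the wafer rim. Yield, scribe lines and
    the edge-exclusion zone are all ignored in this rough estimate."""
    wafer_area = math.pi * (wafer_diam_mm / 2) ** 2
    edge_loss = math.pi * wafer_diam_mm / math.sqrt(2 * die_mm2)
    return int(wafer_area / die_mm2 - edge_loss)

print(dies_per_wafer(294))  # ~201 GK104 candidates, matching the figure above
print(dies_per_wafer(550))  # ~100 candidates for a hypothetical 550 mm² die
```

And that roughly 2:1 candidate ratio is before yield; since defect-limited yield falls off sharply with die area, the gap in sellable dies would be even wider, which is exactly the economic argument being made.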

What I'm saying is that your status as a reviewer gives no inherent credibility to your dismissal of tech rumors/stories (sorry to break it to you...). That might be true if the stories came from people clueless about tech, or if everyone well informed about GPUs agreed with you, but that's not the case. When you get corroborating evidence from many reliable and semi-reliable tech sources, there's something to it.