2010rig, read the damn graph on Tom's Hardware. I don't care what they wrote to downplay their findings. It's black and white and clear as day.
I feel like I'm talking to a bunch of people with their brains turned off. Two sites have confirmed those findings on power consumption. Nothing else needs to be said.
/done

If you say so. The numbers, the graphs, and commentary don't lie.

I even highlighted the graph for you, I don't know what else you want.

If you choose to believe that the 680 consumes more power than a 6990, I don't know what else to say.

Actually I don't know what chart you are looking at, but Tom's Hardware's chart clearly shows the GTX 680 sitting between the 7970 and 7950, exactly where it's supposed to be. Also, if you actually read THIS page (which is where that chart comes from) and the page that follows it, you will see that Tom's clearly states that the 680 has considerably better performance per watt than the 7970. I think you need to get your eyes checked, because you are dead wrong on this one.
EDIT: crap 2010rig, you beat me to it!
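For what it's worth, the performance-per-watt metric being argued over here is nothing mysterious: average FPS divided by measured board power. A quick Python sketch for anyone who wants to run the numbers themselves; the FPS and wattage values below are placeholders, not figures from Tom's or TweakTown:

```python
# Performance-per-watt sanity check.
# The numbers below are PLACEHOLDERS; substitute the average FPS and
# measured board power from whichever review you trust.
cards = {
    # name: (average_fps, board_power_watts)
    "GTX 680": (60.0, 170.0),
    "HD 7970": (58.0, 210.0),
}

for name, (fps, watts) in cards.items():
    # Frames rendered per second for every watt the board draws.
    print(f"{name}: {fps / watts:.3f} FPS/W")
```

Whichever review's numbers you plug in, at least then the argument is about the inputs and not about who misread which colored bar.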

I've updated my post. The colors are so close between the HD 6990 and the GTX 680 that it was easy to get them mixed up. The TT review is an anomaly, and based on the power findings of other cards, it really isn't a reliable source anymore. How sad.

There, I said it. Enjoy whatever little bit of pleasure you may get out of it.

I gain no pleasure from proving you wrong; just take the time to read the graphs, and the actual reviews' content, next time.

Now I'm sure we'll agree that TweakTown's numbers are suspect, and you can stop going around talking about the high power consumption of the 680.

Anyway... Is this Boost feature going to affect overclocking at all? Will increasing voltage and core clock work the same as before, just with Boost kicking in or not depending on how high the user sets the overclock?
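From what the launch coverage describes, GK104 overclocking works through a clock offset: you shift the whole base/boost curve upward, and GPU Boost still adds bins on top whenever the card is running under its power target. Here's a toy Python sketch of that behavior; the bin size, power figures, and step logic are illustrative guesses, since Nvidia's actual algorithm is closed and driver-controlled:

```python
# Toy model of offset overclocking under a GPU Boost style scheme.
# BASE_CLOCK is the GTX 680 reference clock; everything else below
# (power target, bin size, watts per bin) is an illustrative guess.

BASE_CLOCK = 1006    # MHz, GTX 680 reference base clock
POWER_TARGET = 195   # W, illustrative power target
BIN_MHZ = 13         # MHz per boost bin (illustrative)
W_PER_BIN = 3.0      # assumed power cost of each extra bin (illustrative)

def effective_clock(offset_mhz: int, current_draw_w: float) -> int:
    """Clock the card would settle at for a given user offset.

    The offset shifts the whole curve; boost then stacks bins on top
    for as long as estimated draw stays under the power target.
    """
    headroom_w = max(0.0, POWER_TARGET - current_draw_w)
    bins = int(headroom_w // W_PER_BIN)
    return BASE_CLOCK + offset_mhz + bins * BIN_MHZ

# Stock vs. a +100 MHz offset in a light 160 W workload:
print(effective_clock(0, 160))    # boosts well past the base clock
print(effective_clock(100, 160))  # same boost behavior, shifted up 100 MHz
```

If it really works like this, raising the offset shouldn't fight Boost at all; the two just add together until the power target caps things.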

Folks, please always take the TOS and professional initiative into consideration when posting on overclock.net. You are expected to adhere to both. In addition, please drop the stereotypes, name calling, posting of meme pictures, and calling each other "trolls". This is a discussion board. Please respect each other and each other's opinions and values.
Thank you.

Well now that all of that power consumption drama is over with, on to some useful conversation.

Does anybody else agree that this is a wonderful direction for Nvidia? Think about it: they have been building massive, inefficient, albeit ambitious, dies for their flagships for years now. This time, though, they have developed an architecture so efficient that a part which, by all of Nvidia's previous design precedents, was intended to be midrange performs on the same level (in gaming at least; the jury is still out on compute) as its competitor's considerably larger and more muscular flagship part. This is a lean chip, and it's a really dense architecture, on a level we have never seen in the graphics industry.

Now, this may not have been an intentional strategy change, with GK100 being scrapped and thus necessitating GK104's promotion to flagship, but hopefully it will show Nvidia that the massive, inefficient die method isn't the best way for them anymore. Nvidia has always pushed the limits with their designs, but this time they have channeled that energy into pushing the performance-per-watt limit. Imagine if, instead of the massive GK100 die, Nvidia had worked on a chip of similar size to Tahiti. A Kepler chip on a die that size would have been the indisputably dominant chip this generation, and being smaller than GK100, it would have been producible in enough volume to bring to market right now.

I hope that with Maxwell, Nvidia keeps pushing the efficiency of the architecture further, and then goes for a middle-of-the-road die size for their flagship. It would be nice for them to have a chip that isn't too ambitious to be brought to market on time, for a change.
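On the "lean and dense" point, it's easy to put rough numbers on it once you have die sizes and transistor counts. A quick Python sketch; the figures below are the commonly cited launch numbers, so double-check them against the reviews before leaning on them in an argument:

```python
# Transistor density comparison from commonly cited launch figures.
# Verify these inputs against the reviews; they are from memory.
chips = {
    # name: (transistors_in_millions, die_area_mm2)
    "GK104 (GTX 680)": (3540, 294),
    "Tahiti (HD 7970)": (4310, 352),
}

for name, (mtrans, area) in chips.items():
    density = mtrans / area  # million transistors per mm^2
    print(f"{name}: {area} mm^2, {density:.1f} Mtransistors/mm^2")
```

Both chips are on TSMC's 28 nm process, so if these figures hold, raw density comes out close; GK104's "leanness" shows up more in total area and power than in transistors per square millimeter.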

I would expect it with the release of the 720 at this point. It won't come with Windows 8, because MS would be talking about it already. Really, though, no developers are even taking advantage of DX11 yet. We have not had a single game made from the ground up for DX11. I'm a little foggy on this, but I don't think we've even had a from-the-ground-up DX10 game, so devs still have a long way to go with 11.

whAaa?

The 720 is already being murmured about, and I have Windows 8 and can run Xbox in VDM.

Secondly, there are many DX11 games out, where have you been...? <-- dERP!

This...^^ (the "wonderful direction for Nvidia" post, that is).

All most people are saying is that the Nvidia GTX 680 is clearly the better VALUE...!

The GTX 680 just rewrote the entire discrete GPU market!! Nobody is saying that the AMD HD 7970 sucks; it's just that it certainly is not as good a value as the GTX 680.