My Sapphire R9 280X Toxics (2 of them) arrive on Saturday. Best bang per buck out there; I'm going to clock them both up by another 10%, and they'll then be faster than a 780 (oh, and the pair cost the same as a single RIP-OFF 780). I've been with Nvidia as long as I can remember, and I'm sick of being ripped off!

Goodbye Nvidia, you're not going to be laughing all the way to the bank at my expense again!

Edited by Mombasa69 - 10/21/13 at 2:51am

If you get a 47% increase in fps from a 47% increase in core clock and a 30% increase in memory clock, then the scaling is not linear.

Sure it is. In this case, the core was the bottleneck, but memory bandwidth was not, nor was anything off the card.
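The claim above is easy to check numerically. A minimal sketch (the function name and all numbers here are illustrative, not taken from anyone's benchmarks in this thread): scaling is "linear" when the fps gain matches the clock gain, i.e. the efficiency is about 1.0.

```python
# Hedged sketch: compare fps gain to clock gain. An efficiency near 1.0
# suggests the component whose clock changed was the actual bottleneck.
def scaling_efficiency(fps_base, fps_oc, clock_base, clock_oc):
    """Observed fps gain divided by clock gain; ~1.0 means linear scaling."""
    fps_gain = fps_oc / fps_base - 1.0
    clock_gain = clock_oc / clock_base - 1.0
    return fps_gain / clock_gain

# Illustrative: a 47% core OC (1000 -> 1470 MHz) yielding a 47% fps gain
# (40 -> 58.8 fps) gives an efficiency of ~1.0, i.e. the core was the limiter.
print(scaling_efficiency(40.0, 58.8, 1000, 1470))  # ~1.0
```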

Quote:

Originally Posted by HeadlessKnight

And I got about a 24% increase in Metro Last Light and about 14.8% in Metro 2033, with GPU usage at 99% the whole time. In Skyrim (ENB) I got a 25% increase at those clocks. So your point is moot.
So performance vs. clock does not scale linearly; it depends on other factors too, such as the architecture itself and the game engine.

Using commonly reported GPU utilization percentages is a serious flaw in this argument.

If you want to prove you are not CPU (or system memory, or PCI-E, or HT/QPI, or whatever) limited, you need to change the CPU (or other component's) speed. If performance doesn't change, then it's not a limiting factor; but if it does, it likely is, no matter what anything else reports.

Surely the game engine can be limited by various factors, some of which may not be apparent without extensive testing, but if it's only limited by the GPU, increasing GPU performance cannot possibly result in anything but near-linear scaling. To imply otherwise is to say that a GPU's IPC varies with clock speed, which is certainly not the case.
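The test method above (change one component's speed and watch whether fps moves) can be sketched as follows; the function name and the 2% tolerance are my own assumptions, not from the thread:

```python
# Hedged sketch of the bottleneck-isolation test: underclock one component
# (CPU, system memory, PCI-E, etc.) and see whether fps actually drops.
def is_limiting_factor(fps_at_full_speed, fps_at_reduced_speed, tolerance=0.02):
    """True if slowing the component moved fps by more than the tolerance."""
    change = abs(fps_at_full_speed - fps_at_reduced_speed) / fps_at_full_speed
    return change > tolerance

# Illustrative: dropping the CPU clock barely moves fps (60.0 -> 59.5),
# so the CPU was not the limiting factor for this workload.
print(is_limiting_factor(60.0, 59.5))  # False
```

Note this tests each component directly instead of trusting a reported utilization percentage, which is exactly the objection raised above.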

Quote:

Originally Posted by Mombasa69

My Sapphire R9 280X Toxics (2 of them) arrive on Saturday. Best bang per buck out there; I'm going to clock them both up by another 10%, and they'll then be faster than a 780 (oh, and the pair cost the same as a single RIP-OFF 780). I've been with Nvidia as long as I can remember, and I'm sick of being ripped off!

Goodbye Nvidia, you're not going to be laughing all the way to the bank at my expense again!

I'd rather not have to deal with the headaches of multi-GPU setups.

Also: I'm not sure why you feel like you were being ripped off. Top-end GPUs have never been good bang for your buck, which was only amplified by the fact that AMD had no answer for it, so Nvidia could charge whatever they wanted.

Edited by Blindsay - 10/22/13 at 6:47am

Quote:

Originally Posted by Blameless

Sure it is. In this case, the core was the bottleneck, but memory bandwidth was not, nor was anything off the card.

You can't decide based on that. In most realistic situations, scaling won't be 100% versus core clock / memory clock unless there are other problems, such as a severe memory bandwidth bottleneck.

Quote:

Originally Posted by Blameless

Using commonly reported GPU utilization percentages is a serious flaw in this argument.

If you want to prove you are not CPU (or system memory, or PCI-E or, HT/QPI, or whatever) limited, you need to change CPU (or other) speed. If performance doesn't change, then it's not a limiting factor, but if it does, it likely is, no matter what anything else reports.

In the first test, for example, 1210/1625 gave me 45.87 fps, up from 34.4 on stock clocks. That's a 33.34% increase from a 51.25% core overclock, and that's without even taking the 30% memory overclock into consideration. The same can be said using Heaven Benchmark 4.0, though the scaling there is a bit higher at 41.71%; still not perfect scaling, nor close to it. Also notice the 7950 @ 1100/1625 is faster than itself @ 1210/1250 in Metro 2033 / Last Light.
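The percentages quoted above can be reproduced with basic arithmetic. A quick check (assuming stock 7950 clocks of 800 MHz core / 1250 MHz memory, which is what the 51.25% and 30% figures imply; not stated explicitly in the thread):

```python
# Reproducing the overclock/fps percentages from the post above.
def pct_increase(new, old):
    """Percentage increase of new over old."""
    return (new / old - 1.0) * 100

print(pct_increase(45.87, 34.4))  # ~33.34% fps gain in the first test
print(pct_increase(1210, 800))    # ~51.25% core overclock
print(pct_increase(1625, 1250))   # ~30.0% memory overclock
```

Since a 51.25% core overclock delivered only a 33.34% fps gain, the scaling efficiency here is well under 1.0, which is the point being made.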

Conclusion: graphics card overclocks behave differently based on the game engine, the efficiency of the architecture itself, and how balanced it is. The CPU doesn't matter very much as long as it is up to the task.

Edited by HeadlessKnight - 10/22/13 at 10:17am