We are reaching a point where performance will stagnate. This is normal: it is well established that GPU and CPU performance will not increase as fast as it once did. Hence the push for multi-GPU PCBs and multi-core processors.

Why would you think the 7970 would be an upgrade to a GTX 590? If you have followed ATI/AMD's naming scheme, you would realize that xx70 is the mid-high tier. Wait for the 7990 before you decide to upgrade if the 7970 does not do you justice.



I'm sorry, was my original explanation not clear enough? I'll break it down for you.

Card A averages 100 FPS
Card B averages 50 FPS

From the perspective of Card A, Card B is half (50%) as fast. From the perspective of Card B, Card A is twice (200%) as fast. It's always relative to the starting point, which, in the case of the above example, is the 4870.
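The relative-percentage point above can be checked with a quick sketch (the card names and FPS figures are just the hypothetical values from the example):

```python
# Hypothetical averages from the example above
card_a_fps = 100.0
card_b_fps = 50.0

# Relative to Card A, Card B runs at half its speed
b_vs_a = card_b_fps / card_a_fps  # 0.5 -> 50% as fast

# Relative to Card B, Card A runs at twice its speed
a_vs_b = card_a_fps / card_b_fps  # 2.0 -> 200% as fast

print(f"B is {b_vs_a:.0%} as fast as A")  # B is 50% as fast as A
print(f"A is {a_vs_b:.0%} as fast as B")  # A is 200% as fast as B
```

The same two numbers give different percentages depending on which card you treat as the baseline, which is exactly the point being argued.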

Again, third grade. And if you're really going to nitpick about the 12% in the guy's OP, which would equate to probably less than 1-2 FPS, you deserve where this conversation is going.

Let us consider the case of a single-threaded system. According to Moore's law, transistor dimensions are scaled by 30% (0.7x) every technology generation, thus reducing their area by 50%. This reduces the delay (0.7x) and therefore increases operating frequency by about 40%.
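The arithmetic in that paragraph can be spelled out as a quick sketch (the 0.7x linear scaling factor is the one quoted above):

```python
# Classic per-generation scaling, as described above:
# linear transistor dimensions shrink to 0.7x
linear_scale = 0.7

area_scale = linear_scale ** 2      # ~0.49 -> area roughly halves
delay_scale = linear_scale          # gate delay shrinks to 0.7x
freq_gain = 1 / delay_scale - 1     # ~0.43 -> roughly 40% higher frequency

print(f"area reduced by {1 - area_scale:.0%}")  # area reduced by 51%
print(f"frequency up by {freq_gain:.0%}")       # frequency up by 43%
```

So the "50% smaller area" and "about 40% faster" figures both fall straight out of the single 0.7x dimension scaling.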

In multi-core CPUs, the higher transistor density does not greatly increase speed on many consumer applications that are not parallelized. There are cases where a roughly 45% increase in processor transistors has translated to only a roughly 10–20% increase in processing power.

Parallel computation has recently become necessary to take full advantage of the gains allowed by Moore's law. For years, processor makers consistently delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification.[71] Now, to manage CPU power dissipation, processor makers favor multi-core chip designs, and software has to be written in a multi-threaded or multi-process manner to take full advantage of the hardware.
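Why extra cores only help parallelized code can be illustrated with Amdahl's law; the 0.5 parallel fraction below is just an illustrative assumption, not a figure from the thread:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Maximum speedup on `cores` cores when only `parallel_fraction`
    of the work can run in parallel (Amdahl's law)."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

# An application where only half the work is parallelized gains
# little from extra cores: the serial half dominates.
print(amdahl_speedup(0.5, 2))      # ~1.33x on a dual-core
print(amdahl_speedup(0.5, 8))      # ~1.78x on eight cores
print(amdahl_speedup(0.5, 10**6))  # caps just under 2x, no matter how many cores
```

This is why the post says software has to be written in a multi-threaded or multi-process manner: unless the parallel fraction is high, adding cores quickly stops paying off.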

To summarise, computers are not becoming faster the way they used to: as time goes on, transistor count will do less to improve performance, hence the push for multi-CPU or multi-GPU setups and multithreaded applications. AMD Bulldozer is proof of Moore's Law!
