From what I see in these benchmarks, the new 500 series is far superior in performance to the old 400 series, especially the 570 and 580.
Also, I am wondering: does this reflect real-world performance in all other applications?

As true as that may be, we aren't testing to get high scores!
We're testing to see what kind of hit the cards take when pushed hard with tessellation.

I did a stock-settings run. It seems to me clock speed does factor into this, though.
I got 42.3 fps without tessellation
and 28.1 fps with extreme tessellation,
a 14.2 fps difference: 14.2 / 42.3 ≈ 33.57% performance hit (not 100% sure, but I think that is correct).
Edited by HeadlessKnight - 8/1/11 at 3:05pm
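The arithmetic above can be sketched in a few lines; the fps values are the ones quoted in the post, and the percentage is the drop relative to the no-tessellation baseline:

```python
# Performance-hit calculation from the run above.
fps_no_tess = 42.3       # stock run, tessellation off
fps_extreme_tess = 28.1  # same run, extreme tessellation

drop = fps_no_tess - fps_extreme_tess   # 14.2 fps
hit_pct = drop / fps_no_tess * 100      # hit relative to the baseline

print(f"{drop:.1f} fps drop = {hit_pct:.2f}% hit")  # → 14.2 fps drop = 33.57% hit
```

So the 33.57% figure checks out: it is the fps drop divided by the tessellation-off baseline, not by the tessellation-on number.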

LOL! You should see my 6 GPUs of various flavors (and all the cores) chunking along at 95-99% usage 24/7 doing BOINC tasks...

And yes, the whole idea behind all these fancy benchmarks is to load things up to the max and see how fast they go...

Screenshot: 4 monitors, all 3 systems running BOINC tasks... Graphs set to 3333-second update times... LOL! I use TightVNC on my local LAN to talk to the other two systems, as you can see on the left monitor.

Quote:

From what I see in these benchmarks, the new 500 series is far superior in performance to the old 400 series, especially the 570 and 580.
Also, I am wondering: does this reflect real-world performance in all other applications?

Then you're seeing wrong, because that's not what the results show at all.

What they're showing so far is that fewer tessellation units = a larger performance hit from Extreme.

The generation the card is from appears totally irrelevant to the equation.

If you're just talking raw numbers... what are you comparing, and are you taking the clocks into account? It's fairly well established what performance increase the 5xx series has on a clock-for-clock basis: 570s are about 6% faster than 470s, CFC, and about the same goes for the 580 vs. the 480.
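A clock-for-clock (CFC) comparison means scaling each card's result down to a common clock before comparing, so architectural gains aren't confused with factory-clock bumps. A minimal sketch, assuming the stock reference clocks (607 MHz for the 470, 732 MHz for the 570) and made-up fps figures chosen only to illustrate the ~6% point above:

```python
# Hypothetical clock-for-clock comparison; fps values are placeholders,
# not measured results from this thread.
cards = {
    "GTX 470": {"clock_mhz": 607, "fps": 40.0},
    "GTX 570": {"clock_mhz": 732, "fps": 51.0},
}

baseline_clock = cards["GTX 470"]["clock_mhz"]
for name, c in cards.items():
    # Scale fps to the baseline clock to isolate per-clock performance.
    cfc_fps = c["fps"] * (baseline_clock / c["clock_mhz"])
    print(f"{name}: {cfc_fps:.1f} fps at {baseline_clock} MHz equivalent")
```

With these placeholder numbers, the 570 lands around 42.3 fps at 607 MHz equivalent vs. 40.0 for the 470, i.e. roughly 6% faster per clock, which is the shape of the claim being made.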

However, if you look at current prices... the 5xx series does terribly in bang/buck.

590s have full-fledged GF110 GPUs, right (16 tessellation units each)? Just downclocked?

Officially, I don't think we know, but it would be good to find out.

Quote:

Originally Posted by brettjv

However, if you look at current prices... the 5xx series does terribly in bang/buck.

This is really true for all current generations of GPUs, and especially so since the 5xx is essentially just a tweaked 4xx architecture. I could never understand why someone who had a 480 would think that getting a 580 was an upgrade, for example. One would think that the prices would be competitive, but nVidia has this staunch pricing mentality where new cards cost x dollars and older cards are priced at y dollars. Unfortunately, their pricing scheme is borderline static and does not track the performance of the card, so cards are no longer priced accordingly (similar to AMD's pricing models). Although nVidia has been successful in the past, the more competition they come under, the less likely people are to remain customers.

There is no point in buying a GPU that is 33% more expensive if something else does the job on par (6970 vs. 580, for example). If nVidia fails to update its pricing model, it won't matter who has the bigger guns; its business will simply decline. Being loyal to a company does nothing to benefit the consumer, and the gimmicky features each card competitively offers are becoming less appealing. When faced with cold, hard reality... it's simply not worth it. The age of gouging the customer is coming to an end, thank God (simply because the majority of us can no longer afford it).
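The bang/buck argument can be made concrete by dividing performance by price. A minimal sketch; the prices and the normalized performance scores below are hypothetical placeholders for illustration, not figures quoted anywhere in this thread:

```python
# Illustrative performance-per-dollar comparison.
# Prices (USD) and perf scores are made-up placeholders.
cards = {
    "GTX 580": {"price_usd": 480, "perf": 100},  # perf normalized to 580 = 100
    "HD 6970": {"price_usd": 360, "perf": 95},
}

for name, c in cards.items():
    value = c["perf"] / c["price_usd"]  # performance points per dollar
    print(f"{name}: {value:.3f} perf/$")
```

With these placeholder numbers, the cheaper card that performs "on par" wins clearly on perf/$, even though it loses slightly on raw performance, which is exactly the point being argued.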

nVidia has had one ace card in its pocket, and that has been extremely progressive and proactive driver support. Unfortunately for them, AMD/ATi appear to have learned from past mistakes: you can have superior hardware (à la 58xx/59xx) and still lose customers due to shoddy support.
Edited by RagingCain - 8/2/11 at 6:28am