So the Hexus.net labs took nVidia's demo at CES 2010 and pitted it against a 5870 test system. They admit they don't know anything about the test rig nVidia was using, but for their own tests they used a "fair"-ish Core i7 975 at stock with a factory-overclocked 5870 at 875 MHz core / 4900 MHz memory, which is a pretty reasonable comparison.

The benchmark was Far Cry 2. The game settings are posted for reference as well, although they make no mention of the ATI drivers used.

They then did another benchmark with Dark Void. Here are the numbers (FPS):

GTX 285: min 29.28, avg 38.38
GF100: min 53.32, avg 78.32

The lab then comments that this test is skewed heavily towards GF100 because of its improved PhysX rendering. Since Dark Void hasn't been released yet, the lab doesn't have numbers for the ATI card, but they will run the test on the same ATI setup once they get a copy in a few days.
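Just to put those numbers in perspective, here's a quick back-of-envelope check (Python, using only the Dark Void figures quoted above; nothing else is assumed) of how big the CES GF100's lead over the GTX 285 actually is:

# Dark Void frame rates (FPS) as posted from the CES demo
gtx285 = {"min": 29.28, "avg": 38.38}
gf100 = {"min": 53.32, "avg": 78.32}

for metric in ("min", "avg"):
    uplift = gf100[metric] / gtx285[metric] - 1.0
    print(f"GF100 vs GTX 285 ({metric}): {uplift:.0%} faster")

That works out to roughly +82% on minimum FPS and +104% on average FPS, which is exactly why the lab flags the PhysX-heavy test as skewed.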

What are your thoughts here?

Is the performance expected/disappointing? Was the test rig fair? Do you think nVidia have a natural edge because of the test rig, or is an i7 975 @ stock and factory overclocked HD5870 a pretty fair real-world comparison? Is Dark Void a fair comparison platform? (my gut tells me no, but from what Hexus.net says, Dark Void may end up becoming a benchmark just like Crysis)

Finally, Hexus.net comments that nVidia needs to release a ~$150 card for the Average Joe gamer for the Fermi lineup to be successful. What do you think of this comment? Is the GTX 260/GTS 250 doing fine in that price range right now, or does nVidia need something better? (I know the HD5700 cards offer better performance per dollar in that range, but is it a performance difference that the Average Joe will care about?) Do you agree with this statement, or do you think the DirectX 11 selling point is moot at the lower end, and that nVidia can still compete in the $100-$200 range?

I really don't think the 5870 is the card in question myself, as even these nVidia benchmarks may be a little high for their new hardware, depending on how well the yields clock up. I know it was said by (you know who) that the clock speeds were all over the place with these chips, and nVidia has never released that data, which only helps (you know who). So they are the only ones who know if it's true or not, and you can test till the cows come home, but without a Fermi with some set clocks to use, it's just like pissing in the wind.

Update: An nVidia spokesperson contacted the lab and gave them the following comments:

The nVidia test rig at CES used an i7 960 and 6GB of RAM, although the lab found that using the i7 960 vs the i7 975 at stock had almost 0 impact on scores.

The spokesperson also said that the cards used at CES were not tuned up to the final clock speed, and that "final performance will be higher". (OK, I'm a little bit of a Green Team-er, but there are only so many times we can hear that before we get sick of the promised 'unquestionably good performance'.)

Hexus also pitted the cards against the best GPU in the world right now, the HD 5970:

HD 5970: min 76.42, avg 99.79

So we can see that the single GF100 chip can't beat out the HD5970. Now, I don't doubt that the final GF100 will be faster than the one running at CES, but how much faster? Could be 1%, could be 10%, could be somewhere in between. They'll really have to throw a curveball to beat the dual-GPU HD5970, though.
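To put a rough number on that gap, here's a small sketch (Python) using the figures quoted above; it assumes the GF100 and HD 5970 numbers come from the same Dark Void run, which is how they're presented here:

# Posted CES figures (FPS)
gf100 = {"min": 53.32, "avg": 78.32}
hd5970 = {"min": 76.42, "avg": 99.79}

for metric in ("min", "avg"):
    needed = hd5970[metric] / gf100[metric] - 1.0
    print(f"Uplift needed on {metric} FPS to match the HD 5970: {needed:.0%}")

That comes out to roughly +27% on average FPS and +43% on minimum FPS just to draw level, which is a lot more than the 1-10% being talked about.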

Was it expected that the HD5970 would beat out Fermi? Does it actually matter that much as long as the price-performance ratio is right?

I'd be surprised if they had told you that the cards at CES were actually running higher clock speeds than the final product and that you shouldn't expect the final cards to run as well. Imagine that.

Yes, their main goal for the final retail product is usually to lower its performance from what they could achieve with a beta card...

Another gloom and doom prediction for Fermi.

I believe a moderator or two have stated that they didn't believe (only speculation) that the final product would be as fast as it was at CES and in these benchmarks, so don't go getting upset with me. I'm not the only one. Also, it is hardly doom and gloom. My goodness. The card is going to scream. So what if it is only 1-5 FPS slower at launch?

Call me nuts, but if the cards are 5-10% slower than ATI's baby, then as long as nVidia prices accordingly, there should be no issue.

$680 for ATI and $550-$599 for nVidia would be fine.

We all know that ain't gonna happen, given nVidia's stupid greed out of the gate with $700 GTX 280s at launch. But hopefully, like back then, the price wars will kick in and we'll see a decent price around April.

Like I said, a slightly lower price, or even an equal one thanks to PhysX, and I think nVidia will have a winner. Try a $700 start again, though, and I think nVidia may seal its fate. And believe me, I don't want that, as I go as far back as the 3DFX Voodoo 3 3000.

Let's only test games that use our PhysX engine; who cares that it's not open source, where both camps would have a level playing field. The only test I would accept is one using open source, to show which card performs better.

I can't wait until we have an industry standard for physics; that way all gamers can enjoy it without having to worry about which manufacturer they go with.

Open source? Are the benchmarks used to measure gaming performance open source? I didn't think they were.

"BSD is what you get when a bunch of Unix hackers sit down to try to port a Unix system to the PC. Linux is what you get when a bunch of PC hackers sit down and try to write a Unix system for the PC."

My bad for using "open source"; I should have said this: how about vendor-neutral, so no one has an advantage when it comes to showing how good the performance is? We all know Far Cry 2 and Dark Void use nVidia PhysX, so those games are not vendor neutral.

From what I understood, the single-GPU Fermi is good but not better than the 5970, right? Since the 5970 is dual-GPU, wouldn't a dual-GPU Fermi logically be the better choice?

Nobody really knows. A single-GPU GF100 may be faster than a dual-GPU 5970. We won't really know until the reviews come out after the card is released. But even then, the performance of the GF100 will go up as the drivers improve.

edit: MSim, Far Cry 2 doesn't use PhysX. It uses Havok.

I'm hoping that the dual Fermi comes out and gives the 5970 a run for its money. I would also love to see the Fermi x2 cost no more than $600, like the ATI 5970 does now... I plan to buy two dual-GPU cards from either ATI or nVidia, so I guess I'm stuck on making that decision till April comes around...

I plan to play Battlefield: Bad Company 2 full time on the PC and finally lay my 360 to rest, so I want my setup to crush all the games I throw at it.

I doubt you will be seeing a dual-GPU Fermi. From what I understand, the low-end speculation on GF100 power consumption is 250W, with many saying 280W+. For them to keep the card within PCI-E 2.0 specifications, it can use no more than 300W.

I simply do not see them making a true dual-GPU Fermi. They would have to neuter the Fermi chip to make a dual card. Personally, I would expect a dual-GPU 360 (or whatever they are calling the 448sp model).
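For what it's worth, here's a rough sanity check on that power argument (Python, using the speculated 250-280W single-chip figures and the 300W PCI-E 2.0 limit mentioned above; the 50W board/memory overhead is purely an illustrative guess):

# Speculated full GF100 power draw (W) and the PCI-E 2.0 board ceiling
single_chip_draw = (250, 280)  # low-end and high-end speculation
board_limit = 300              # max a single PCI-E 2.0 card may draw
board_overhead = 50            # assumed memory/VRM/cooling budget (illustrative)

per_chip_budget = (board_limit - board_overhead) / 2
for draw in single_chip_draw:
    cut = 1.0 - per_chip_budget / draw
    print(f"Each GPU on a dual card gets ~{per_chip_budget:.0f} W, "
          f"a {cut:.0%} cut from a {draw} W full GF100")

Even ignoring overhead entirely, two full chips would land at 500-560W, so any dual-GPU card would have to be cut down hard, which fits the 448sp speculation.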

Well, the benchmarks run by the Hexus labs seem to indicate that not even the Fermi at CES can beat out the HD5970, given the sheer brute-force rendering power of the HD5970. It would be great to see a $500 single-GPU nVidia card beat out the HD5970. Obviously, I've got every finger and toe crossed, but that's a very tall order.