3DMark Vantage Performance Tests

3DMark Vantage is a computer benchmark by Futuremark (formerly named Mad Onion) used to measure the DirectX 10 performance of graphics cards in 3D games. A 3DMark score is an overall measure of your system's 3D gaming capabilities, based on comprehensive real-time 3D graphics and processor tests. By comparing your score with those submitted by millions of other gamers, you can see how your gaming rig performs, making it easier to choose the most effective upgrades or find other ways to optimize your system.

There are two graphics tests in 3DMark Vantage: Jane Nash (Graphics Test 1) and New Calico (Graphics Test 2). The Jane Nash test scene represents a large indoor game scene with complex character rigs, physical GPU simulations, multiple dynamic lights, and complex surface lighting models. It uses several hierarchical rendering steps, including for water reflection and refraction, and physics simulation collision map rendering. The New Calico test scene represents a vast space scene with lots of moving but rigid objects and special content like a huge planet and a dense asteroid belt.

At Benchmark Reviews, we believe that synthetic benchmark tools are just as valuable as video games, but only so long as you're comparing apples to apples. Since the same test is applied in the same controlled method with each test run, 3DMark is a reliable tool for comparing graphics cards against one another.

1680x1050 is rapidly becoming the new 1280x1024. More and more widescreen monitors are being sold with new systems or as upgrades to existing ones. Even in tough economic times, the tide cannot be turned back; screen resolution and size will continue to creep up. Using this resolution as a starting point, the maximum settings applied to 3DMark Vantage include 8x Anti-Aliasing, 16x Anisotropic Filtering, all quality levels at Extreme, and Post Processing Scale at 1:2.

3DMark Vantage GPU Test: Jane Nash

Our first test shows the GTX 460 placed right where NVIDIA wants it. The 768MB part is trading blows with the HD 5830, and the 1GB part is going toe-to-toe with the HD 5850. If you think this is aiming a little too high, check out my Final Thoughts. The EVGA GeForce GTX 460 SC is overclocked from the factory by about 13% (+88 MHz), and I am showing the results from these factory settings. We already know this chip is an overclocking monster, and I'll get into that later. The big hitch in the graph is caused by the older GT200-based cards, which I am including for reference in case you want to see whether it's worth upgrading. The synthetic results overwhelmingly say: yes.

At 1920x1200 native resolution, things look much the same as they did at the lower resolution; only the absolute values are lower, and the ranking stays the same. One thing you may have noticed is how well the HD 5830 does on this test compared to the HD 5770. That issue has been beaten to death, but I mention it to demonstrate that the EVGA GTX 460 SC beats the HD 5830 even when it has everything going for it. The HD 5870 is the only card that can break 30 FPS at this resolution, and it's pretty obvious as the test plays out on the screen; all the lower choices seem choppy by comparison. Let's take a look at test #2, which has a lot more surfaces to render, with all those asteroids flying around the doomed planet New Calico.

3DMark Vantage GPU Test: New Calico

In the medium-resolution New Calico test, the moderately overclocked EVGA GTX 460 SC does so well that it edges out an ATI HD 5850 running at base clocks. That's an impressive feat for a card in this price range. The overclock results show that synthetic performance scales linearly with higher clock rates, just as you would expect. Even though the 763 MHz GTX 460 gets within 2 FPS of a stock HD 5870, it still takes a 1.0 GHz Cypress core to get over 30 FPS in this medium-resolution benchmark, which shows how tough this test really is.
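As a rough illustration of what linear scaling implies here: the 675 MHz figure below is the GTX 460 reference core clock, 763 MHz is the factory overclock discussed above, and the stock frame rate is a hypothetical number for illustration only, not a measured result.

```python
# Sketch of linear clock-to-FPS scaling for a GPU-bound benchmark.
# 675 MHz is the GTX 460 reference core clock; 763 MHz is the factory
# overclock discussed above. base_fps is a hypothetical stock result.

base_clock_mhz = 675.0
oc_clock_mhz = 763.0
base_fps = 25.0  # hypothetical, for illustration only

overclock_fraction = oc_clock_mhz / base_clock_mhz - 1  # ~0.13, i.e. ~13%
expected_oc_fps = base_fps * (1 + overclock_fraction)

print(f"overclock: {overclock_fraction:.1%}")
print(f"expected FPS if scaling is linear: {expected_oc_fps:.1f}")
```

In real games the scaling is rarely this clean, since memory bandwidth and CPU limits can cap the gains; synthetic GPU tests like this one come closest to the ideal.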

At a higher screen resolution of 1920x1200, the EVGA GTX 460 SC with its factory OC keeps its slim lead over the HD 5850, by less than 2 FPS. Even the fastest single-GPU cards have trouble rendering this scene, with an average frame rate in the mid-20s. Soon this benchmark suite may be replaced with DX11-based tests, but in the fading days of DX10 it has been a very reliable benchmark for high-end video cards.

We need to verify these results against some actual gaming performance, so in the next section let's take a look at how these cards stack up in the standard-bearer of gaming benchmarks, Crysis.

Comments

You've probably been asked this before, but how would a stock 460 in SLI mode compare to the 480 series top of the line with turbo and supercharger? I'm asking because this card is $179.00 at Amazon right now: EVGA GeForce GTX460 768MB DDR5 PCI-Express 2.0 Graphics Card (768-P3-1360-TR)

Doug, I think you'll find an SLI pair of two EVGA GTX 460 SC video cards will perform better than a GTX 480. However, that difference might narrow or go away altogether if you are using very high resolution monitors, say beyond 1920x1080 with anything beyond 24 inches. This is well worth the money. Two of these video cards should run relatively cool inside your chassis.

That's nice. My motherboard, the Gigabyte X58 UD5 rev. 1, is laid out so I can plug them both into x16 PCI-E slots. I have three monitors: one 23" at 1900 x 1080, one 23" at 1920 x 1200, and a 26" at 1920 x 1200. But I don't need all three to play games. I do need all three for graphics and work. I'm running all three right now with a single GTX 295, and I can play Eve on the 26" windowed with no slowdown. But Eve isn't a graphics-intensive game.

Sorry, but we're not 'big enough' for a manufacturer to send two of each card. The best we could do is two 768MB GeForce GTX 460s in SLI: benchmarkreviews.com/index.php?option=com_content&task=view&id=576&Itemid=72

You'll see about 10% better performance with two 1GB versions in SLI, and maybe 15% better with two overclocked versions.

These things seem to be golden in their segment right now. I'll be building another gaming computer at the end of the year, so I have some time to see if ATI responds in a meaningful way to this threat. If they don't, then I'll be buying two GTX 460s to SLI in that box. Thanks for the concise reviews of all of the different GTX 460s, which helped me choose the ones that are right for my circumstances. I like the price/performance ratio of this card, and two together should do what I want them to handily.

With almost identical overclocks, the EVGA consumes 176W vs. 199W for the PNY GTX 460 you recently reviewed. That's a fairly sizable difference. It looks like they were tested on different PCs, but do you try to isolate the GPU power usage regardless of the test rig?

You're correct: they were tested on different machines (and by different people). We do TRY to isolate power, but it always works out differently. My suggestion is that you look around for a median reading... not everyone tests using FurMark (I do).

The Fermi cards all seem to be shipping with differing GPU core voltages. I'm not sure what the default GPU voltage was for the PNY card, but it definitely has an influence on temperature and power draw. Temperature by itself also has an effect on power: when doing the power tests, I see a gradual increase in current as the GPU heats up. I always wait for the temperature to reach steady state before I take the power measurement. So, there are a couple of factors, including ambient temperature, that have too much influence for my liking.
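That wait-for-steady-state step can be sketched as a simple polling loop. This is only an illustration of the procedure described above; `read_gpu_temp` is a hypothetical sensor callback standing in for whatever the test bench actually exposes, not a real API.

```python
# Sketch of waiting for thermal steady state before recording power draw.
# read_gpu_temp is a hypothetical callable returning the GPU temperature
# in degrees Celsius; wire it to your actual sensor/meter.
import time

def wait_for_steady_state(read_gpu_temp, tolerance_c=0.5,
                          interval_s=30, max_samples=60):
    """Return the temperature once successive samples differ by
    less than tolerance_c, i.e. the card has stopped heating up."""
    prev = read_gpu_temp()
    for _ in range(max_samples):
        time.sleep(interval_s)
        cur = read_gpu_temp()
        if abs(cur - prev) < tolerance_c:
            return cur  # steady state reached; safe to log power now
        prev = cur
    return prev  # give up after max_samples and use the last reading
```

The same loop could watch the power reading itself instead of temperature; temperature is used here because, as noted above, current draw tracks the GPU's heat-up curve.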

I pointed you there so you could read about the software options, how they work, and which one is better. You can use the same technique with the GTX 460, but if you want our results take a look here: benchmarkreviews.com/index.php?option=com_content&task=view&id=559&Itemid=72&limit=1&limitstart=19

I doubt they still have all those cards and it isn't feasible for them to retest them all for every new review. As with most sites older benchmarking gets recycled for new reviews. Add 10% to the ATI/AMD cards (and older Nvidia for that matter) if it salves your bruised sense of justice or whatever.

Bob: you should visit AMD's website and read up on the change log between Catalyst 10.5 and 10.8 (the latest available at the time this article was published). There's no difference in performance for these games, and I can personally tell you that my re-tests have shown less than 3% difference (in both directions).

I also took the extra step to personally verify that there was no change in performance between Catalyst 10.5 and 10.8 on my system. I retested every single benchmark with an HD5870 card and saw no reason to retest any other cards or update the benchmark scores.

I have received no response; could it be because no one knows? That card already comes heavily overclocked from the factory. Can it be pushed higher, or is that not advisable? What do the people at Benchmark Reviews say?

The GTX 460 chip is generally capable of some extreme overclocks; several reviews on this site demonstrate that. I got the MSI Hawk up to 950 MHz, IIRC. Take a look at that review to see how it compares to the HD 5870 at super-high clocks. Your card may not overclock that high, but it's worth trying.