Since all these renderers are based on CUDA, it leaves AMD out in the cold, and that, I think, is a very important reason to choose your graphics card well. Also, some benchmarks for Adobe Premiere CS5 (also based on CUDA) would be interesting indeed.

Mention which games are based on which engine (and create a shortlist of other games built on it), and make small notes on whether they're optimised for one vendor or feature. Even Crysis, for example, has a couple of CFG settings that are only enabled for NVIDIA by default but should be available for ATI as well (fixing these settings fixes the FPS drop caused by caching that ATI gets at the start - TWIMTBP, eh?). Reply

One thing that seems to be missing from almost every graphics card review is the quality of the rendered images. Is that because there really isn't any difference?

Because as I see it, I would love to sacrifice a few FPS if the picture quality is better on one card than the other. Of course such a thing is very hard to measure or describe, as it is a matter of personal taste.

But - given there actually are differences - it would be something NEW in graphics card tests, and not just another truckload of - admittedly necessary - games and synthetics.

Well, an RTS with lots of particles like DoW II punishes some fairly up-to-date systems, so it is useful to see how much the CPU and how much the GPU are being pushed.

As far as RPGs go, Dragon Age has some real-time cut scenes you might be able to use. With a fast enough connection you could test several machines in an MMO by logging in and walking around the same area - even rotate the cards around and repeat the test.

Last, if you want a synthetic test that has relevance, download UE3, create a sample level, and test that. Many companies license the Unreal engine and then add in their own twists, so this gives you an idea of the baseline they are starting from.
Reply

I want to make a plug for Battlefield: Bad Company 2. The Frostbite engine is pretty GPU-heavy and should be tapping into DX10.1 and DX11 to help test out the newer cards. Although the BF series has never been too graphically intensive, BadCo2 seems to be quite the opposite.

The CPU discussion came up, but I think it is an interesting subject because it does affect how a game runs. Usually, reviews just get around this by overclocking to a point where the CPU isn't a bottleneck.

For a total 180: have you considered doing a total game benchmark that would tackle both CPU and VGA performance? Because reviews usually use uber-top-end systems and OC their CPU to extreme speeds, game performance numbers usually don't translate for many people. Having targeted performance numbers (even if disabling cores and downclocking CPUs is the means of getting there) for a specific game is really helpful information. Thanks! Reply

Hi!
I just read that the GF100 / Fermi will have multiple setup / geometry engines. This is a huge step forward. All previous and current 3D cards I know of, except the GTX 460 and 480, have a single setup engine.

Let me quote the gf100 anandtech preview:
- To put things in perspective, between NV30 (GeForce FX 5800) and GT200 (GeForce GTX 280), the geometry performance of NVIDIA's hardware only increased roughly 3x. Meanwhile the shader performance of their cards increased by over 150x. Compared just to GT200, GF100 has 8x the geometry performance of GT200, and NVIDIA tells us this is something they have measured in their labs. http://www.anandtech.com/video/showdoc.aspx?i=3721...

Now the question I wonder about is: can we test whether the older cards and the new GF100 are bottlenecked by the geometry engine? I have never heard a review mention the performance of the setup / geometry engine or whether it can be a bottleneck. To test this you would have to run at a low resolution with a fast CPU and the GPU at different clock rates. Reply
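The clock-scaling test described in the comment above is easy to evaluate once you have the numbers. A minimal sketch, assuming you have measured (core clock, FPS) pairs at a low resolution yourself (the function name and sample values are illustrative, not from any real review):

```python
def clock_scaling(samples):
    """Given (core_clock_mhz, fps) pairs measured at a low resolution,
    return the ratio of relative FPS gain to relative clock gain between
    the slowest and fastest clocks. A ratio near 1.0 means performance
    scales with the core clock (GPU-bound - possibly setup/geometry-bound
    at low resolution); a ratio near 0.0 points at another limit, e.g.
    the CPU."""
    samples = sorted(samples)
    (lo_clk, lo_fps), (hi_clk, hi_fps) = samples[0], samples[-1]
    clock_gain = hi_clk / lo_clk - 1.0
    fps_gain = hi_fps / lo_fps - 1.0
    return fps_gain / clock_gain

# Hypothetical numbers: a +25% core overclock yields +24% FPS,
# giving a ratio of 0.96 - almost perfectly GPU-bound.
ratio = clock_scaling([(600, 100.0), (750, 124.0)])
```

This only shows that the GPU (not the CPU) is the limit; separating the setup engine from the shaders would additionally require comparing core-clock scaling against shader-clock scaling.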

I pretty much only play WoW these days and with a 30" apple monitor I am interested in performance at max resolution.

As it is, this means no max settings even on my Radeon 5970 - so it is not a meaningless benchmark. Enabling max everything brings it to an unplayable crawl.

PS: There is something evil about the stairs down to Blood Queen Lanathel's room. 60fps -> 17fps or less. (2-3fps on my old 4870 X2 at minimum settings). Each time I look at them I hear "Stare into the abyss..." Reply

I do not want to see benchmarks that only compare how many FPS each GPU gets at the same game settings.

NO - what is useful to the typical reader or potential buyer is what quality settings, if any, are compromised to attain a playable framerate.

If it just isn't possible to play at 20 FPS (min) and at least 45 FPS avg., I WANT to know that, but I don't care to know that a $200 card is faster than a $130 card at the same settings... I think we are past stating the obvious.

So I want benchmarks that, if the above is too laborious, set an average framerate threshold of 35 FPS, then note what needs to be disabled to attain that, if it is possible at all.

If it is not possible then the price of the card is pretty irrelevant isn't it? I mean, no matter how cheap it is, if it can't do the job at all there is no value in it for real world applications. Reply

I think it would be a better idea to focus on engines rather than games - id Tech, CryEngine, Unreal, Gamebryo, Essence Engine - trying to get at least one from each game genre (RTS/FPS/etc.) while covering the big boys.

Also, there are newer versions of some great engines due out in the near-ish future, which you might want to consider waiting for or being ready to update to - specifically id Tech 5/6 and CryEngine 2/3.

Personally, I don't think we need a gamut of resolutions tested, just a good range focusing on the commonly used ones - even just 3 or 4 resolutions at 1920x and below. Unrealistic or uncommon resolutions should be left to specific reviews, such as when comparing multi-monitor situations. The following might not be a bad set: 1920x1080, 1680x1050, 1280x1024, as they are either common or high end. Reply

Someone mentioned OpenCL for its physics, which I have to say is a great idea - AnandTech could help publicise an open alternative to NVIDIA-only, closed PhysX. Since physics/OpenCL is both important to games and affected by the graphics card, I think it's a great idea.

Something else I've wondered about is a raw benchmark for the GPUs, showing FLOPS etc. It would give a nice idea of the general power the GPUs have.
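For reference, the headline "raw" number is usually just spec-sheet arithmetic. A minimal sketch of the common peak single-precision figure (assuming, as NVIDIA's marketing does, one multiply-add = 2 FLOPs per shader core per cycle; measured benchmarks rarely approach this ceiling):

```python
def peak_gflops(shader_cores, shader_clock_mhz, ops_per_cycle=2):
    """Theoretical peak single-precision GFLOPS: each shader core
    retires ops_per_cycle floating-point ops per cycle (a fused
    multiply-add counts as 2). MHz * ops / 1000 gives GFLOPS."""
    return shader_cores * shader_clock_mhz * ops_per_cycle / 1000.0

# GTX 285: 240 shader cores at a 1476 MHz shader clock
# -> 240 * 1476 * 2 / 1000 = ~708 GFLOPS (counting MAD only).
gtx285 = peak_gflops(240, 1476)
```

A raw benchmark that actually measures achieved FLOPS against this theoretical number would show how efficiently each architecture can be fed, which is arguably the more interesting figure.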

These things could give you something over the other sites also benchmarking graphics cards. Reply

Firstly, a Linux overview is long overdue - anything from an overall compatibility or feature list to real CAD and maybe even some Wine-based tests (not to mention VDPAU). The Wine tests could be as helpful to the Wine developers as I am sure yours already are to the NVIDIA and ATI driver developers. In this regard, maybe all open source developers could profit, from the X server (xorg-edgers) to the interfaces (KDE, GNOME, XFCE, etc.) and even the drivers (nouveau). Not to mention those nostalgic memories of 3ddesktop (nowadays superseded by Compiz).

Secondly, GPGPU - my most pressing topic of interest, as I am myself researching in this area, though you might need to dig around to find proper tests. Follow-up comments with suggestions would be appreciated; my personal pick would be the Folding@home GPU client, as it is widely disseminated and also runs under Wine. By the way, it would be good to take double-precision floating point (IEEE 754) into account. From NVIDIA, as of now, only the GTX 200 series models offer such support, and it is incomplete; it would be good to know how future ATI models and Fermi will compare.

Thirdly, some CPU overhead tests. An example would be Forceware running some game at x% CPU while Catalyst runs the same game at y%.

Finally, testing filters like AA and AF on and off is at least as important as how they scale at all levels (2x, 4x, 8x, 16x, transparency, et al). However, in order for those performance levels to make sense, the quality must be taken into account. Instead of treating the numbers as equivalent, use your quality analysis so that filters of nearly equivalent quality are compared for performance. Reply

Chill out - the link you posted clearly states the solutions to the problems, and for HTPC use ATI seems to be the least bad choice anyway. No need to bash ATI on this front; the 2D issue has already been covered in previous posts.

If it's possible, incorporate a notebook GPU to compare against desktop variants - i.e. one of the new i5 or i7 platforms with an NVIDIA 9800GTS or 9600GT, or whatever their refreshed smaller counterparts are.

A couple of older titles:
World of Warcraft - latest to current patches
Oblivion - latest edition/patches
Team Fortress 2

Yes, OK, I know it's 'old', but it's the latest version of this game / simulator. I and many others still use this application and would benefit from knowing what to expect with a given set of hardware. Also, since your site would be one of the few to include it, it would increase readership of your reviews.
Reply

Please do a test with some older versions of DX as well as at least one OpenGL-based game. I have been burned badly by drivers and cards that can't run old games (an 8800 GTS can't manage over 5 fps in a game using DX7...).

I am also very interested in more measurements of the noise levels the cards put out. Reply

The Unigine engine is comparable to Epic's Unreal and id Software's Doom architectures. Unigine delivers, for example, DirectX 11 and tessellation, one of the API's most important features; with DirectX 11 tessellation, polygons are subdivided. Unigine currently supports OpenGL 3.2 as well. It supports hardware tessellation, Screen Space Ambient Occlusion, DirectCompute, and Shader Model 5.0.

If tessellation is active, various objects like walls or bricks are modeled with a multiple of their original number of polygons. This way furrows and bumps become physically "noticeable". The result looks excellent and has an advantage over Parallax Occlusion Mapping (POM) and similar techniques: it is affected by anisotropic filtering and multisample anti-aliasing.

To see the real potential of DirectX 11 cards: on a Radeon HD 5870 in DX11 mode, the frames per second drop by 39 percent as soon as tessellation is activated, while a Radeon HD 5770 deals a little better with the additional work and drops only 31 percent. Reply

Hey, I NEVER look at any of your synthetic benchmarks; I like real-world results.

I currently like how you bench a variety of different types of games. E.g. you have a CPU-centric RTS like Dawn of War 2 (which of course is still very much reliant on a GPU, but DOES see huge gains from different CPUs as well), Crysis is a GPU-centric FPS, and you include some games that are famous for their multi-core & DX10 support (Far Cry 2). With DX11, there are really only 2 FPSs out: STALKER: Call of Pripyat and AvP. I don't see it as necessary to include UE3 games because we all know they run like gravy on all hardware. Just my 2 cents. Reply

I agree that categorizing/rating games by their system stress levels - i.e. 'stressability hogs' or 'elegant resource users' - will give game developers as well as potential computer buyers useful information.

Regarding the selection of games: I'd like to see a nice broad spectrum of graphics engines - if I remember them correctly, Unreal Engine, Far Cry 2, Crysis, and others. At least that way we can ourselves correlate a bit to older games using the same engine.

Minimum FPS values, if at all possible - especially to discover whether these are lower on the dual-chip cards and/or SLI/Crossfire setups.

You can still keep running the synthetic tests. There are those who like them. Just don't focus on them in the conclusion.

Keep the focus in graphics card reviews on the graphics.

But maybe you could do a roundup once in a while with different processor / graphics card combinations to discover what processors are needed to drive which graphics cards at which resolutions. Personally I have an e8500 @ 3.8 GHz, 4 GB RAM, and a GTX 260 (216 core), but even though my processor is only an old dual core, I'm still graphics card bottlenecked, since I game at 1920x1200 with as much eye candy on as possible. AFAIK I'd only gain a few extra fps by making a major upgrade to the higher-clocked i7s. It would be nice to get confirmation that I am correct in this.

An additional article covering both your old and new games plus popular ones, with the performance and bug mentions of, e.g., the last 3 driver sets from both ATI and NVIDIA. I know this is a big SOB article with lots of work, but as a consumer I am sick and tired of having to switch drivers all the time to get a specific game working and then losing 30 percent performance in other games. An article to discover who has the best drivers here and now would be nice. From a consumer perspective it doesn't really matter that a card is the world's best at game xxxxx if it has a lot of errors in others.

So basically your graphics card reviews are good (they need those minimum fps numbers), but additional articles helping buyers get the best performance for their setup (processor vs. graphics bottleneck) and service/ease of use (state of drivers) would be really, really appreciated. Reply

Whatever games you end up choosing, please note the engine each game is based on. This gives a rough estimate of how other titles based on the same engine will fare on similar hardware.

You've probably done this before, but don't test two games based on the same engine unless, for some strange reason, they scale differently. If that is the case, an in-depth analysis of why would be interesting. Reply

Hey, I think that the games tested are great. The more games, the better! (As long as they are popular games, not crappy games, heh!)

Anyway, I think it would be very helpful to include minimum frame rates like over at Xbit Labs. Even more useful would be time-based graphs of frame rates like those shown at HardOCP. Some games that average 30 fps on a certain card would be pretty consistent, staying within 2-3 fps of the average; on another card (of the other "make", be it NV or ATI) it might dip to half the average or worse. Would it dip just once throughout a couple minutes of benching, or would it dip several times?
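The consistency question above is easy to quantify from the same per-second log a tool like FRAPS produces. A minimal sketch (the function name, the 50% threshold, and the sample numbers are all assumptions for illustration):

```python
def count_dips(fps_log, fraction=0.5):
    """Count separate dips where the frame rate falls below `fraction`
    of the run's average. Consecutive low samples count as one dip, so
    the result distinguishes a single bad moment from repeated stutter."""
    avg = sum(fps_log) / len(fps_log)
    threshold = avg * fraction
    dips, in_dip = 0, False
    for fps in fps_log:
        if fps < threshold and not in_dip:
            dips += 1          # entering a new dip
            in_dip = True
        elif fps >= threshold:
            in_dip = False     # dip over
    return dips

# A run averaging ~25 fps with two separate dips into the low teens:
dips = count_dips([33, 34, 12, 11, 35, 33, 10, 34])  # -> 2
```

Reporting "N dips below half the average per benchmark run" alongside the average would capture exactly the difference between the two hypothetical cards described above.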

About the Steam poll that shows most of the user base still gaming at 1280x1024 or lower - I do not think we should be alarmed at the statistics. Those who play at, say, 1024x768 are most likely using a rig that is around 4-5 years old on average, or older. They are most likely the ones who only play games and don't give the slightest amount of feces about computer hardware/benchmarks. They do not even know what display settings like anisotropic filtering or anti-aliasing mean. I have a few friends and a couple of brothers who play computer games but do not even care about turning up the resolution. They are most likely never going to bother reading AnandTech articles.

Those who do bother reading the articles are definitely going to be interested in 1680x1050 as the minimum resolution, trust me.

I guess that's all I have to say here. Just make sure to keep a couple of DX9 games around.. there will continue to be hugely popular DX9 games until Xbox360 and PS3 are finally replaced by the next-gen consoles (since most PC games are ported from the consoles).

There is one yet to be released game (~2 months until release) that you absolutely must include: STARCRAFT 2

If you don't include SC2 you will lose merit with a HUGE portion of gamers. You did well by finally including World of Warcraft in the tests, albeit too late to matter, but SC2 needs to be included from the moment it is released. There will be many people looking to upgrade their cards to handle SC2, so make sure you have the data for them! Reply

You should probably try to have as few NVIDIA "The Way It's Meant To Be Played" and ATI-oriented games as possible. I know this will be hard, but there are a lot of games out there that prefer specific brands, and it makes things kind of unfair. Also, seeing as you will be comparing NVIDIA and ATI, show results with and without PhysX enabled, and when games and such start using DirectCompute, show the same thing, w/ and w/o. Reply

I find that I often need to adjust the settings so that the game plays well during intense scenes where the system is taxed. Measuring average FPS doesn't give the best indication of what resolutions and quality settings can actually be used for regular gameplay.

It would be nice to use some of the more intense scenes in games and to report the minimum frame rate or 5th percentile of frame rates. Reply
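The percentile metric suggested above is a one-liner over a per-frame log. A minimal sketch using the nearest-rank method (the function name and the assumption of a list of per-frame FPS samples are illustrative):

```python
def percentile_fps(fps_samples, pct=5):
    """Nearest-rank percentile of per-frame FPS samples. The 5th
    percentile ('95% of frames ran at least this fast') is a far
    sturdier floor than a single minimum-FPS outlier, which can be
    caused by a one-off hitch like level streaming."""
    ordered = sorted(fps_samples)
    rank = max(1, round(pct / 100 * len(ordered)))  # nearest rank, 1-based
    return ordered[rank - 1]

# Over 100 samples with FPS values 1..100, the 5th percentile is 5,
# while the raw minimum is 1 - one outlier no longer dominates.
floor = percentile_fps(list(range(1, 101)))
```

This is essentially the same idea as a minimum-FPS column, but robust against a single anomalous frame skewing the result.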

I've seen a few good suggestions but have some of my own...
Most people agree EE CPUs are no good for most of us. I think you need 3 tiers for your reviews.

Sub-$100 cards get sub-$100 CPUs, maybe an Athlon X2 at 2.7-3.2 GHz. Cards from $100-$250 get either a Phenom II 945 or maybe a Core 2 Quad Q8400. Cards over $250 get something like an i7 870.

Resolutions scale with the cards as well.
Sub-$100 cards are more likely to be used with lower resolutions like 1280x1024, 1440x900, and maybe 1680x1050.
$100-$250 cards will probably have at least a 1680x1050 screen with them, and possibly 1920x1080 or 1200.
Cards over $250 will usually have 1920x1080 or higher, so use that, 1920x1200, and 2560x1600, or go with dual or triple 1080p displays instead of 2560x1600.
When doing Crossfire or SLI you should definitely use 2560x1600 or dual/triple 1080p monitor setups (maybe even dual/triple 2560x1600 if you have the money to piss away, or can get them as freebies :P ).

As for the games?
There are several game engines out there, and I think you should try to find the 2 most graphically demanding titles for each of the top 5 modern engines. If performance is very close between the top 2, use only 1 in your benchmarking suite; if performance is significantly different, use both. Also include 1 title each from the 5 most popular older engines. That gives you a max of 15 games, and more likely only 10. Also, if you include a chart showing what other games use the same engines as the ones you tested, then people can get some idea of where those would fall performance-wise without you having to test 30 or 40 games every time you review a card. If a certain game is particularly popular but uses a less widely used engine, some consideration should be given to including it (maybe it could take the place of a popular but not graphically intense engine like Source?), but we all understand that you can only do so much in the time you have.

Many people seem to want GPGPU testing on these cards, and I agree that having that information would be nice, but there really aren't many programs that use the GPU and are hardware-agnostic. So... if you can find some useful ones that are hardware-agnostic (OpenCL, DirectCompute, whatever), great - let us know what they are. If not, oh well.
As far as 2D performance goes, I don't believe we need it for every card you test. Maybe 1 mid-range card from each series?
Thank you for all your hard work keeping us up to date on the things that matter in the world of computers, and I look forward to seeing what you guys decide to go with. Reply

1. Use a poll to find the median platform being purchased by your readers. If no one has a Core i7-980, using it as your video card test station doesn't give the buyer a true sense of what to expect. If you find a game IS CPU-limited, mention it, which brings me to point...

2. Have Battlefield: Bad Company 2 as a benchmark. I see it as an example of things to come, where physics takes a much stronger role in the experience. It pins my overclocked e8500 to near 100% on both cores. While generally smooth, the occasional chaotic scene can get a bit choppy, which brings me to point...

3. Minimum FPS is key. It was most likely my CPU that was the bottleneck in those scenes above, but I've seen video cards that benchmarked just fine yet had severe performance issues when you looked at minimum FPS. Is it running out of RAM? Out of RAM bandwidth? A CPU bottleneck? Reply

A lot of the people here that read your articles (including myself) are system builders.

Run benchmarks on a couple of mid-range CPUs, preferably overclocked - such as an AMD 550/555 BE OC'd to 3.6/3.8 GHz and an i5 750 OC'd to 3.6 GHz. Something easily attainable by anyone using a stock cooler with minimal effort.

Software-wise, I'd like to see a World of Warcraft benchmark. Why? Because 12 million people play it, and I sell more computers to WoW players than to any other single segment.

I know someone is going to say WoW will run on a toothpick and a rubber band, but I'd still like to see it included. 12 million people is a big reader base.

Not just a running-around-in-the-middle-of-nowhere benchmark - I'm talking a 25-man ICC raid measurement. Reply

I'd like to see the GPU power consumption measured directly as XBit-Labs does it.

It might also be interesting to check the power consumption under full load at auto fan speed (stock) and at maximum fan speed (for 24/7 GP-GPU crunchers). For GF100 rumors say that half of its power consumption is due to leakage, which increases at higher temperatures. For GT200 I've already seen load differences of ~20W just due to fan speed. Reply

I played Crysis when it came out. Since then I have had a GTX 260 896MB, HD4850 1GB, HD4570 1GB, GTX285 OC, and currently an HD5850. Crysis is a beautiful game, and even if it sounds silly, for my own needs I always play Crysis first with a new card - just to get a sense of it and be able to compare with the previous card I owned.
I play at 1920*1280 (the monitor's native resolution) with all settings on highest. This test has worked for me and allowed me to see a trend.

My idea is simple - use a graphically sophisticated game at several resolutions, 720p and above, with average eye candy first and maximum settings second. Get the popular display resolutions from the Steam usage statistics if you like. Use a game that heavily taxes the GPU, maybe a DirectX 11 one (if available). The point is not to give me a benchmark in different classes of games, but ONE or a few games on many cards - so I have consistency and the ability to compare.

IMPORTANT - please add some GPGPU applications. Do the cards have any computational value besides gaming? If, for example, two cards have similar performance but one handily accelerates a bunch of useful applications, that is a value for some users. Bench that, and mention which software can use the GPU to advantage, so users comparing prices later can draw their own purchase decisions.

Personally, I like that I can transcode a 4.37 GB h.264 file to PSP or phone video in 17 minutes with the HD5850! That has value for me above the GTX285. I also find value in a card running at 42 C under load versus another working at 80 C under load and cooking all my PC's internals.

It would be good to have different configurations at different price points. Not everyone has an extremely high-end system, and it would be good to get an idea of how the GPUs fare with different components at different price points. You could have 3 or 4 configs: low-range, midrange, high-end, and a no-bottleneck config. Reply

I'd like to see a lot of modern DX11 games, like Call of Pripyat, Bad Company 2, Metro 2033, Aliens vs Predator, and BattleForge. I know most of those are first-person shooters, but they are usually the most graphically intensive. Reply

Stop testing with ridiculous processors that no one can afford. I understand that you don't want the CPU to become a bottleneck, but honestly, I don't need to read a video card review built around a processor that no one in their right mind can afford.

I'm fine with using very fast CPUs for these tests. I want to know what the GPU is capable of, and I don't care how much the CPU used for testing costs - that amount of power could easily be accessible via OC, or could be affordable in 1 or 2 years. And there's no point in seeing all GPUs capped at the same fps due to a CPU limit - other than "get a faster CPU". Showing CPU limits belongs in CPU reviews. Reply

My Q9550 at 4 GHz is faster than any stock i7 (unless hyperthreading is fully utilized), and it only cost me $175. I don't think it's unreasonable to show what a $500 graphics card is capable of with a top-of-the-line CPU when a good CPU can be had for under $200 and overclocked beyond the speed of anything available at retail. Reply

If they also included:
- tools to easily compare picture quality between cards at different AA/AF levels
- tests that might identify SLI/Crossfire setups that experience micro-stutter (if that's even possible...)
- automatically showing performance changes with CF/SLI off and enabled at different resolutions/AA levels. While the end number presented to the user is pretty pointless, the difference between 2x AA and 32x CFAA might be interesting.

I've been pleased you've been including some World of Warcraft benchmarks in recent reviews, and I'd urge you to keep doing so for two reasons:

1) It's just so popular - with something like 12 million current players worldwide, a hell of a lot of folks are playing it and want to know what sort of performance they can get when buying a new CPU/SSD/GPU.

2) It places different demands on a system than most games out there. WoW isn't particularly graphically demanding by modern standards, and although the graphics will improve with the Cataclysm expansion, I'm pretty sure they'll still be on the low end of things. However, because of the MMO nature of the program, and having to manage tons of other characters around you, it places particular demands on the CPU, memory, and SSD/HDD that other games do not. This is why I think it continues to deserve a place in future benchmarks. Reply

I'm in full agreement with point 1, but I think you're actually wrong about point 2. WoW is plenty graphically demanding. While the "core game" might be 5-6 years old, the graphics have been refreshed with each expansion-- while the engine is aging, lots of new effects are added all the time.

While individual spell casts aren't that graphically intensive, a raid will often have more textures on-screen than almost anything else. While an individual player model isn't the most complex, you're scaling that up to 25-50 models at any one time with multiple textures, and that's before the environment. It's different, yes, but the load itself isn't small at all. Reply

Please bench at the resolutions the majority of us play at. Most monitors are 1680x1050, 1920x1200, or 1920x1080. How many people really have 30" monitors or triple Eyefinity setups? I think super-high res is great as a bonus to an article, but please continue to focus on what the rest of us have and can afford.

I also wouldn't mind a focus on the best bang-for-the-buck CPU - for example the i7 920 at stock speed, or an affordable stock-clocked i5. Reply

The most important thing to me, regardless of the game is that the card I'm reading about is achieving playable frame rates. For example, a lot of sites these days like to run all of their games at 2560x1600 with 4xAA minimum, even if this puts the average framerates in the 20s or 30s in some games.

It's a waste of time to even read that benchmark, because I'd never play a game with such terrible performance, especially on cutting-edge hardware. It doesn't matter how high the settings are or how awesome the graphics card is; if the performance is not playable, scrap the benchmark and lower the settings until the results become useful. A simple note of "2560x1600 with 16xAA was not playable in this game on any card tested" will suffice.

This gives a nice spread of different titles for each genre as well as those that support DX10/11 under Vista and/or Win7 which will be important as NVIDIA launches their DX11 hardware. In the case of Dirt 2 it's already got a built-in benchmarking tool which should also make things a bit easier I would imagine.
Reply

I would also add to the review a very old low-budget machine - I mean something with a first-generation PCIe x16 slot - to see if changing the GPU could be productive in such an environment. (And yes, I understand a person with a P4 631 will not buy a Fermi 480, but maybe a passive 5450 could be a good upgrade with an old SiS chipset, as you can find in many Lenovos, for example.) Reply

My reasoning is that EVE is one of the online games with the most players, and it has some of the best graphics for online games. It is not a DX11 game, yet, but you often get the situation where the player has 2 or more screens with multiple games running on them at high resolution.

I don't think the exact games matter much at all. I think the focus needs to be on running multiple GPUs for comparison at multiple resolutions, likely including 1280x1024 and then higher widescreen resolutions, and on eliminating the CPU as a bottleneck as much as possible by using games that really push the GPU rather than the CPU.

It would be nice to have a comparison of the same GPU in both an AMD and an Intel based system. Yes, an Intel CPU will get higher performance in most games than AMD, but CPU performance aside, is the card reaching its potential in each situation? Reply

One thing that you almost never see is a comparison of the same video card on different motherboards. For Intel processors, you have Intel and NVIDIA chipsets, for AMD processors, you have AMD and NVIDIA chipsets available.

Now, it is very possible that an NVIDIA video card will work better on an NVIDIA chipset, for example, or that Intel may tweak things to intentionally or unintentionally hurt the performance of AMD/ATI Radeon cards. It would be nice to see more comparisons that try to expose where a given chipset is the best choice for a given video card.

So, for Radeon cards, use a Phenom II X4 965 based system and see if performance is better with an AMD chipset compared to NVIDIA.
Reply

I think benchmarks are oftentimes misinterpreted by those new to AnandTech. I believe many people look at a review with one game in mind, and it often simply isn't possible to accommodate everyone. However, I believe it would be beneficial to establish a standard framerate/experience you are trying to achieve. This can be a bit subjective, but I do believe that for the more popular older titles most people aren't asking whether a card will be able to run the game; rather, they are asking how well it will run.

Take the hardly ignorable WoW or Crysis. Benchmarks are just graphs without the knowledge of AnandTech there to interpret them for your new viewers. I believe you would gain a larger viewership if you dedicated a bit more narrative and explanation to these games. Contrary to the popular phrase, "but will it run Crysis," people want to know more than just that. They want to know what tangible benefits they may see from a new card.

I think it would be beneficial to show a few major games with different options enabled to show how the playing experience is improved. Options such as viewing distance, AA, shadows, and resolution.

On that note, reviews linked to articles describing how to optimize certain games would be helpful. Many gamers have little knowledge of why you get a certain FPS and they don't. Just as you cross-reference many of your articles, that would really help spread the education on how and why you conduct your benchmarks the way you do.

Which leads to "optimising games." I don't believe you have to actually review each game or provide an in-depth technical overview of it. Instead, I see this as an opportunity for AnandTech to educate viewers about the many options within most games. Perhaps this is a place to encourage your forum to tackle the task of showing, for each game, the different tiers of options users should select when they want to improve their gaming experience.

I would suggest that benchmarking popular games with built in benchmarking tools - such as Resident Evil 5 - would be useful since it is then relatively easy for a reasonable proportion of those reading the article to judge the relative benefit of the reviewed GPU to the GPU they are currently using. Repeatability at home is key to this suggestion.

Also, I agree that a spread of game types is desirable to cover the interests of the maximum number of readers in a given review. Reply

I think that the next battle of GPUs will partly be fought on the GPGPU-front. Games are starting to use DirectCompute or OpenCL for post-processing effects, and nVidia has been using GPGPU-accelerated physics for a while...

So I would like to see some GPGPU-related benchmarks in there. SiSoft Sandra has some, GPU Caps Viewer has some... You could probably also use the samples from the DX SDK, nVidia CUDA SDK and ATi Stream SDK (they contain OpenCL and DirectCompute samples as well, to keep it on a level playing field).

If an OpenCL- or DirectCompute-based physics solution arrives (Bullet?), it will be very interesting to do some benchmarking on that as well.

Aside from that, it will be very interesting to have some tessellation-heavy tests in there, as it seems to be the biggest difference between ATi's and nVidia's upcoming DX11 architecture, graphics-wise. Reply

I agree with prior suggestions of adding Aliens VS Predator (2010) as one of the DX11 test beds. It has excellent use of tessellation, and certainly can stress modern cards enough to get a good gauge of performance.

STALKER: Call of Pripyat is another great DX11 title to test, as it can really bring cards to their knees at full settings/high resolution.

Synthetic benchmarks are useless to most users, and I disagree with adding them, as it will only distort some readers' views of actual performance. They are so dependent on system settings, tweaking, and overclocking to push figures up, they don't represent realistic performance in any way to most end users. They simply will make reviews take longer, and skew the data in some cases for most readers. Reply

Now I for one would like to see the very compute intensive Football Manager 2010 make it into your suite.

As it's more or less completely CPU bound it's not very good for separating one GPU from another, but on the other hand it's shown to be fairly well threaded in later versions, suggesting it would be an interesting choice when determining CPU performance.

Other than that I'm gonna agree with most others here. No more synthetic benchmarks. Reply

I'd suggest relevant and popular games to be benched in feature article. That way viewers will be able to see the performance of the game and know which is the minimum required GPU to play the game at satisfactory settings.

Just leave those graphical intensive games to the GPU benchmark suite. Reply

3DMark should always be included in the bench. Why? Because it's free and provides a good baseline for graphics card performance.
1) Sure, vendors optimize their drivers for those synthetic benchmarks, but if both NVIDIA and ATI optimize for it, then it's fair.
2) It's available for free to download. If the benchmark suite uses games like Crysis and someone doesn't actually own Crysis, then how do they compare their card against the benched cards?

I second the suggestion to include some function to auto-detect the user's graphic card and display the scores for the card in the benchmark.

It's also important to keep using the top-end CPU to avoid bottlenecking. If I want to check CPU performance, I'd go to the CPU section. If they included too many processors, that would consume a lot of their time.

If it's possible, I would like Anandtech to let viewers download the benchmark files (so users can run the same suite themselves).

The beauty of PC gaming is that we get a massive variety of games to play on our video cards. We get a lot of the big budget blockbuster cross-platform titles that also hit the 360/PS3 and we also get lots of nice exclusive RTS, MMOs, RPGs and heaps of Indie titles too.

2 x MMO
2 x FPS
2 x RTS
2 x RPG

You'd want games that stress a machine, work with all features if possible (AA etc.), and you'd want to cover some of the more popular 3D engines like Unreal Engine 3, Valve's Source etc.

For RTS you could use Supreme Commander II, Napoleon Total War or Starcraft II if it's released before your benchmark update and Sins of a Solar Empire to cover the Indie aspect of PC gaming.

For MMOs you'd want to stick to solid players that have been around for a while like WoW, Warhammer and Age of Conan or maybe risk a newer Cryptic game like Star Trek Online (same engine as Champions Online).

For FPS you'd want to test both ends of the scale in terms of scope, Bad Company 2 and Left 4 Dead 2 would make good choices here.

I'd like to see more OpenGL benchmarks. As an OpenGL developer, I would like to see how stable and optimized the different GPUs are with respect to OpenGL. If there are not enough good OpenGL games, you could use some real-time simulation software, such as something based on OpenSceneGraph or Gizmo3D.

Also, I would like to see some OpenGL driver compliance analysis (with respect to the OpenGL specification), as well as performance comparisons for precise OpenGL features that are not typically used by popular games (ex: glReadPixels, glTexSubImage, arb_multisample, complex GLSL shaders, etc.).

In other words, I would like to see how these graphics cards (and drivers) can be used for OpenGL development of real-time, performance-oriented applications. And at the same time, push these companies into providing better OpenGL support. Reply

Don't include Eyefinity / the SLI alternative yet, make it a separate one-off test. At the moment it's way too niche to be useful. Enthusiasts can already gather a lot of easily available data from various forums and other sites if they're interested.

Games to bench:

- keep Crysis: Warhead (just for the sake of it)

- maybe drop Far Cry 2; the engine has had no impact on the market and it's a forgotten game already

- keep Dragon Age: Origins; it's a bit less taxing than The Witcher but more contemporary

- keep Dawn of War II; maybe upgrade to Chaos Rising if it has an in-built benchmark especially since DoW II was a TWIMTBP and Chaos Rising is ATi sponsored now (implications for CF?)

- add Dawn of Discovery (Anno 1404); it's a beautiful game and it's taxing as hell

- keep the Source Engine and go with L4D2; especially viable for low-end and mid-range

- add DiRT 2; you guys need a racing game, the engine is the basis for the new F1 game and it's DX11

- add Metro 2033 or Stalker: Call of Pripyat; exotic, possibly less polished engines and overall good indicator for new unusual games as they're heavily into DX11

- consider Just Cause 2 when it comes out; it looks like a benchmark waiting to happen...

One really nice feature that I'd appreciate would be sound clips of different cooling solutions, maybe in a YouTube video that includes talking so we'd get a good measure of relative loudness and tone. The current dB(A) measurements are not very meaningful in my opinion.

I really love the idea of a comprehensive overview and I'm happy with the current benchmark suite, but only half of the games in there seem representative to me. And good call on excluding synthetics; I don't want this to turn into the early noughts again. Many people seem to have forgotten that period already. Reply

Also, as well as using more power, cards are now running at higher frequencies. This raises the potential that carelessly designed cards (especially if overclocked) will start emitting excessive EMI, causing errors and/or failure in other components (i.e., they irradiate the other components in the computer). At the moment any failure of the computer is just assumed to be overheating. This needs to be studied more; sadly, nobody seems to be checking it at present. If graphics cards are increasingly used for long-running computational tasks that require high accuracy (i.e., CUDA or OpenCL applications), then reliability becomes much more important.

The above two issues fit into basic requirements; I think they should come ahead of any performance-related testing.

I'd like to see a review, or a section of a review, dedicated to testing mainstream gamer cards with professional art applications: comparing the GeForce 4xx and Radeon 58xx with some of the FirePro and Quadro pro cards. Test things like 3ds Max render times as well as general real-time viewport performance/responsiveness; same for Maya. Test real-time shader plugins for said programs to see if one brand handles them better than the other. I dunno, maybe this is beyond AnandTech and more for a 3D-specific website, but it's something I am interested in seeing. Then there are the sculpting programs too. I believe Mudbox currently uses the graphics card, while I think ZBrush still does not. Photoshop, After Effects, etc. See if the mainstream cards offer worthwhile value for pro visual applications vs. the ridiculously expensive pro cards. Reply

Well, I know they're few and far between, but some OpenCL benchmarks of some type (compression and cryptography come to mind) would be nice. Also, when they come out, some DirectX 11 tessellation-heavy games would be good. Reply

Anandtech already focuses on the essentials with a very good strategy: only three resolutions that reflect the power of the card, split into two sets, high end and lower end. AA is only used when necessary. It's simple, efficient, and covers over 90% of real-life scenarios.

In light of this, I think Anandtech should use two platforms, one for each resolution set: a dual core running at 2.5GHz with HT off for the low-end resolution set, and the current 3.33GHz quad core for the high-resolution set.

It's either that or test both sets with an overclocked quad running at 4.2GHz (the highest speed on air) and RAM at 2600MHz to eliminate all platform bottlenecks and isolate GPU performance.

Which approach is best? With Crossfire / SLI / dual-GPU card solutions, it's clear that they are CPU limited in some scenarios. On the other hand, only a minority overclock their quad to 4.2GHz. Maybe then provide only one result at 4.2GHz at 2560x1600, and only with multiple GPUs?

Now let's talk about the games. There are two approaches: best selling or best rated. I would consider both and make a top 10 out of that.

I have to agree with the OP on this: 3 resolutions and no more. I'd even argue for only 2 resolutions, because two things happen:
1) Running a game that stresses memory as it fills the GPU's RAM
2) Running a game that pushes the limits of what the chip can do

Higher resolutions generally focus on (1) and lower resolutions on (2)... so why bother having any more than that? (Granted, the cutoff point will vary, but having 1280x1024 and a higher one like 1920x1080 or 2560x1600 could accomplish it.)

Earlier in the thread there was also the argument for more RTS games...most of the gaming I do is RTS/RTT and FPS. From that perspective I'd like to see Starcraft 2 upon release (and I know it doesn't push Graphics as hard as it could) and then any FPS that pushes graphical boundaries.

thanks and keep up the good work

Finally, I still like to see synthetics... I enjoy benchmarking, and although I won't buy a card based on a 3DMark score, I still like to see it. So please keep at least 3DMark in y'alls set-up. Reply

I would like to see the following: Shattered Horizon (a very good looking PC-exclusive game), STALKER: CoP, AvP (DX11), Battlefield (very popular; DX11/3D/Eyefinity), and the newest Total War game (Napoleon?). That about sums up my desire for new games. I would, however, like to see older card/CPU comparisons as well. I do not have an i7, so I would like to know what I can expect out of my CPU coupled with my GPU. Reply

I had a lot of trouble finding CoD MW2 performance reviews. It didn't make sense to me that a game as popular as that one did not get the attention of hardware testers; it seems like only games that are demanding of high-end hardware are used in reviews, which makes sense of course. However, other popular/mainstream games that virtually everyone plays are not included in the Gaming Performance section of CPU and/or GPU reviews. Reply

I'm pretty happy with your current line-up, but yeah, you should probably get some newer games in there, though only if they're more demanding than an older benched game.

I want to see:

- GTAIV because it's.. GTAIV. I know after all the patches it's still probably not optimized for PC, but it can't be too bad now. And, it's GTAIV!

- The Witcher Enhanced Edition cause it's pretty demanding I think.

And whatever happened to the mystery of better performance when ATI cards are running with Intel CPUs and when nVidia cards are running with AMD CPUs (or something like that)? Has this been solved? If not, it's something I'd like to see tested. Reply

Metro 2033 - the first game that seems to really push DX11. Also, the "optimal" requirements posted on Steam are simply insanely high, so the game has a chance to stay in the suite for years to come

Shattered Horizon - the first and only name I can remember that was written purely for DX10 and requires a DX10 card. While I don't think well of the game, it should provide a good benchmark for DX10 cards

Tom's Hardware recently had an article where they tested 2D performance on recent GPUs and found that it was pretty abysmal.

I'm not sure what apps outside of a benchmark you could use for testing, but it might help keep AMD and NVidia on their toes with up-to-date 2D implementations in their device drivers if they knew it would be tested.

For all mainstream / high end cards, MINIMUM resolution of 1680x1050.
I hate seeing reviews of 295s and 5870s etc with fps graphs for 1280x1024, as if anyone who could afford those cards would use a cheap 4:3 monitor...

Also, I second a value graph / table, i.e. £/fps or $/fps, for you guys.
For an example, see www.hexus.net
They 'normalise' the numbers a bit first, too: for fps above 60, they halve the additional fps, since frames above that sort of level are about half as useful (e.g. 200fps vs 100fps in L4D is not nearly as meaningful as 40fps vs 20fps in Crysis; in the first example both are highly playable, in the second 40fps is much more so than 20fps).
And then, for icing, they also provide value tables for overclocked result too.
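That normalisation is simple enough to sketch. A minimal Python example (the prices and frame rates below are made up, and the halving rule is just my reading of the hexus.net approach described above):

```python
def effective_fps(fps, threshold=60.0):
    """Frames above the threshold count half, since fps gains up
    there matter less to perceived smoothness."""
    if fps <= threshold:
        return fps
    return threshold + (fps - threshold) / 2.0

def value_score(fps, price):
    """Normalised frames per second per currency unit."""
    return effective_fps(fps) / price

# Made-up numbers echoing the L4D vs Crysis example:
print(effective_fps(200.0))  # 130.0 (60 + 140/2)
print(effective_fps(40.0))   # 40.0 (below the threshold, unchanged)
print(round(value_score(200.0, 260.0), 3))  # fps-per-pound for a hypothetical £260 card
```

The overclocked-result value tables would just feed different fps numbers through the same function.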

The change I'd like to see has less to do with specific games, and more to do with how people use your reviews.

Having 6 games reviewed at 2560 or even higher is pretty pointless. Budget and mid-range cards far outnumber the $600 king cards, and people who are buying mid-range or budget cards probably aren't running huge monitors, so the higher resolutions are less meaningful.

I'd also suggest trying to keep a mix of new and old games in your reviews, for two reasons: games on PCs have a lot of replay value and their engines form the core of multiple games, and it also helps create a point of comparison when looking at older reviews. I raise this point because many of us are reading reviews of new graphics cards because we are looking to upgrade our older card. If we can compare old reviews of our card with some of the same games/resolutions to the new card, then that's a bonus.

The other thought I'd have is to try to keep a good mid-range card from each manufacturer from the last one or two generations in your reviews. This would also help the many of us considering an upgrade see just how our current card (or at least a representative of the line) stacks up. Reply

I would love to see more information on how exactly you benchmark. While I generally trust Anandtech to use meaningful scenarios that somewhat reflect real game performance, you provide very little info on how you get your data. Or when you do, it's often hard to find, as it's only mentioned in one article.
Looking at what other sites use as benchmarks, I know that they're using a lot of timedemos that have about as much meaning as 3DMark, as they do not reflect actual game performance.

If you say that you don't want to use synthetic benchmarks as they're "meaningless" then you should explain how you benchmark and why your numbers really do reflect actual in-game performance. More transparency is the way to go here imho.

Maybe you could do a "How we test" article that explains your test methodology.

Also, it's not always possible to choose a "worst case" scenario as a benchmark, so it would be nice if you said how many FPS are actually necessary for stutter-free gameplay. This is, for example, necessary to give your WoW numbers meaning, as right now we don't know how a system that gets, say, 80 FPS in your test performs under non-repeatable raid circumstances.
The goal of benchmarks should be to assess relative performance and absolute performance (i.e. how much performance do I need to play X at details Y?), and your articles often lack a bit in the absolute performance part. It's relatively easy to answer "What GPU should I buy for $200?" after reading your reviews, but you can't really be sure about how much you need to spend to get the performance you want. Reply

I am not sure if you have seen the project out of Microsoft Labs called Pivot. It's a way of seeing your data in different ways. If all of your data were compiled together it would be very beneficial to folks. For instance, just adding your normal GPU reviews along with your chipset and processor reviews would allow someone to combine all of them in meaningful ways.
It could allow someone to build a computer out of the various parts you review and see how it would perform under the various scenarios you test against. This would mean someone could pick a combination of parts (some old, some new) and see exactly how their computer would perform. I might sound like a shill, but I seriously suggest you look at this technology. It's free to download at the moment, I think; I am sure if you google Pivot + Microsoft Labs you would be able to check it out for yourself. I know this has nothing to do with a new test bed, just a general suggestion on data presentation. I think you should add Metro 2033 when it comes out, as it has DX11 and is supposed to be its own engine. Also, CryEngine 3 will be out, and you should add that as soon as it's available. Reply

I think StarCraft 2 would be a good addition, as many people will be playing it.

Also, I would like to see some synthetic benchmarks included - not their overall scores, but their scores on certain functions. It would help explain differences in performance between cards across games. It helps to know why a card performs the way it does in certain games, and from that you can better predict performance in newer games.

Also, I'd like to see scores for medium graphics settings, which people buying mainstream cards will likely be playing at. Some like all settings at high but with low AA and AF and a resolution of 1280x720, which looks excellent on a 1920x1080 monitor but requires a lot less graphics horsepower.
Reply

I was recently burned pretty badly with my purchase of an ATI Radeon 5770, as part of a new Windows 7 build.

To make a long story short, ATI's 5000 series is actually MANY TIMES SLOWER than most pre-2004 GPUs in 2D applications that rely heavily on 2D drawing, like Photoshop, AutoCAD and Illustrator. In short, programs I and many others use to make a living.

Evidently a fix is available, but ATI isn't going to build it into their drivers till Catalyst 10.4 or 10.5.... nearly a full year after the release of the 5000 series. Utterly unacceptable.

I think that part of the reason for this gimpy performance is the excessive focus on framerates and 3D performance in enthusiast sites. The ATI 5000 series was designed to win benchmarks, and it does. But some of us also need to use our PCs for work. And in that case, your options are to either buy a $1000 professional GPU that basically uses last year's mid-range chip... or nothing. Reply

To give all you users an idea of what we're talking about here: if I try to select a bunch of objects in Illustrator, or move a complex shape in AutoCAD LT, my system slows to a halt as redrawing these elements takes forever. With my ancient 7800GTX, this worked beautifully - smooth and crisp and responsive.

Nuts, I just made myself nostalgic for that 7800GTX. Single slot cooling, silent operation, drivers that WORKED, 4xFSAA at 1600x1200. It was like they had found the Terminator's head or something.

But I digress. Point is: it might not even be necessary to design a test suite that accurately benches 2D performance, because that's not really what we're after. Rather, we just want a smooth user experience. So a single paragraph describing the subjective experience of those kinds of operations in Illustrator and AutoCAD relative to a GPU that is known to do it well (the 9800 series isn't a bad choice) might be good enough. Reply

I really wanna see old games tested more, especially for mid- and low-range cards.
I would also like to see a combination graph where you add all the numbers together, or some other sort of graphic that shows the combined results.
It's not really what you're asking for, but those are things I wanna see. :)

And also, when it comes to CPU-testing I really feel that the lack of multitasking testing is hurting the overall picture I get. Reply

I believe Bad Company 2 would make a great addition to the next GPU test suite.

Also, I would like to see different CPUs included, not just different clock speeds of the same CPU design, like:

i7 960
i7 870
q9550
x4 965
x2 255
e8600

But that may be more to do with a CPU benchmark.

Maybe a game test suite is what we need? Test a single game over multiple resolutions with multiple video cards and multiple CPUs.
Maybe it's just me, but I would like to see how the different hardware scales, especially at higher resolutions.

Although, to be realistic, I'm sure Anandtech would have to carefully choose how many additional CPUs they'd want to test, and for which specific games and GPUs, since the possible combinations for testing would easily skyrocket.

But it would be very valuable if Anandtech tried - especially so we can see the impact of dual-, tri-, and quad-core scaling with particular games and at particular GPU performance levels. Reply

Ah. My thoughts exactly. Different CPUs should be used, because not everyone has an i7 975. A lot of GPU benchmark results don't reflect what the consumer will realistically achieve. Reply

As others have surmised, average frame rate is not half the story. Other sites already use a line graph of their recorded frame-rate dataset, and it does give a great overview of that particular benchmark. The problem is that more than two cards on one chart becomes a mess to read.

You could improve upon the idea by using something similar to the Frequency function in Excel and graph the number of times frame rate was recorded at 10fps increments. This would clean up the graph and still reveal trends like, "Average frame rate is 84, but you will be spending most of your firefights 10-20 FPS south of that."
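As a sketch of that idea, here's how a per-second FPS trace (hypothetical numbers, assuming a FRAPS-style list of samples) could be bucketed into 10fps bins:

```python
from collections import Counter

def fps_histogram(samples, bin_width=10):
    """Count how many samples fall into each bin_width-fps bucket.

    Keys are the lower edge of each bucket, so {60: 4, 80: 4, 90: 2}
    means 4 samples in [60, 70), 4 in [80, 90), and 2 in [90, 100).
    """
    counts = Counter((int(s) // bin_width) * bin_width for s in samples)
    return dict(sorted(counts.items()))

# Hypothetical trace: the average is ~78 fps, but firefights dip into the 60s
trace = [88, 91, 84, 62, 65, 68, 90, 87, 61, 85]
print(fps_histogram(trace))  # {60: 4, 80: 4, 90: 2}
```

Plotting those bucket counts as bars gives exactly the "how often was I below the average" view the comment describes, without the multi-card line-graph clutter.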

Bad Company 2 - It's a new, high production value FPS, many have voted for this already

Supreme Commander 2 - Should provide a good RTS benchmark; Blizzard is notorious for making their games able to run on low-end hardware, so I think this would be a better benchmark choice than StarCraft 2

Batman: Arkham Asylum - Another hardware taxing game, like BC2

Bioshock 2 - A new, impressive looking RPG/FPS

(when it's out) Crysis 2 - Needs no explanation

I also roll my eyes whenever I see MMO benchmarks. There are way too many variables to get any reliable runs. Same for other games that don't have a benchmark built in, but they aren't as unpredictable as MMOs. Reply

I actually had a chance to talk to the GPG guys at GDC last week. At this point we're not planning on using SupCom2; they told us straight-up that it's not going to be graphically intensive (so they're pulling a Blizzard here). Reply

Ryan, total respect for the non-synthetic benchmarks, and even more respect for involving your readers. Let's have more of that. It means more to me than the reviews themselves.

Is the new methodology going to be used for the Fermi launch?

If so:
Don't you find it problematic? And if so, how?
You must know a lot of people speculated that you would alter your benchmark suite when Fermi was coming. Controlling the anticipation like you do here does not change that. Knowing we anticipated this situation, how do you want us to interpret what is happening?

I think we had the same problem with the Intel SSD reviews; fortunately that ended with the excellent benchmark suite you made yourself, instead of 4K random writes all over. Happy ending there, and a great benefit for your readers. I hope you will get there too with your GFX suite.

But changing it right now is highly problematic, because the comparison to earlier benchmarks is weak. I think Fermi should stand by itself, like the SSDs should. Reply

My goal as GPU editor is to refresh our benchmark suite roughly every 6 months. It's been made very clear to us that you guys like to see new games used, and this gives us the opportunity to rotate those games in.

This is a particularly critical point since our last refresh was for the Evergreen launch, which means we don't have any DX11 games in our suite. With NVIDIA soon to ship DX11 cards, we finally will be able to do some DX11 performance comparisons, so we're going to go ahead and refresh our suite.

We're not going to throw everything out, and every card is going to have to "stand by itself". Reply

I think I'd drop from the list:
- Any game optimized for the console market, which is hardware limited.
- Any MMO; games that rely on data transmission beyond the local machine are prone to too many variables to be considered a trustworthy test of performance.

I'd like to see more serious simulation games, or RPGs with realistic environmental effects. Reply

For GPU and CPU both. Bioshock 2 is the game that's inducing me to upgrade my computer, and it'd be nice to see an Unreal-engine game in your suite.

I just beat Bioshock 1, and it was like playing Myst on my C2D E6750/Geforce GTS 250. It's the first game I've ever played on this machine that hasn't run silky smooth at 1920x1080. So I'm definitely waiting till I upgrade before I try the sequel.

Also, some Distributed Computing benchmarks like RC5-72 or Folding@home might be nice, if you can find a release client that runs on both Stream and CUDA. Your site still has a bit of a DC crowd (though not nearly as much as Ars Technica, judging by your OGR-27 ranking) ;) , and it would also be a good indicator of GPGPU performance. Reply

You already use 3ds Max, Cinebench and POV-Ray in your CPU reviews. I know that in older versions the GPU didn't make a difference in render times, but that may have changed: could you check whether GPU power makes any difference in the versions you're currently using? If it does, I definitely want to see those included.

(I know you're supposed to use Quadros and FirePros when working with these apps, but the apps themselves have changed, and it's not really necessary anymore.) Reply

WoW... regardless of it being an older game built on older technology, the player base is gigantic. I'd wager that a lot of us wonder if that newly released GPU will make any difference during peak hours in the major cities or with all the settings cranked in a 25-man raid boss fight. Though I'll also state the obvious that from what I've been able to piece together, the game is much more CPU limited than GPU. Reply

I agree with bdunosk! WoW has the largest player base in the world for any game. Yes, it's typically more CPU limited than GPU limited, so benchmarks might be more applicable to mid-range to lower-end cards; thus, perhaps WoW benches should only be featured in reviews of those performance classes of GPU. However, I think benchmarking something more intensive like boss raids or Eyefinity setups (probably a lot more repeatable than boss raids) at high resolutions and high AA could present a more GPU-limited scenario that WoW players would be interested in.

I would also like to take this opportunity to reiterate that for performance/mid-range to lower-end cards, it would be BRILLIANT if you also paired them with mid- to low-end CPUs, since that is a more realistic user rig than pairing a Core i7 Extreme with a Radeon HD 5650 or even a 5770, for example.

I'm going to start off by posting a link to Tom's Hardware, since they did a review of the game's performance which I found rather puzzling: http://www.tomshardware.com/reviews/star-trek-onli... In that review, NVIDIA cards literally walked all over ATI cards (it's a TWIMTBP game, btw), but overall performance in the game is comparable to Crysis performance across the GPU bench. The game is definitely heavily GPU limited, and I want to see what AnandTech's expert testers can do to pound out how the game performs. It would be interesting to see separate tests for space and ground combat as well, since I'm sure the two perform differently (ground combat is definitely laggier than space combat). Reply

Ooh, I almost forgot to add: you need to post max, avg, and min FPS for each benchmark. I base my card choice off min, avg, and % min FPS, since I want a card to always perform above a certain level in my games for framerate stability. I would also be interested in seeing each video card tested on both AMD and Intel CPU platforms, as it has been proven here and elsewhere that some games behave differently with different CPU/GPU combinations, even in games that weren't CPU limited. It would also be good to test with different speeds of CPU within a given generation to give a baseline for comparison, since not everyone owns an EE i7. A lot of users out there are on dual cores, and while I'm not saying you should test with them, an i7 Extreme isn't exactly a real-world representation of what most users are playing with. Most users are still playing on stock-clocked dual cores or cheap quads, so if you really wanted real-world data you would at least toss in an AMD and an Intel bench, and at least one underperforming CPU, so people can see how their CPU can affect their gaming experience. If anything, at least include the data in the AnandTech Bench at a later date, with a link to it in the article. This would get more users to use Bench when looking for information on products, and it would be extremely useful for making recommendations based on an individual user's level of hardware. Reply
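The min/avg/%-min figures asked for above fall straight out of a list of FPS samples. A quick sketch (defining "% min" as the minimum expressed as a percentage of the average is my assumption about what's meant):

```python
def fps_summary(samples):
    """Max, average, min, and min-as-percent-of-average for FPS samples.
    A higher pct_min means a steadier framerate relative to the average."""
    avg = sum(samples) / len(samples)
    return {
        "max": max(samples),
        "avg": round(avg, 1),
        "min": min(samples),
        "pct_min": round(100.0 * min(samples) / avg, 1),  # stability metric
    }

print(fps_summary([60, 45, 80, 55, 70]))
# {'max': 80, 'avg': 62.0, 'min': 45, 'pct_min': 72.6}
```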

Every GPU review could include a representative (or average) result from the other categories. In cases where one card would dominate the graph, just briefly mention its results. And other than raw numbers, percentages would be nice as well.

So in a benchmark I could see, for example, this performance card at 100% and the IGP at 10% (meaning the performance card is roughly 10 times faster than the IGP in general).
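Presenting results that way is just a rescale against whichever card is picked as the 100% reference. A sketch (the card names and FPS numbers are invented):

```python
def relative_scores(results, baseline):
    """Rescale raw FPS results so the baseline card reads as 100%."""
    base = results[baseline]
    return {card: round(100.0 * fps / base, 1) for card, fps in results.items()}

raw = {"Performance GFX": 85.0, "IGP": 8.5}  # hypothetical average FPS
print(relative_scores(raw, "Performance GFX"))
# {'Performance GFX': 100.0, 'IGP': 10.0}
```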

Other than games, 2D benchmarks would be nice, as well as performance on Windows Aero acceleration: CAD, rendering, OpenCL, Photoshop, etc.

All in all, I don't think there is anything we need to add to the tests. We just need a different way of presenting the data to consumers, so they can easily find the answers they want. Reply

GPGPU benchmarks are something we want to do, but don't expect to see any kind of comprehensive benchmark for it in the near future. We've been poking in to this one looking for an OpenCL/DirectCompute benchmark and thus far have come up empty handed.

There's a somewhat wider range of programs that can use Stream or CUDA, but that means we have to throw out cross-vendor comparisons. So if we used such benchmarks, it would be sparingly. Reply

I'd like to see FRAPS-based benchmarks dropped. There are way too many "random" differences between one run and another if the game is not running in its own benchmark mode (i.e. always exactly the same scene). Reply

Racing games are the only games I play on PC, so they're the only ones that factor into my choice when buying a card. Often reviews don't have any racing games, or if they do it's something from EA with extra motion blur and unrealistic effects.

I understand that often racing games don't take up as many resources as some of the FPS, but often cards that are great at FPS seem to be crappy at racing games.

At the moment DiRT 2 would be the obvious choice for a racing benchmark, as it's relatively recent and supports DirectX 11. Reply

Also, a high-end i7 does not represent the real performance the majority are likely to get. I understand this has to be done in fairness to the cards, so as to bring out the maximum they can deliver, but there's no point if we average users never realize that benefit. Try using something upper-midrange like an i5 750 or one of the low-end i7s. Reply

#1. I agree. I would especially like to see MMORPGs tested, because some (like WoW) have huge user bases. Even though I don't play WoW, I'm curious, because the player base equals the potential marketability of the card, so it's important. Someone's suggestion that you try something taxing (like a 25-person boss raid, or maybe a 3-screen Eyefinity-like setup) with WoW is a good one, since WoW is otherwise less GPU-demanding.

#2. Bad Company 2, because it's very popular and adequately GPU intensive.

#3. PLEASE include benches with an Athlon II X4 or Core i3 class CPU, or at least a mid-range CPU if possible. Theoretical GPU performance isn't always as useful as knowing the actual performance you'd see when a mid-range GPU is paired with a low/mid-range quad-core CPU, which is a much more realistic rig, especially since most games are GPU limited with AA at today's widescreen resolutions. For example, between an Athlon II X4 and a Core i7 you might see a 20% FPS difference at 1080p with 4xAA in Crysis Warhead, depending on the video card, but potentially a 100% difference if you changed the video card. So lower-end CPU configs are valid to pair with many GPUs, but it would be good to see whether there is a significant performance hit, and in which games.

#4. Lower resolutions and/or no AA would be very useful as well, especially for mid-range and below cards.

P.S. And Mass Effect 2, just because it's so cool :P EVE-Online, for the same reason. Lol. Reply

I agree with this post. Having a mid-range CPU to test the latest graphics cards would be great, as it would show just how much (or how little) difference it makes.

Maybe even a complete new medium system. It's one thing to see an i7 980X ultra turbo extra at 4GHz with 6GB of 2200MHz RAM and a GTX 480, but it's a whole different story when you put in a Phenom II X4 955BE or Core i3 CPU with 2GB of RAM and then test everything from a GTX 480 down to an 8800 GT.

Simply put, only 1% of people have systems like the ones you test games on, so a medium system would be much more real-world and give results people can identify with.

I would also like benchmarks to be run on a bare operating system, meaning no extra software (antivirus, Winamp, firewall, Skype, MSN, etc.) other than the required drivers, DirectX, and the game itself. Reply

Totally agree with number 3. I had spent A LOT updating a nice PC to make it the best I could for gaming (going from an Athlon LE-1640, excellent for general use, to an Athlon II X4 620; from 2GB of RAM to 3GB; from a 4850 to a 5850; getting an Xbox 360 controller; changing the mobo to one with a decent sound chip since I couldn't fit a sound card near the GPU; etc.), and I just don't want to start thinking about getting an i5 or Phenom and spending more. So putting average CPUs in the benchmarks would be cool. :S

And yeah, I'm a little angry about having spent a lot on PC gaming :( Reply

I agree, we definitely need multimonitor benchmarks, even if 2 NVIDIA cards are not available. Eyefinity is on the market and is definitely not a gimmick, so it should be tested. Just as Intel's turbo mode is always left enabled because it is in hardware, the situation with Eyefinity is identical. And there's no doubt AT can afford 2 more monitors, so there's no excuse there. Benchmarking shouldn't just be about comparing competitors' hardware. Reply

There needs to be at least one game that can be used to look back through old GPU benchmarks for "historical" purposes.
I like being able to see improvements across generations, e.g. 8800 GT to GTX 280.

I think Crysis fits the bill nicely and will continue to for another 12 months. It is still capable of bringing the latest cards to their knees at high resolution and high detail.

My only other recommendation would be Battlefield: Bad Company 2 as your modern FPS game. It is quite demanding on the video card compared to some other (cough, MW2) console ports, which a PC can push out at over 100 frames on high without breaking a sweat. Reply

Ideally, I'd like to see the best selling title of every major engine on the market and also keep the synthetics. I want to know why certain cards are strong in some applications and I also want to see how chip design translates into raw computing power in different ways. Reply

Ideally a graph over time of fps for the different cards while playing real games.

My impression is that the NVIDIA Fermi is a grunt that gives solid performance under all kinds of tough conditions (i.e. higher minimum frame rates),
and that AMD cards can sprint faster in less demanding scenes (higher maximum frame rates).

Personally, higher minimum frame rates matter more to me. A detailed pros-and-cons article needs to be written, and benchmarks need to give useful information about this for it to be valued.

The big problem with minimum framerates is that they are often inconsistent. Unlike averages, which neatly smooth out any quirks, minimums can vary for no good reason, which is a problem given that we shoot for consistency and repeatability here.

For the games we have that do support minimums, I'm going to vet our results and see if it's consistent enough to meet our standards. Reply
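A sketch of one way such vetting could work: compare the run-to-run spread of minimum FPS against the spread of average FPS using the coefficient of variation (all run data below is invented for illustration; this is not the site's actual method):

```python
# Coefficient of variation (stddev / mean) as a repeatability measure:
# a small value means the metric is stable across repeated runs.
def coeff_of_variation(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return (var ** 0.5) / mean

avg_fps_runs = [61.2, 60.8, 61.5, 60.9]  # averages are usually stable
min_fps_runs = [38.0, 29.0, 41.0, 24.0]  # minimums often are not

print(coeff_of_variation(avg_fps_runs))  # small
print(coeff_of_variation(min_fps_runs))  # much larger
```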

Empire: Total War: heavy on both CPU and GPU, and can easily be made more demanding with larger battles.

Battlefield Bad Company 2: Popular, DX11, challenging.
=======================
I would also love to see an increased focus on minimum frame rates. If the minimum is low, or the performance is very up and down, the game seems very choppy. I'd like to know whether a card gives consistent performance rather than just high averages. Reply

1) I would like to see one Eyefinity/NV MM dedicated benchmark geared towards a 3-display ~1920x1200 setup, and not just a dismissive "if you have the money for 3 monitors just buy a 5970 or GTX480 SLI" comment. I don't have a specific title in mind, I'd just like one that both actually works and for which you take "standard" benchmarks to make extrapolation to untested titles easier.

2) I would like to see an OpenCL benchmark (or whatever GPGPU library is most applicable/relevant.) Nothing fancy, I'd just like something from which to get a general idea of where things are as we move towards shifting FP to the GPU. Reply

I develop HPC applications for chemistry. I'd like a benchmark that does many matrix operations using very large (10k x 10k) matrices. Another useful workload would be 4 threads of iterated floating-point calculations, a la Monte Carlo or an MD simulation. In all of these scenarios the floating-point variables would need to be double precision. Reply
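As a rough illustration of the workload described above, here is a minimal CPU-side sketch using NumPy (the matrix size is reduced for speed; the commenter's case would use N = 10,000 and, for a GPU benchmark, a GPU BLAS such as cuBLAS rather than NumPy):

```python
import time
import numpy as np

# Time one double-precision matrix multiply, the core operation the
# commenter asks to benchmark, and report an approximate GFLOPS figure.
N = 512
a = np.random.rand(N, N).astype(np.float64)
b = np.random.rand(N, N).astype(np.float64)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

gflops = 2 * N**3 / elapsed / 1e9  # ~2*N^3 floating-point ops per matmul
print(f"{N}x{N} float64 matmul: {elapsed * 1000:.1f} ms, {gflops:.1f} GFLOPS")
```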

It would be really great if you could devise some kind of normalised performance rating that indicates performance relative to some reference hardware and can also be normalised for changes in the test suite.

Every time you devise a new test suite, you run both the old and new test suite on the same range of hardware and take a statistical look at what delta you observe in the performance rating, then scale the new results to be normalised with the old test suite. I might be missing some subtleties in this - I just gave it 2 minutes of thought; I'm sure you smart fellas can devise something a bit more robust and future-proof...

You'd then get a sense of absolute performance from direct FPS measurements and a sense of relative performance between generations of hardware and test suites.

I guess it would be the ATmark and ultimately not synthetic. You could insert other items into the equation such as render times and even weight these based on your perceptions of what people value.

Would love to see something like this: you could circulate it to key hardware vendors to gain their buy-in and then keep it confidential, to avoid sniping from the sidelines about your choice of weights and the validity of the formulation.
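A toy version of the normalisation idea above, with invented scores and a simple average-of-ratios scale factor (a real scheme would want something more robust, as the commenter says):

```python
# Run both the old and new test suite on an overlapping set of cards,
# derive a scale factor, and express new-suite scores on the old scale.
# All scores here are invented for illustration.
old_suite = {"card_a": 100.0, "card_b": 140.0, "card_c": 180.0}
new_suite = {"card_a": 80.0, "card_b": 115.0, "card_c": 150.0}

# Average ratio old/new over the overlapping hardware.
scale = sum(old_suite[c] / new_suite[c] for c in old_suite) / len(old_suite)

def normalize(new_score):
    """Express a new-suite score on the old suite's scale."""
    return new_score * scale

print(normalize(new_suite["card_b"]))  # roughly comparable to the old 140
```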

I agree that a summary rating would be really useful, but I know it opens an old debate that was never truly closed: the results (of what card is faster than which other one) will depend on the fine details:

* simple average vs. geometric average: I'd use geometric, because it's least affected by a big difference in one particular game, but I'm not totally sure it's the best option, and it definitely will affect the comparisons

* the reference card used: in the old days, it would make one maker look better than the other depending on whether you used a reference that was good for DX or OGL; now I guess it won't matter so much, but I'd pick an ATI one, as they are the ones dominating the market; and pick a relatively high-end one, otherwise you risk having to redesign your rating system too often

* the weight of each resolution: does 25x16 really matter? I'd only use 19x12, but 12x10 and 16x10 are still popular...

* the weight of each setting: again, I'd love to use AA+AF always, but for the review of a mid-to-low-end card it makes no sense

* the weight of each game: you may think one is more important than others (in the old days that would be quake3)

still, I'd love to see anandtech take a stand on each of these issues, and publish that ATmark in their reviews Reply
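The geometric-vs-simple average point above is easy to demonstrate with made-up per-game ratios (numbers are invented; ratios > 1 mean the card is faster than the reference):

```python
import math

# Per-game performance of a card relative to a reference card. One
# outlier game dominates the simple average but barely moves the
# geometric mean, which is the commenter's argument for the latter.
ratios = [1.05, 0.98, 1.10, 3.00]  # last game is the outlier

arithmetic = sum(ratios) / len(ratios)
geometric = math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(f"arithmetic mean: {arithmetic:.2f}")  # pulled up by the outlier
print(f"geometric mean:  {geometric:.2f}")   # closer to typical behaviour
```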

Bioshock 2 - Because I play it, I love it and it's new.
Mass Effect 2 - Same reasons.
3D Mark - because it's silly to not have an objective measurement to weigh the subjective games against.
Then just keep up with the latest CryEngine game, not because I like the games, I don't enjoy them at all, but because it stresses systems. Reply

Bioshock and Mass Effect 2 aren't really new anymore. But then again, neither are any of the games I play. I have yet to find a real good reason to upgrade my system, and it's based on fairly old tech - although it is waning a bit. Good games take a good bit of time to play, and there are so many out there that are old now and will run on my current system. Reply

I know the issue of repeatable test runs makes it difficult, but I recommend a couple of MMOs to be benchmarked. For instance, Champions Online seems to do pretty well at putting stress on even the most up-to-date systems.

Maybe you could talk to the MMO developers about setting up a sandbox server just for running benchmarks? It would be very nice to see how a certain video card performs beyond the tutorial zones or main cities. Reply

I'm not sure this exactly falls into the category of the GPU test suite, but I'd like to see what games can be played well on lower end/older/integrated graphics.

It's not very interesting to see that something (like integrated graphics) gets 5 FPS in Crysis Warhead; it's more interesting to see what games can be played, maybe Half-life 2, Quake 4, or even back to Doom 3 (or games from that sort of vintage) to give an idea of the age of games that give a good experience. This would especially be interesting to include in reviews of Ion-style notebooks (which are often CPU limited), CULV notebooks (which tend to be GPU limited) and other low performance/integrated systems. Reply

As a gamer who's not too interested in the latest and greatest eye candy, I really like the idea of legacy benchmarks for mainstream and low-end graphics cards/IGPs. One thing that drives me insane when shopping around for a new laptop or netbook is trying to find information on "what games can it play?". In all sincerity, the only PC games I play are Halo, WoW, Unreal Tournament (original and 2004), and some occasional Source-based games (CS, DoD). Reply

I would like to see Bad Company 2 in this lineup for two reasons.
1: At this moment it is probably the best selling game on PC so there will be a large user base. This will give a lot of people a good comparison point since they already know what fps their card can do in this game.
2: It's currently the only popular DX11 enabled game. They only use it for soft shadows, but still.

I also disagree with the no-synthetic-benchmarks argument. They are important because they allow us to see which of the two GPU companies currently has the better architecture.
Just take Unigine. Although the reported power of the GTX 480 is not so far away from an HD 5870, it seems to annihilate it there, because NVIDIA focused on tessellation power. We wouldn't be able to see this in a simple game bench, where anything from a certain shader to lazy programming to the CPU can limit the fps.
They should however not factor into the final rating. Reply

What's really needed is software that stresses the maximum number of cores in a CPU, supports all the features of DX11, and is stressful enough to put even multi-GPU setups to the test, and no software exists that can do all that... at least not yet, unfortunately. (Maybe the next version of 3DMark, with DX11 support?)

Though as a special section, one thing that can make up for the lack of the above, at least in terms of being GPU demanding, is running benchmarks across 3 displays. A single 30" display still costs a fortune ($1000~1500), while users can easily get three 24" LCDs for much less (about half as much), and it will be much harder to drive those three at 5760x1200 than a single 30" LCD at 2560x1600.

Direct comparisons between brands in the above scenario should only be made once all GPU makers have a single card that can output to 3 displays, though obviously it would be a moot point in multi-GPU shootouts (two HD 5870s versus two Fermi cards, for instance), as both can do that from what I've seen. Reply

Too right. FurMark is a brilliant test. Even if they don't use it to bench, they should at least run a system through a few hours of benchmarks to get a rough idea of how reliable these cards are and how hot they get under worst-case circumstances.

Also, they need to test INSIDE a case, so we know how hot it will be in real life rather than on an open bench. Reply

Often we see only FPS-type games in the benchmarks. Although I feel they are important, as many GPU-intensive games are made for that genre, there should be some other genres too: RPG, MMORPG, RTS, simulation, etc.

Also, since graphics cards today come with many other features, like physics engines and GPGPU functionality, it would be good if these features were taken into consideration when benchmarking, as GPUs are not used only for gaming nowadays. Reply

But that should illustrate a point - that depending on your needs, a top-of-the-line 5970 may be no better than your current 8800GTS.

Just like it was kind of silly to suggest the i7 980 was the best CPU yet for playing WoW because it pushed the frame rates from some rate too high for the human eye to pick up to an even higher rate. Reply

Yeah, but the thing is, if WoW is working fine for you now and it is the only thing you run, then why would you be looking at reviews? The point of a review is to show what a card is capable of. If you want to know "Will I be able to play Crysis, because I'm getting bored of WoW?", you can see "yes I can", rather than "I don't know, but it sure as hell gets higher frames in WoW." I'm not trying to bash WoW, but I'm 99% sure it is not good for benchmarking, especially since it has an FPS limit of around 160. Most GPUs will reach that easily these days, so all we'd get is "The GTX 280 and the 9800 GTX and the Radeon 5870 are all equally powerful, because they all got 160 FPS in WoW."

I concur, please include some strategy and role-playing games as well.

I know Guru 3D have been using Anno 1404 for testing and it taxed their rig decently enough on max settings. Something like Dragon Age might be interesting as well, certainly for me.

Civilization V is coming at the end of the year, and that might be a good example of a turn-based strategy game that's bound to be popular.

I realize that strategy and role-playing games often tax the CPU more so than the GPU but modern incarnations do need a bit of both.

Indeed, WoW may not be a good benchmark despite having 12 million players but it would be a good idea to keep an eye out for up-and-coming MMOs. Whenever something that's actually viable is outed, that is. *sigh*

Obviously shooters are going to be the most popular ones still but I'd rather stick to no more than 2-3 different shooters and one game each from other popular genres than 5-6 shooters. Reply

Dungeons and Dragons Online and Lord of the Rings Online are two games that are popular enough to deserve some attention, with DirectX 10 support currently and, according to an AMD press release, DirectX 11 support coming in 2010.

There should be at least one turn-based strategy (these tend to often require a very zoomed-out view of what is going on), one RTS (where you tend to focus on individual smaller units closer in), and one RPG with a mainly above view (where you are usually looking at a group of characters in individual rooms or caves etc). Each type of view has its own unique requirements. So that's three non-FPS games.

I'd add a further two simulation style games: one a flight type (either aircraft or spacecraft) where you're mainly looking at lots of distant objects except when landing or in combat, and one strictly ground level give or take the odd jump over a hill (probably driving but could be something like snowboarding). That makes for five non-FPS games.

Only once a good graphically-demanding representative of each of those genres has been chosen, should the rest of the list be filled with FPS titles, otherwise we'll end up with the usual "all FPS games except the odd one or two of some other genre thrown in if the author happens to like it". Reply

It used to be included in some Anandtech benchmarks and I request that it be reincluded. RPGs remain popular with the PC gaming crowd. Not only is this a good and fairly popular game, it is also a good test of GPUs as it still brings down many midrange GPUs to their knees. Reply

I'm going to start off the comments here with one condition: no synthetic benchmarks. Our editorial policy continues to be that we only want to use real games, as synthetic benchmarks just encourage AMD and NVIDIA to focus on optimizing for something people can't play. So please don't bother asking for 3DMark. Reply

[quote]
I'm going to start off the comments here with one condition: no synthetic benchmarks. Our editorial policy continues to be that we only want to use real games, as synthetic benchmarks just encourage AMD and NVIDIA to focus on optimizing for something people can't play. So please don't bother asking for 3DMark. [/quote]

EXCELLENT!

MAY I SUGGEST A SIMPLE RULE?
For any benchmark to be considered it must be based on an unmodified engine of a published (NA or EU or Asia) GAME.

That's it.

This means the end of 3DMark, Unigine Heaven, SiSoft Sandra etc in this benchmarking suite. Reply

I'd be interested to see a one-off article on synthetics. Not the high-level 3DMarks; I'm hoping there are some real low-level synthetic benchmarks around that you could use to test specific parts of each card. You could then extend that to look at how each of those components comes together to draw the finished scenes in games, with the potential to explain the difference between certain games and what bottlenecks are restricting each one. I realise this would be a load of work, but you're pretty much the only people that could do it :) Reply

For a major architecture article (e.g. Evergreen launch, GF100 launch, etc) we'll sometimes use tools like that to look at the capabilities of the architecture. But this isn't something we would do on a per-card level like we do with our GPU benchmark suite. Reply

I would like to see minimum framerates, and I would like to see most tests run at a resolution and quality setting that is playable.
So in most tests, minimum framerates should be at least in the 25-30 fps range, and average frames should be in the 50-60 fps range.
If minimum framerates stay around 30 fps, the average fps does not have to be as high.

And typical demanding gameplay situations should be tested. It's not sufficient for a computer to run a game at 60 fps in most situations; it also has to produce certain minimum framerates in critical gameplay situations, where there is more load on the CPU and GPU and fluid gameplay is even more important than in scenarios where you don't have to react in a fast and fluid way.
In these scenarios it is often the case that some graphics cards would switch places in the charts. Reply

Bring in WoW (no joke) at 800x600 up to 1920x1200, and then Crysis/Warhead/2 at 1680x1050 up to 2560x2048, to cover the low end and top end with varying settings at each resolution (at least 3 AA/AF combos for each). This will act as the "synthetic" benchmark for low and high graphics demand. If WoW is undesirable as a bench due to the server-connectivity issue or the lack of a benchmarking tool (maybe use some of the tools machinima makers use?), replace it with CS 1.6 or Freelancer or some other low-demand game.

Test each game at 2 resolutions & 2 quality-settings: that shows you how each card scales/doesn't-scale with both quality & resolution!
Anything less than 2x2 and you end up blind to the actual curve of performance for that card.
Anything more than 4 and you are filling-in the curve, instead of diversifying one's dataset with more games/apps.

Bonus rule: make 3/4 of the games the "problematic" ones (like that GTS or whatever it was called, the racing game someone mentioned that doesn't work right on ANY card...), so that the mindless enthusiasm we are biologically prone to can be torpedoed directly with some facts of life...

I find it really odd that in this particular case Anandtech's testing methods differ from the objective and scientific approach you normally take.

What is the point of testing against certain game titles when your argument for excluding 3DMark is that GPU vendors optimise their drivers for it? Last time I checked the release notes of a GPU driver, they contained a huge list of specific titles where the driver brings some improvement. What is that if not optimising for a single title? I'm not saying you should include 3DMark, but the decision to omit synthetic benchmarks rests on an exceptionally poor argument that is almost entirely without ground.

What I want to see from a GPU review is the efficiency of the HW/SW implementation of some standard API. If you test against "real" applications, you end up testing the application's implementation of that API and/or the driver optimisations for the application under test.

If your site really wants to differentiate itself from the mass of HW sites, do what you have done in server and storage benches: create your own. Use only standards-compliant ways of exercising the API, and run with different CPU setups to reveal the effects of the platform.

If not, you're blending into the gray mass of uninteresting reviewers of batch-run game benchies... Reply

The big problem (which you touch on) is that drivers are tuned for specific games/applications. As a result, since not everyone plays the same games, it is sometimes a good test to use BOTH titles the drivers are tuned to do well with and those that are not.

I am talking here about games like The Witcher, which did not see enough advertising here in the USA, but would be good to test performance on. Do you think AMD or NVIDIA tuned their drivers for that game? Other games that require a lot of GPU power but do not have specific driver optimizations from AMD or NVIDIA are good to test with as well.

I am in the minority, it seems, in that I am not a fan of first-person shooters or World War II combat games. As a result, when I find a game, as often as not it hasn't gotten much attention when it comes to GPU performance with the games _I_ play.

So synthetic benchmarks have their place in a comparison, but looking for lesser-known titles to use in testing would also say a lot.
Reply

I want to start off by saying I've been an Anandtech reader for most of a decade. I disagree with your stand on 3DMark. It really is a lot of fun to run, and I would like to see synthetic benchmarks added. Just because games are different from 3DMark doesn't mean end users don't want to know the potential bragging rights a video card offers compared to the others. I love to play games, but I also love to overclock to the max and run synthetics. When buying a video card, I generally go for the one that scores higher, because I enjoy running benchmarks almost as much as playing some games; I can spend hours tweaking just to lift another 1000 points. I go to overclockersclub to get my information on the Futuremark bench results, and I'd rather not have to stray from Anand. Anandtech is really the only place that doesn't run them. Your stand on that really is narrow-minded. I may not play 3DMark, but I use it a lot for fun. I hope AMD and NVIDIA continue to optimize for it, just as they do for games.

on another note:
I would also like to see a BFBC2 GPU showdown: a list of what will play it and what won't.
Thanks, Anand. Love you guys a lot. Reply

The aversion to synthetic benchmarking seems almost religious. I'm pretty sure most people already get that synthetic benchmarks don't directly correlate to in-game performance. Most sites that include these tests say as much when they present the results.

If the objective is to make synthetic benchmarks disappear from the market, then Anandtech is being naive, or has an inflated view of its punching power in the industry.

Synthetics exist because they offer detailed performance analysis that isn't possible within a regular application not designed specifically to provide performance metrics.

Yes, it's a flawed system, which is why no one relies only on these values, but to banish them entirely from a test suite is a mistake in my opinion. And yes, Anandtech is still a great site - keep up the good work.
Reply

The thing about synthetic benchmarks is that other sites use them, and you can compare cards online on the developer's website, so they really aren't needed here. Older taxing games, however, aren't usually used, which is why it would be great to see Anandtech use them! Reply

I know this is about GPU testing. But I would like to add: when testing CPUs with games like Supreme Commander, check the simulation time (how long it takes to play through a replay), not just the FPS.

Take Empire: Total War; it's important to see just how fast the CPU can simulate it. Simulation time in RTSs can be very important. Reply

Of course this is about GPU testing. However, I have never seen it written in stone that GPUs must be tested with the latest and greatest CPUs, highly overclocked.

No, that's a factor "we" made up ourselves. Wrongly so, if you ask me. It's perfectly fair to test GPUs on more moderate, popular types of CPUs; nothing wrong with that. In fact, most people would appreciate those results more than "another i7 Extreme @ 4GHz results page".

Give us results with duals, triples, and quad cores, with cheaper CPUs and with the best a man can get. That's what we are waiting for! That will be the day we can see immediately how much the CPU affects the performance of the GPU, without having to visit multiple sites and reviews.

Anandtech, you have an honorable task lying ahead of you; grab it!
Reply

Just a few notes.

I think there should be a base set of older games, say 2 years old, at least 2 of them, that are known to work well on most hardware. This way, when you test mid-range and lower-end cards, they will at least get more than 3 to 4 fps. The other thing I've noticed is that when testing mid- and lower-end cards, testers always seem to think a $50 card should run at the same resolutions as a $300 to $700 card, and test them at top-of-the-pile resolutions just as they would the top-end cards. Let's face it, a $50 card is not meant to run at 2048x1536 with max details, and it should not be tested at that extreme. So maybe have two sets of benchmarks, one for the high-end cards and one for the budget cards; that way everyone gets to see how their potential card performs. Integrated graphics would fall into the lower-end test suite. Let's face it, the mid and lower-end cards outnumber the high-end cards probably 500 to 1 out in the wilds of people's systems.

The other thing is to test on different CPUs; not everyone has a hex-core i7 @ 4GHz. I think using different CPUs from both Intel and AMD is a good thing; it will show potential buyers how a graphics card performs on a host of different CPUs, not just the ultra-high-end Intel platform.

The last point is to have a set base of games for each of the two benchmark setups, the low end and the high end, but always keep a selection of current games you can rotate in to shake things up a bit.

I won't get into "The Way It's Meant to Be Played" games too much, but having too many games like that, heavily catered towards one company, skews the whole test. I am not sure what AMD calls their vendor-supported titles, but the same goes for them as well. Maybe there could be a whole extra test suite just for those types of games, lol.

My point is, if test sites stopped testing the heavily NVIDIA- and AMD-sponsored games, maybe both companies would focus more on OpenCL and DX11, where everyone is on an even playing field. Reply

I'd love to see a multiscreen Eyefinity (or whatever NVIDIA calls theirs) benchmark, although perhaps that is best left for individual titles. 5040x1050 would be good, as a lot of people will be looking at three cheap 22" screens. Reply

I don't know that more games will help, but showing how GPUs scale on different CPUs is, I think, important. No point showing that a card will run a game at 50 fps if it only does so with a 6GHz i7 behind it. Reply

I disagree. If I want to know how fast a game will run on a specific CPU, I'll read a CPU review article.

In a graphics-card review, I want to know what the graphics-card is capable of when it is not bottlenecked, even if that means pairing it up with a far faster CPU than I currently have.

If there's time for CPU scaling on a given card, by all means add it to the review, but a graphics-card review should always focus first and foremost on how the card performs with the fastest available system. Reply

There is a place for seeing how well a video card/GPU runs with limited CPU power, though. At the least, a Phenom II X4 955 is considered a "fast enough" CPU to show the performance of a video card in most situations. This is also why I posted that it would be good to see GPU performance on different chipsets from time to time. If every test is done on an Intel i7 with an Intel chipset, how do we know a given GPU doesn't have issues with the Intel chipset?
Reply

Is there a way to generate a 2 dimensional contour plot of GPU performance vs. CPU performance?

Let's say you have GPUs ranked from last to first on the x-axis, and CPUs ranked last to first on the y-axis. Color the resulting performance from blue (worst) to red (best). The result, assuming perfect scaling, should be blue in the bottom-left corner and red in the top-right corner. (example: http://yfrog.com/ja33420199j )

The difference is subtle, but it's there - in the 2nd image, notice that improvements to CPU speed don't affect performance all that much, while improvements to GPU speed matter a lot.

There's probably a much easier way to explain this, but no site that I know of has ever attempted a multidimensional visualization like this before. I don't know just how much sample data you would need to make this possible, but it would be an exceedingly interesting feature.
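The grid behind such a contour plot is easy to sketch. The snippet below is a toy illustration of the idea, assuming the simplest possible "bottleneck" model of performance: frame rate is capped by whichever of the CPU or GPU is the limiting component. All names and fps caps are invented for illustration, not measured benchmark data.

```python
# Maximum fps each hypothetical CPU can feed to the renderer:
cpus = {"slow": 40, "mid": 70, "fast": 120}
# Maximum fps each hypothetical GPU can draw:
gpus = {"slow": 30, "mid": 80, "fast": 150}

def fps(cpu_cap, gpu_cap):
    """Frame rate is capped by whichever component is the bottleneck."""
    return min(cpu_cap, gpu_cap)

# Rows = CPUs (slow -> fast), columns = GPUs (slow -> fast).
grid = [[fps(c, g) for g in gpus.values()] for c in cpus.values()]

for name, row in zip(cpus, grid):
    print(f"{name:>4} CPU: {row}")
# slow CPU: [30, 40, 40]
#  mid CPU: [30, 70, 70]
# fast CPU: [30, 80, 120]
```

In a real chart you would color these cells from blue (worst) to red (best); whether the values change faster across a row or down a column then shows at a glance whether a given game is GPU-bound or CPU-bound.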

I agree that there's no need for synthetic benchmarks. Regarding games, I think that a suite (and a set testing rig) is good for a comparison over several reviews, but I think that adding some other games here and there won't hurt. In particular, I'd be interested in games that particularly favour a specific graphics card architecture. I'd like Anandtech to test a lot of games on a limited number of cards, and if there's a game that's interesting (GPU limited but performs well on specific cards), test it on more cards.

In general, though, just have a test suite that's made of best selling games which are GPU bound. Reply

The more game benchmarks the better, though I do understand time is an issue. Maybe pick 4-5 main games (with 2-3 resolutions), then have about 5-6 EXTRA games where you only bench at a single resolution (such as 1920x1200).

This way we not only know how the cards handle the new games at different resolutions, but we also get a sense (with one resolution) of how they perform in other/older games compared to other cards.

And I highly recommend you keep working on the "Bench (beta)". I think it has potential for interesting features, such as the ability to customize review pages based on the user reading them. For example, right now I have a 4850 512MB. If this site were dynamic enough, it would be possible to have that video card included as part of the benchmarks in the review. This way, on every GPU review release, I'd instantly be able to compare the new card with my cards (or cards on my wishlist) while reading the review. Reply

Also in agreement with game benchmarks only. I would also love to add that Fallout 3 or Oblivion needs to be in there - or any game that uses these types of engines, which are heavy on shaders and textures. Reply

Maybe you could "group" games somehow according to what strain they put on certain components, such as Oblivion/Fallout/Dragon Age - find a way to make a "group" of tests and then label it "shader- and texture-heavy RPGs" or something easier. That way, when we want to play a type of game, we have a list of games that gives us some idea of how a card should perform. Again, as I listed before, all we really need to know is: will this card suit my needs or won't it? If a card does well in FPS titles but not in other types of games, and all I play is FPS, then that's the card for me. Recommendations for cards that might perform well in future games would be helpful as well - according to where you see current developers going with PC gaming. Reply

I actually think Fallout/Oblivion aren't all that heavy on the GPU side per se (they depend a lot on CPU also). I would recommend splitting "game" benchmarks into CPU focused (RTSes like Supreme Commander, and I would include Fallout/Oblivion in here), and GPU focused (Crysis, most FPSes, etc) benchmarks. Or to make this easier, you could implement general categories: FPS (Crysis, L4D2, MW2), RTS (Supreme Commander, Starcraft 2 (when it comes out), Battleforge), RPG (Mass Effect, Dragon Age), Simulation (be creative here), etc. Reply

Fallout 3 and Oblivion are not GPU limited at this point, so there's really no point in including them. They are far and away CPU limited, especially with heavy-duty mods thrown in like Deadly Reflexes.

On my system (E7200 @ 3.2GHz, G33 chipset, HD 5850), Oblivion runs at the same fps no matter what graphics settings I use. But as soon as there are more than four actors on screen, the game absolutely CRAWLS as the CPU struggles with calculating physics interactions on those actors. Ironically enough, Oblivion running with mods would greatly benefit from Intel's new 6-core CPU. Reply

Would be good to see benchmarks of Fallout 3/Oblivion using Gulftown. Anyone know if there are any? My Opteron 185 (think FX-60; my Opty goes in Socket 939 as well), OC'd about 250MHz, seems to handle Oblivion fairly well with my 8800GTS 640MB card and 2GB of RAM. But the settings aren't all the way at the top on everything, and I can't run high AA and AF. I'm wondering what a GTS 250 would do for the game, if anything. Reply

+1 for the Bench-like idea; being able to include your current card in the graphs would be awesome.

Also, I think there should be room for older titles as well, because if you only ever focus on the latest and greatest games, the older games tend to get forgotten by the driver teams. For example, I play a racing sim called GTR2, released in September 2006, which is still one of the best racing sims ever made. With modern cards this game should run extremely well, but on NVIDIA cards the game experiences random lockups, and on ATI cards the framerates aren't great when the going gets tough. GTR Evolution is a similar game with a similar engine, released in 2008, which has the same problems. I'm sure there are many other older games out there in similar situations. Reply

I like the idea of doing older titles, however it's not practical for our general benchmark suite. However if there's sufficient interest, we could periodically (e.g. once or twice a year) do roundups of older games. Reply

In a way, older games are just as important, if not more so, because if they're old and have good replay value, then people are still playing them. The majority of gamers are NOT on the cutting edge of PC gaming. A new game too often requires new and expensive hardware (unless it's made by Valve!), and not everyone has a fast internet connection either. Nor does everyone want to waste $50-60 on a new game; leave that for the people buying consoles. A lot of us PC gamers (check the general ages of PC gamers) are an older group of guys (and gals) who are around thirty years old and spend our money a bit more wisely. We have families, jobs, and not a lot of time to game, so we find a good game and stick with it a while - on the cheap.

Sidenote: I don't use torrents at all, FYI; it's stealing. Although I won't buy a game I haven't seen good footage of or played a demo of, so I agree on not blowing $50 on a game I can't return - although part of the reason they started doing this was BECAUSE of pirating.

Anyway, my whole point is that tons of us are only just picking up Fallout 3 or Oblivion now that prices are cheap, bugs are fixed, and availability is plentiful. Not to mention we can build a decent little system for under $600 that will play them. Reply

While the Bench idea is a good one, it is far from practical. There are so many graphics cards on the market that Anandtech simply wouldn't have the time or resources to cover them all. I can see it already: while they covered your 4650, somebody else's 2900 XT was left out, so they start complaining about it.

In an ideal world, Anandtech would create their own hierarchy chart so that we could get an even better idea of how our older cards compare to the latest reviews and benchmarks on Anandtech. The way Anandtech tests cards may not be the same as Tom's Hardware, so Tom's hierarchy chart might not be the best answer to your question. Reply

A quick search shows that despite the talk about older games being included, nobody has yet mentioned one particular older game that can still bring a new system to its knees: Neverwinter Nights 2.

I played it a few months ago on my new PC (Core i7@3.8GHz + 4870X2 + 4870 all overclocked) and STILL couldn't get playable framerates in the outdoor scenes at 2560x1600. In fact, performance in the outdoor scenes was so bad I ended up playing the game in 1920x1200 with some of the shadow, lighting and detail settings turned down. Even then I hit some truly horrendous locations that brought the FPS down below 5.

I'd be seriously impressed with any system that allows you to walk around the opening town fair scene with everything maxed out at 2560x1600. Reply