A few months ago I read a comparison between the Nvidia GTX 295 and the ATI HD 4870. Both cards had the same price, so the article used this as the starting point for the comparison.

What was more interesting was that the Nvidia card had almost twice as much memory as the ATI. The results of most of the tests looked the same. At lower resolutions both cards hit the same FPS limit, so it was clear that the CPU was limiting the performance of both cards.

At higher resolutions the Nvidia performed better. When I read the construction details of both cards it was clear that the Nvidia really was the stronger one and the results were genuine, but when I checked the prices again later, the original price parity had vanished as the prices of both cards changed over time... I also checked another parameter, power consumption, and the dual-GPU Nvidia was the more expensive piece to run.

So let's talk a bit more about the parameters which can make the difference.

Target resolution
Each 3D game has its native resolution. At this resolution the engine computes the wireframe and all the other processes which generate the raw image. Anything below or above this resolution requires extra CPU work to upscale or downscale the wireframe from its native value.

Since no game provides this information, it is hard to find out. But there are some hints. If the game was made for the Xbox, just check which resolution the Xbox uses by default (I believe it is 720p). Otherwise the only thing you can do is test the game at all possible resolutions and find the value where it performs best (the highest resolution that does not affect FPS, though I am not completely sure about this).
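The sweep described above can be sketched roughly like this. The benchmark hook is hypothetical (in practice you would use a timedemo or an FPS overlay tool and note the numbers by hand); the idea is just to pick the highest resolution whose FPS has not yet dropped off the CPU-limited plateau:

```python
# Sketch: find the highest resolution whose FPS still matches the
# low-resolution (CPU-limited) plateau. run_benchmark(w, h) is a
# hypothetical hook that returns average FPS at that resolution.

def find_native_resolution(resolutions, run_benchmark, tolerance=2.0):
    results = [(w, h, run_benchmark(w, h)) for w, h in resolutions]
    plateau_fps = results[0][2]  # FPS at the lowest tested resolution
    best = results[0]
    for w, h, fps in results:    # resolutions assumed sorted ascending
        if fps >= plateau_fps - tolerance:
            best = (w, h, fps)   # still on the plateau, keep going up
    return best

# Example with made-up numbers: FPS stays flat up to 1280x720, then drops.
fake = {(800, 600): 60, (1024, 768): 60, (1280, 720): 59, (1600, 1200): 45}
print(find_native_resolution(sorted(fake), lambda w, h: fake[(w, h)]))
```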

If you don't want to determine this number, or you want to play the game at a higher resolution, just check the highest resolution your display can show.

Then check some graphics card reviews and look at the FPS reached at your target resolution. If they were already lower, check which variants exist of the cards you are planning to buy. In this case the memory is what matters, not the GPU. Always buy the card with more or faster memory.

Resolution and antialiasing are mostly affected by the speed and amount of memory. Framebuffers can be really big (even hundreds of megabytes) and some AA methods multiply this size by the AA sample count. So if performance drops at higher resolutions and AA levels, it is mostly caused by the small amount of memory available for framebuffers and AA calculations.
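A back-of-envelope calculation shows how fast this grows. The 8 bytes per sample (4 bytes of color plus 4 bytes of depth/stencil) is my assumption for a typical render target; double buffering and extra render targets only add to it:

```python
# Rough framebuffer size estimate: it scales with resolution, and
# multisample AA multiplies it again by the sample count.
# Assumption: 4 bytes color + 4 bytes depth/stencil per sample.

def framebuffer_mb(width, height, aa_samples=1, bytes_per_sample=8):
    return width * height * aa_samples * bytes_per_sample / (1024 ** 2)

print(framebuffer_mb(1280, 720))       # about 7 MB without AA
print(framebuffer_mb(2560, 1600, 8))   # 250 MB with 8x AA, before double buffering
```

On a 512 MB card a single multisampled buffer like the last one already eats half the memory, which is exactly the situation where the card with more RAM pulls ahead.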

Target FPS
It still holds that more is better. But beware: FPS at low resolutions is almost always limited by the CPU. You can see it when five different cards at 800x600 deliver the same FPS no matter the settings. This can give you another hint... The first card whose FPS drops at higher resolutions and AA levels is suffering more from memory bandwidth and capacity than from GPU performance. Therefore look for tests and reviews which do not include AA but do include shader effects (or different versions of DX) to see how the card performs in a more GPU-intensive environment.

FPS at higher resolutions and high AA levels are then affected more by the amount and speed of memory than by the GPU. And always check whether your display can even show such an FPS, given its refresh rate.

If you can distinguish the results where the CPU was the limit from the results where the memory was the limit, you will be able to spot the values where the GPU was showing its own true performance. Always try to decide between two GPUs, not two graphics cards.

For example:
If you are deciding between the ATI HD 4870 and HD 5870, check whether they have similar parameters. This includes the GPU specifications (frequency, number of stream processors, supported versions of DX and OpenGL) and the memory specifications (amount, bandwidth). It is best when both cards have the same amount of memory and both have a single- or dual-GPU construction.

I was looking for comparisons and tests between those two cards. It was hard to find a review where the author included cards with the same amount of memory, but eventually I was successful.

From a technical point of view it is certain that the HD 5870 runs twice as many stream processors as the HD 4870 does. Some people automatically claimed doubled performance from these cards. The real in-game performance showed about 30 FPS more for the HD 5870 in almost all resolutions. The author proposed that this difference comes from the faster GPU, but he missed the point that the memory bandwidth of the newer card was also significantly higher. The tested HD 4870 had 1 GB of RAM, and so did the HD 5870.

This can prove that the 5870 performs better, but the interesting part is that a 4870 with 2 GB was not tested... Therefore I looked on ORB for some results, and I was quite impressed that the difference between the 5870 1 GB and 4870 2 GB cards on the same machine was quite low even in the newest tests... The price difference is a different story (100 euro less for the 4870 2 GB).

What's important:
More graphics memory automatically means less intensive access to it, which lowers the load it generates on the GPU. This increases framerates at higher resolutions and higher AA levels.

There are two ways to gain better performance: more and faster graphics memory, or a stronger GPU. Always try to identify which limit of the tested card caused lower framerates than the other cards. The Nvidia GTX 295 with 1.7 GB of graphics memory will always beat the HD 4870 with 1 GB in benchmarks, but try to find a benchmark with a 2 GB HD 4870 to see how the card performs in a fair environment. Those results will say much more about the real GPU performance.

2 GB of graphics memory is also quite an interesting number in 32-bit computing. 32-bit Windows allows one application to use 2 GB of system memory. Therefore any application will have a free hand over the graphics memory, with only a small chance of memory problems and performance slowdowns from intense communication when loading textures.

Conclusion:
With a slower GPU but larger graphics memory you can get better performance than with a faster GPU and less memory, especially when you are trying to hit the highest resolutions, regardless of the fact that the CPU has to upscale from the game's native resolution.

As always, there is a golden setting at which an application performs best. Running Crysis at 2560x1600 (the maximum resolution available on the HD 5870) should be impressive, but note that you will only get 60 Hz on your display. This setting will surely load your CPU and GPU heavily and cap the maximum displayable FPS at 60.

Check the comparison between the HD 5870, the HD 4890 1 GB and the HD 4870 X2. The 5870 has the best GPU performance, the 4890 has the least memory, and the 4870 X2 has the most memory, the most bandwidth, and GPU parameters more or less equal to the 5870.

From most charts it is clear that the HD 5870 outperforms the HD 4890 1 GB. It is also clear that the faster memory bandwidth of the HD 5870 improves FPS at higher resolutions, but the values change from game to game because of the different memory usage of certain games and certain AA modes.

Four cards with 1 GB of memory hold the same value of 59 FPS. This shows that many cards are limited by the CPU or by poor application/driver performance. Only at the highest resolution does the FPS of the HD 4890 drop. There you can see, in comparison with a downclocked HD 5870, that something in its performance is worse. This also shows the real percentage difference caused by the different GPU construction. Twice the processing cores, with the same amount of memory and at the same frequencies, give 14 FPS (32%) more on low details and 19 FPS (42%) more on high details, in a test where we know that the CPU/application is the limiting parameter. Here you can really see the difference in GPU performance.
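For the sake of the arithmetic, the percentages above follow from simple relative gain; the baseline FPS values of 44 and 45 for the slower card are my assumption, inferred back from the quoted deltas and percentages rather than taken from the review:

```python
# Relative gain of the faster GPU over the slower one.
# Baselines (44 and 45 FPS) are assumed, reconstructed from the
# article's "14 FPS (32%)" and "19 FPS (42%)" figures.

def pct_gain(baseline_fps, delta_fps):
    return 100 * delta_fps / baseline_fps

print(round(pct_gain(44, 14)))  # ~32% more on low details
print(round(pct_gain(45, 19)))  # ~42% more on high details
```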

Now back to the price...

The cheapest HD 5870, by MSI, costs 356 euro. The MSI 4890 with 1 GB costs 164 euro, so 328 euro in CrossFire... It does not seem that CrossFire will be worth it, and the price difference is higher than the roughly 40 percent performance gain.
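Put as euros per unit of performance, the gap is easy to see. The prices are from above; the relative performance figures (1.0 for the HD 4890, ~1.4 for the HD 5870) are an assumption based on the ~40% gain measured earlier, not benchmark data:

```python
# Price-per-performance sketch. relative_perf values are assumptions
# derived from the ~40% GPU gain discussed above.

cards = {
    "HD 4890 1GB": {"price_eur": 164, "relative_perf": 1.0},
    "HD 5870 1GB": {"price_eur": 356, "relative_perf": 1.4},
}

for name, c in cards.items():
    eur_per_perf = round(c["price_eur"] / c["relative_perf"])
    print(name, eur_per_perf, "euro per performance unit")
```

Under these assumptions the 4890 delivers each unit of performance for roughly 164 euro versus roughly 254 euro for the 5870.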

Hmm, I didn't know that CrossFire manages graphics memory in this way. Quite a waste of potential.

I also forgot one thing regarding the article about CPU vs GPU.

The newest games and 3DMarks cannot be considered applications for finding the lower limit of CPU/GPU coexistence. They can be considered upper limits, even though it is certain that stronger graphics cards are available.

Cards like the HD 5870 or the GTX 295 hit almost the same values (sometimes higher, sometimes lower), but the percentage difference from slower cards is measurable only at quite high resolutions with full game details. Since the native resolution of games based on the Oblivion engine is 1280x720, the target FPS should be 85-100 depending on the refresh rate of the display.

Any graphics card able to hit such performance is enough for this game; you cannot display any more of the game's performance. But notice in the tables that the available benchmarks were made at extremely high resolutions, four times bigger than the game requires for optimal performance. Since CRT monitors do not have a native resolution you can display anything on them, but on an LCD you might experience some trouble with its native resolution... that's something I like about old CRTs.

Once you find the optimal game resolution (almost certainly around 1024x768, 720p or 1080p, but definitely less than 1600p), anything above 30 FPS is hard to distinguish by eye, and anything higher than the refresh rate simply cannot be displayed. Only those values will tell you whether you need something better. Once you can run your game at its native resolution as fast as your display allows, you can be sure there is plenty of hardware potential which is not being used now, but which you can still count on for a new game...

The one thing you cannot get from the benchmarks available on the internet right now is FPS at lower resolutions, or especially benchmarks at the optimal resolution of particular games. For example, Oblivion even on a Pentium III will perform best at 720p, and lower resolutions also lower the FPS. When you get the same FPS at lower resolutions, it is almost certain that the optimal resolution is a bit lower than the tested ones (even 1280x1024 involves a bit of upscaling). While the CPU seems to limit FPS these days, it is almost certain that games will not run faster, and higher resolutions are just an illusion of better hardware usage and a way to get better graphs for benchmarks.

But while benchmarks may need higher resolutions, games certainly don't. As a result, extra RAM on the graphics card will help gain better performance in benchmarks while lowering GPU load, but it will not mask the fact that the real performance increase will be quite small, or eaten up by bad software optimization.