"As of today, new PowerDVD 9 Ultra customers will receive build 2320, which includes support for HDMI bitstreaming of undecoded (full quality) audio with ATI Radeon 5000 series graphics cards, and the Auzentech X-Fi Hometheater HD sound card. An update patch for current owners of PowerDVD 9 Ultra is scheduled to be available next Friday (11/27).

Though I'm happy that we'll finally get HD audio using PowerDVD, why limit it to new customers only? It seems to me that if you can offer it to new customers, you should be able to offer it to everyone at the same time. Reply

I have Win 7, a Diamond 5870, and PowerDVD 9 Ultra version 2201, and still I don't get the 3rd choice for HD audio bitstreaming. Has anyone been able to get bitstreaming to work with PowerDVD 9? Thanks Reply

"Hi everyone,
Yes... we read these forums regularly. Feedback is forwarded to the appropriate people. These forums aren't the best way for us to provide customer support, so if you have an issue, please open a support ticket through our website.

We can't always comment on new developments... or at least we like to wait until we have a definite answer. I know everyone noticed the reviews for the ATI Radeon 5000 series graphics cards, where certain reviewers were using a pre-release version of PowerDVD 9 to demonstrate HDMI bitstreaming of Blu-ray audio. Support for this feature will be in the next PowerDVD update (patch), which will be available later this month. I don't have a firm date on this, but we're trying to expedite it. Please keep in mind that each of our regular updates includes fixes for new BD titles and new (or forthcoming) hardware / drivers across the PC ecosystem, and so the feature you are anxiously waiting for isn't the only thing that we need to include in the update. Of course, each update has to go through a full quality assurance testing procedure, to make sure that none of the changes introduced any new issues.

So... sorry for going quiet on you... but I and others read every post in this thread and other relevant threads, following up as needed (in some cases, directly through private messages on these forums).

So Eyefinity may use 100 monitors, but if we are still gaming on the flat plane then it makes no difference to me.
Come on ATI, go with the real 3D games already... been waiting since the Radeon 64 SE days for you to get on with it.... :-(
GTX 295 for this boy, as it is the only way to real 3D on a 60" DLP.

Nice that they have a fast product at good prices to keep the competition going. If either company goes down we all lose so support them both! :-)

Wow, what a joke this review is. By that I mean the reviewer's stance on the 5870 sounds like he is an Nvidia fan. Just because it's what, 2-3 fps off the GTX 295, doesn't mean it can't catch that GPU as the driver updates come out. And if I remember, wasn't the GTX 295 the same when it came out? It was good, but it wasn't where we all thought it should have been; then BAM, a few months go by and it finds the performance it was missing.

I don't know if this was a fail on AnandTech's part or in the testing practices, but I question them, as I've read many other review sites and they had a clear view where the 5870 and GTX 295 were neck and neck. I've seen them first hand, so I'll state it here: head to head at 1920x1200, while at 2560x1600 the dual-GPU cards do take the top slot. That is expected, but the margin isn't as big as I see it here.

And clearly he missed the whole point: YES, the 5870 does compete with the GTX 295. I just believe your testing practices come into question here, because I've seen many sites that didn't form the opinion you have here. It seems completely dismissive, like AMD has failed, and I just don't see that. In my opinion, I'll just take this review with a grain of salt, as it's completely meaningless.
Reply

@Silicon Doc
Nvidia fan-boy, troll, loser... take your GeForce cards and go home... we can now all see how terrible ATi is, thanks to you... so I really don't understand why people are beating down their doors for the 5800 series, just like people did for the 4800 and 3800 cards. I guess Nvidia fan-boy trolls like you have only one thing left to do, and that's complain and cry like the itty-bitty babies that some of you are about the competition that's beating you like a drum... so you just wait for your 300 series cards to be released (can't wait to see how many of those are available) so you can pay the overpriced premiums that Nvidia will be charging AGAIN!... hahaha... just like all that re-badging BS they pulled with the 9800 and 200 cards... what a joke!... Oh my, I must say you have me in a mood, and the ironic thing is I like Nvidia as much as ATi; I currently own and use both. I just can't stand fools like you who spout nothing but mindless crap while waving your team flag (my card is better than yours... WhaaWhaaWhaa)... just take yourself along with your worthless opinions and slide back under that slimy rock you came from.

I have the GPU Computing SDK as well, and I ran the Ocean test on my 8800GTS 320. I got 40 fps with the card at stock, with 4xAA and 16xAF on. Fullscreen or windowed didn't matter.
How can your score be only 47 fps on the GTX285? And why does the screenshot say 157 fps on a GTX280?
157 fps is more along the lines of what I'd expect than 47 fps, given the performance of my 8800GTS. Reply

I've just checked the source code and experimented a bit with changing some constants.
The CS part always uses a hardcoded dimension of 512, so it's not related to the screen size.
So the CS load is constant: the larger you make the window, the less you measure GPGPU performance, since the test becomes graphics-limited.
Technically you should make the window as small as possible to get a decent GPGPU benchmark, not as large as possible. Reply
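The effect described above can be sketched with a toy model: the compute pass costs a fixed amount per frame (the grid is hardcoded at 512), while the graphics pass grows with pixel count, so a larger window dilutes the GPGPU share of the frame time. All the timings below are made-up illustrative numbers, not measurements from the actual demo:

```python
# Toy model of the Ocean demo's frame time: the compute-shader pass
# always simulates a fixed 512x512 grid (constant cost), while the
# graphics pass scales with the number of pixels rendered.
# Both constants are hypothetical, chosen only for illustration.

CS_TIME_MS = 2.0             # fixed cost of the 512x512 simulation pass
GFX_TIME_PER_MPIX_MS = 8.0   # hypothetical render cost per megapixel

def frame_time_ms(width, height):
    megapixels = width * height / 1e6
    return CS_TIME_MS + GFX_TIME_PER_MPIX_MS * megapixels

def fps(width, height):
    return 1000.0 / frame_time_ms(width, height)

for w, h in [(320, 240), (1280, 1024), (2560, 1600)]:
    t = frame_time_ms(w, h)
    share = CS_TIME_MS / t * 100
    print(f"{w}x{h}: {fps(w, h):6.1f} fps, compute is {share:4.1f}% of frame time")
```

In this model the compute pass dominates the frame time only at tiny window sizes, which is exactly why a small window gives the more meaningful GPGPU number.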

AMD is only introducing things that merely make your Fraps fps go to 1000, and that's it... no new good or interesting features such as what Nvidia did with PhysX/stereoscopic 3D, or anything similar that would convince me that's the way to the future...

Why should I need to buy a new DX11 GPU that can only do 1000 fps? I'd still love my 260+ and 60 fps in Batman: Arkham Asylum or other games that support PhysX or similar... AMD just brings us back to the stone age race... the who-has-the-higher-fps race... Reply

Actually, the delay of Nvidia's DX11 card will make introducing new things harder. DX11 and OpenCL mean enough that you can forget Nvidia's PhysX. At least with an open platform, developers could finally merge GPU and CPU code and make more useful things than improved water splashing, unrealistic glass shattering, and curtains that just run on top of the code and act as some kind of post-processing effects bolted on to maintain compatibility (miles away from the Nvidia demos).
And the DX11 compute shader can do these things too. Reply

I think you guys at AnandTech could do an article explaining why the new Radeons don't double performance, even with doubled specs.
It happened with the HD 4870 too: it more than doubled everything (except bandwidth, which was 80% higher) and came nowhere close to double the performance. Reply

I think you may have been much happier with a 512-bit interface, which would result in nearly 2.5x the bandwidth of the 4890; however, it remains to be seen whether that would be a waste or not. Having said that, it could allow for slower GDDR5, thus reducing costs, but wouldn't it be far more problematic to incorporate the wider bus anyway?

If ATI throws out such a card with a single GPU on it, a direct comparison with the 5870 (and nVidia's top cards at the time) will be inevitable. The extra bandwidth may be rather under-utilised for anything less than Eyefinity gaming or Crysis at max details ;)

Now all we need is for AMD to come back at Intel with a domestic release of its Thuban die (or to hurry up with Bulldozer, sheesh) and it'll be a very, very exciting time for people to upgrade. Reply

I want to know how the pinout on the 5870 GPU compares to the 4870/90.
Have they doubled the data pins, or is the data jamming in and jamming out, even at 4800MHz?
Maybe that's why 512-bit would help.
Perhaps faster data-rate RAM also needs a wider data path: more pins, more paths in and out of the GPU.
I will check the overclocking sites that have already posted on this matter. Reply

I would assume that the pin count on 5870 isn't radically different from 4870. Granted, we know what assuming can get you, but with the same interface width there's not much reason for it to get substantially more pins. A few changes for power leads to deal with having more transistors, and other minor tweaks are likely, but my bet would be it's within 10% of the pin count of 4870. Reply

For all those people clamoring about why ATI didn't go with a 512-bit memory controller, I'm going to chime in here with some ATI 512-bit experience. If you're a sharp one reading this, you have already guessed that means I'm going to talk about the R600. Now you can hate the card all you want, but I quite enjoyed it. First of all, the R600 was the last chip with a ring bus. It was a true 512-bit, and a large memory controller. I'm not certain of the amount, but I believe it owned about a quarter of the real estate on the die. That's a lot. It was also part of the reason the chip ran hot, and why UVD was scrapped from that chip to save room.

Now, to keep that 512-bit ring bus fed, you needed to push large amounts of data to it. The more you increased system bandwidth, the faster the card would be at any task. I've run hundreds of benchmarks over the years, and I'm pretty sure Jarred and Anand can attest to this. The same goes for overclocking the card: raising the core didn't do much, but cranking up the VRAM to feed that hungry ring bus sure did. Prices, anyone? I believe $450 and up, depending on where you were located. It was one heck of a pricey chip for ATI to make. Enter the die-shrunk 3000 series with the 256-bit memory controller and voila: a cheaper chip to make. It never came close to the theoretical performance of the 2900XT, but the 3870 was about 90% of the performance for a lot cheaper. Yes, I know the cores were tweaked and so on in the 3000 series, but they are very similar.

If ATI ever went to a 512-bit bus, which means more PCB layers, higher manufacturing cost, and a larger die, I'd think they'd do it on something like Juniper, or wait till 32nm. It's not feasible right now. They could technically go the MCM route with Juniper and get a mashed-up version of a 512-bit bus, but I don't think the chips have been designed with that in mind.

Anyway, most computers out there would be starved trying to feed something like the 5870 and higher cards with a 512-bit bus. I just replaced my R600 with an RV740 (hah, went from 80nm to 40nm), and now I don't need to OC the heck out of my bus to keep the card fed. I'm running an old FX-60 setup thanks to a glowing review on here back in early 2006. Am I the norm? No, I'm waiting to upgrade. Is the Core i7 9xx the norm? No. You have to build a card for a certain set of people. I'm building my pal a new computer and he's happy with the 5850. The 5870 is overkill for him. The 5850 is 80% of the 5870 but a hundred bucks cheaper. Now, I'm sure ATI looked at the 512-bit bus in much the same way: "Wow, that 512-bit bus sure flies, look at those numbers! Oh, it's going to cost us this much die space and more in manufacturing... Well, those 256-bit bus numbers are still pretty impressive and within 80% of the gaming benchmark scores, so we'll go that way."

Or something along those lines... I'm sure that's why nVIDIA's GTX300 is delayed. It's a massive chip, 512-bit bus and so on. Great, they'll take the performance crown again. Will they take my money? If they have something in the $200-$300 range, they have a fighting chance, just like ATI does, or soon-to-be Intel. The best price for performance will win me over there. I don't care what the bus size is, or how the card could have been better, just as long as I'm happy with the performance for my money. In which case, I'll be here looking forward to a GPU roundup of the best bang for the buck in that price range. Of course it will have to have DX11, or else there's no point in me upgrading again. Reply

The GT200 has a 512-bit bus.
All the whining and complaining about difficulty means NOTHING.
ATI goes the cheapskate, sloppy, lame route, cooks their cores, has 90C heat, few extra features, and a red raging fanbase filled with repeated tantric lies.
I even posted their own forum here with their 4850 90C+ whines, after some rooster told me the worst fan in the world on his 4850 kept it in the cool 60s, like the several Nvidia cards, of course.
The 512-bit HD2900 XTX was and is a great card, and even the 256-bit version still holds its own. It was well over 500 bucks, was limited production, sold out quickly, and there was a lesser 512-bit HD2900 version that could be flashed to a full XTX with a BIOS upgrade; it disappeared after it went well over $500.
That HD2900XTX has 115GB/s of bandwidth.
It was REAL competition for the 8800GTX.
--
Of course ATI cheaped out on producing any decent quantity, has been losing money, and overcharged for it (and got it), but apparently, like Ruiz, the "leadership" qualifies as "MORONS!"
---
Now we'll hear endless crying about expense, about 512-bit, and endless crying about core size (Nvidia's giant monster); then we'll hear how ATI just kicks butt because they get more dies to a wafer, and they can make a profit, and they can then wipe out Nvidia and make them lose money....
BUT JUST THE OPPOSITE HAS BEEN GOING ON FOR SOME NUMBER OF YEARS IN A ROW.
If ATI is so pathetic it can't handle making and selling 512-bit, well then, they're PATHETIC.
And, yes, it seems they are PATHETIC.
Someone ought to let ATI know there's "competition", and the "competition" pumps out 512-bit buses all the time.
I guess when ATI "finally catches up to the modern world" they can put out a 512-bit again.
In the meantime, they can stick with their cheap PCB with fewer layers, their cooking-hot, crammed-full electromigration core, and have a bunch of loonies who, for the very first time in their lives, actually believe the ghetto is better than Beverly Hills, because they goin' fps shootin', man.
Oh, it's so very nice that so many gamers worry about ATI's unbalanced sheet and how it can be maintained at a higher level. Such a concern on their minds; a great excuse for why ATI cheaps out. I've never seen so many gaming enthusiasts doing so much whoring for a company's bottom line. At the same time, Nvidia is seen as an evil profit center that throws money around influencing the game production industry. LOL
Yes, it's evil for big green to make money, employ reps, toss millions into game channels, be extremely flexible and pump out 20 differing flavors of cards so it's not so boring, and work so games run well on their product; yes, what evil, evil ****rds.
...
Perhaps the little red brokers could cheer some more when they feel ATI "has improved its bottom line" by producing a cheap, knocked-down, thinner, smaller, hotter, less-featured red card with more driver issues, because gamers are so concerned with economics that they love the billion-dollar loser's plotted and carried-out plans, and hate the company rolling in dollars and helping pump out games and a huge variety of gaming cards...
LOL
Yeah, the last red card that really was worth something: the HD2900 512 XTX.
That's the other thing that is so funny about these little broker economy whizzes. After they start yakking about ATI's dirt-cheap product scheme, it really burns them up that the real Cadillac of video cards commands a higher price.
Well, there's a reason a better-made, more expensive to produce, more-featured, more widely supported video card is higher priced.
"The great economists" then suddenly turn into raging little angry reds, screeching scalping and unfair and greedy... LOL
Oh, it's a hoot. Reply

There's no need to double the bus... either double the RAM data rate or double the bus width and you accomplish the same thing. But in a nutshell, everything is doubled relative to the HD 4890 except for bandwidth, which only improves by 23%. Similarly, everything is more than double the 4870X2 (and you don't even need to deal with CrossFire stuff), but the 4870X2 has 50% more total bandwidth.

ATI almost certainly isn't completely bandwidth limited with 4890/4870X2, but I think 5870 might just be fast enough that it's running into bandwidth limitations. On the other hand, bandwidth limitations are largely dependent on the game and algorithm. For instance, the Quake/Quake World/Doom games have been extremely bandwidth intensive in the past, and some of the titles Anand tested fall into that category. However, I know of other games that appear to be far less dependent on bandwidth, and the more programmable stuff going on, the more important shader performance becomes.

In the past, Oblivion was a great example of this. NVIDIA's 7800/7900 cards had a lot of bandwidth relative to shader performance, while ATI went the other route. Oblivion was really a strong ATI title (X1800/X1900 series) up until NVIDIA released 8800, which greatly improved NVIDIA's shader performance. Most modern titles tend to be a combination of things. Reply
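For reference, the 23% and 50% figures above fall straight out of peak-bandwidth arithmetic (bus width in bytes times effective transfer rate), using the effective memory clocks quoted elsewhere in this thread (4800 for the 5870, 3900 for the 4890, 3600 for each GPU of the 4870X2). A quick sanity check:

```python
def peak_bandwidth_gbps(bus_bits, effective_mhz):
    # Peak memory bandwidth in GB/s: (bus width in bytes) * transfers per second.
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

hd4890   = peak_bandwidth_gbps(256, 3900)       # 124.8 GB/s
hd5870   = peak_bandwidth_gbps(256, 4800)       # 153.6 GB/s
hd4870x2 = 2 * peak_bandwidth_gbps(256, 3600)   # 230.4 GB/s total (two GPUs)

print(f"5870 vs 4890:   +{(hd5870 / hd4890 - 1) * 100:.0f}%")    # ~+23%
print(f"4870X2 vs 5870: +{(hd4870x2 / hd5870 - 1) * 100:.0f}%")  # ~+50%
```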

Well, no one makes double the RAM data rate; there is NO SUCH DDR5. (No one ever said there was.)
None of it runs at 7200 for video cards.
NVIDIA is using a 512-bit bus and 448-bit+ on its top cards, so what is ATI's problem, when that's the only thing available? (They don't need it enough to increase the cost of the cards to get it.)
Furthermore, the core is still 850, so have the data pins in and out of the core doubled? I RATHER DOUBT IT. (Obviously they didn't; the specs say it's 256-bit. Did you not read the post?)
So, conceivably, we have twice the data to move, on the same core speed, with less than double the DATA PINS in and out. (No, we don't have twice the data to move, unless the 4890 totally maxed out what the RAM could provide. ATI doesn't think this happened, so they only marginally increased bandwidth.)
If the bandwidth is NOT the problem, as you claim, then since everything ELSE you say has doubled, the conclusion we have is that the ATI core is not up to the task. (If it truly had doubled in every area and performance didn't double, we'd have a problem. The conclusion sane people will draw is that ATI looked at cost and benefit and decided a 256-bit bus was sufficient for the present. Otherwise they'd need a more complex circuit board, which would increase complexity and cost.)
That's it; its core tech is so much the same....
LOL
Just love those ATI arguments. (There was no argument, but I'm a troll so I created one!)
When the CORE is overclocked, we will see a framerate increase.
SOOOOO.....
Tell me how the core handles TWICE THE DATA in and out, unless its pinout count has doubled? Is there that much wasted time on the 4890 pins, or on the current 5870 pins? (No one said the core handles twice as much data; theoretically it can, but then deeper buffers would help.)
It may handle double the data, or nearly, internally, but that has to communicate with the RAM etc. onboard.
SORRY, once again, not agreeing. (Agreeing with what, that the bandwidth only increased by 23%? Wow, that's amazing. You'd disagree if someone said the sun rises in the east, wouldn't you? Try reading next time before responding instead of arguing for the sake of argument.) Reply

The point of cache on the GPU is so it doesn't need to read and write to DRAM too often. The speed of the texture cache on the 5870 is 1 TB/sec, and it's SRAM. And that's just the texture cache. It just shows how much speed is needed to utilize the raw computing power on the chip. They surely tested the chip with higher-speed memory and ended up with this bandwidth compromise.
Also, you can't compare just the bare peak bandwidth. The type of memory controller and the speed of the GPU (and also the cache) change the real-world bandwidth, like we see with different CPU models and speeds.
When you read xxx GB/s of bandwidth, it doesn't mean it's always that fast (they always call it peak bandwidth). Reply

The speed of the on-chip cache just shows that the external memory bandwidth in current GPUs exists only to get data to the GPU or receive the final data from it. The raw processing happens on-chip with that 10-times-faster SRAM cache, or else the raw teraflops would vanish. Reply

If SD had any reading comprehension or understanding of tech, he would realize that what I am saying is:

1) Memory bandwidth didn't double - it went up by just 23%
2) Look at the results and performance increased by far more than 23%
3) Ergo, the 4890 is not bandwidth limited in most cases, and there was no need to double the bandwidth.

Would more bandwidth help performance? Almost certainly, as the 5870 is such a high performance part that unlike the 4890 it could use more. Similarly, the 4870X2 has 50% more bandwidth than the 5870, but it's never 50% faster in our tests, so again it's obviously not bandwidth limited.

Was it that hard to understand? Nope, unless you are trying to pretend I put an ATI bias on everything I say. You're trying to start arguments again where there was none. Reply

The 4800 data-rate RAM is faster vs. the former 3600; hence the bus is running FASTER, so your simple conclusions are wrong.
When we overclock the 5870's RAM, we get a framerate increase; it increases the bandwidth, and up go the numbers.
---
Not like there isn't an argument, because you don't understand tech. Reply

The bus is indeed faster -- 4800 effective vs. 3900 on the 4890 or 3600 on the 4870. What's "wrong about my simple conclusions"? You're not wrong, but you're not 100% right if you suggest bandwidth is the only bottleneck.

Naturally, as most games are at least partially bandwidth limited, if you overclock 10% you increase performance. The question is, does it increase linearly by 10%? Rarely, just as if you overclock the core 10% you usually don't get 10% boost. If you do get a 1-for-1 increase with overclocking, it indicates you are solely bottlenecked by that aspect of performance.
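That 1-for-1 test can be written down as a tiny helper; the fps numbers below are hypothetical, purely to show how the ratio is read:

```python
# Interpreting overclock scaling (made-up numbers): if a 10% memory
# overclock yields a 10% fps gain, the workload is fully bandwidth
# bound; a smaller gain means other stages share the bottleneck.

def scaling_efficiency(base_fps, oc_fps, oc_percent):
    """Fraction of the overclock that showed up as extra fps (1.0 = fully bound)."""
    gain = oc_fps / base_fps - 1
    return gain / (oc_percent / 100)

# Hypothetical results from a 10% memory overclock:
print(round(scaling_efficiency(60.0, 66.0, 10), 3))  # 1.0 -> fully bandwidth limited
print(round(scaling_efficiency(60.0, 62.4, 10), 3))  # 0.4 -> only partially limited
```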

So my conclusions still stand: the 5870 is more bandwidth limited than 4890, but it is not completely bandwidth limited. Improving the caches will also help the GPU deal with less bandwidth, just as it does on CPUs. As fast as Bloomfield may be with triple-channel DDR3-1066 (25.6GB/s), the CPU can process far more data than RAM could hope to provide. Would a wider/faster bus help the 5870? Yup. Would it be a win-win scenario in terms of cost vs. performance? Apparently ATI didn't think so, and given how quickly sales numbers taper off above $300 for GPUs, I'm inclined to agree.

I'd also wager we're a lot more CPU limited on 5870 than many other GPUs, particularly with CrossFire setups. I wouldn't even look at 5870 CrossFire unless you're running a high-end/overclocked Core i7 or Phenom II (i.e. over ~3.4GHz).

And FWIW: Does any of this mean NVIDIA can't go a different route? Nope. GT300 can use 512-bit interfaces with GDDR5, and they can be faster than 5870. They'll probably cost more if that's the case, but then it's still up to the consumers to decide how much they're willing to spend. Reply

I suppose if we end up seeing a 512-bit card then it'll make for a very interesting comparison with the 5870. With equal clocks during testing, we'd have a far better idea, though I'd expect to see far more RAM on a 512-bit card which may serve to skew the figures and muddy the waters, so to speak. Reply

I'll ask and find out. I know that the comments are supposed to receive a nice overhaul, but more than that...? Of course, if you ignore his posts on this (and the responses), you'd only have about five comments! ;-) Reply

I put it in caps so you could easily avoid them; I was thinking of you and your "problems".
I guess since you "knew this wasn't the right time or place" but went ahead anyway, you've got "lots of problems".
Let me know when you have posted an "interesting comment" with no "dubious nature" to it.
I suspect I'll be waiting years. Reply

The day you posted your review, I wrote in the forums that, according to my perception, there are reasons other than bandwidth limitations and driver maturity why the 850MHz 5870 hasn't doubled its performance relative to an 850MHz 4890.

Usually when a GPU has 2X the specs of another GPU, the performance gain is 2X (of course I am not talking about games with engines that are CPU limited, or engines that scale badly or are poorly coded, for example).
There are many examples in the past where we had a 2X performance gain with 2X the specs (not in all games, but in many games).

From the tests that I saw in your review and from my understanding of the AMD slides, I think there are 2 more reasons that the 5870 performs like this.

The day of your review I posted to the forums the additional reasons I think the 5870 performs like this, but nobody replied.

I wrote that the 5870 probably has:

1. Geometry/vertex performance issues (in the sense that it cannot generate 2X the geometry relative to the 4890) (my main assumption)

I guess there are synthetic benchmarks with tests like that (pure geometry speed, and pure geometry/vertex shader speed, in addition to the classic pixel shader speed tests), so someone can see if my assumption is true.

If you have the time, think this is possible, and feel it is worth your while, can you check my hypothesis please?

Would perhaps some of the subtests in 3DMark06 be able to test this? (Not sure about Vantage; I've never used that yet.) Though given what Jarred said about the bandwidth and other differences, I suppose it's possible to observe large differences in synthetic tests which are not the real cause of a performance disparity.

The trouble with heavy GE tests is that they often end up loading the fill rates anyway. I've run into this problem with the SGI tests I've done over the years:

The larger landscape models used in the Inventor tests are a good example. The points models worked better in this regard for testing GE speed (stars3/star4), but I don't know to what extent modern PC gfx is designed to handle points modelling; it probably works better on pro cards. Actually, Inventor wasn't a good choice anyway, as it's badly CPU-bound and API-heavy (I should have used Performer, which gives results 5 to 10X faster).

Anyway, the point is, synthetic tests might allow one to infer that one aspect of the gfx pipeline is a bottleneck when in fact it isn't.

Ages ago I emailed NVIDIA (Ujesh, whom I used to know many moons ago, but alas he didn't reply) asking when, if ever, they would add performance counters and other feedback monitors to their gfx products so that applications could tell what was going on in the gfx pipeline. SGI did this years ago, which allowed systems like IR to support impressive functions such as Dynamic Video Resizing, by being able to monitor, frame by frame, what was going on within the gfx engine at each stage. Try loading any 3D model into perfly, press F1 and click on 'Gfx' in the panel (Linux systems can run Performer), eg.:

Given how complex modern PC gfx has become, it's always been a mystery to me why such functions weren't included long ago. Indeed, for all that Crysis looks amazing, I was never that keen on it being used as a benchmark, since there was no way of knowing whether the performance hammering it created was due to a genuinely complex environment or just an inefficient gfx engine. There's still no way to be sure.

If we knew what was happening inside the gfx system, we could easily work out why performance differences between different apps/games crop up the way they do. And I would have thought that feedback monitors within the gfx pipe would be even more useful to those using professional applications, just as they were for coders working on SGI hardware in years past.

Come to think of it, how do NVIDIA/ATI even design these things without being able to monitor what's going on? Jarred, have you ever asked either company about this?

I haven't personally, since I'm not really the GPU reviewer here. I'd assume most of their design comes from modeling what's happening, and with knowledge of their architecture they probably have utilities that help them debug stuff and figure out where stalls and bottlenecks are occurring. Or maybe they don't? I figure we don't really have this sort of detail for CPUs either, because we have tools that know the pipeline and architecture and they can model how the software performs without any hardware feedback. Reply

You will see that a 750MHz 4870 512MB is 20-23% faster than a 625MHz 4850 in all of the above 4 tests, so the extra bandwidth (115.2GB/s vs 64GB/s) doesn't help at all.
But the 4850 is extremely bandwidth limited in the color fillrate test (the 4870 is 60% faster than the 4850).

Also, it shouldn't be a problem of dual rasterizer/dual SIMD engine efficiency, since the synthetic pixel shader tests are fine (more than 2X) while the synthetic geometry shading tests are only 1.2X.

My guess is that ATI didn't improve the classic geometry setup engine and the GS because they want to promote vertex/geometry techniques based on the DX11 tessellator from now on.
Reply
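The reasoning above boils down to comparing each measured speedup against the core-clock ratio (750/625 = 1.2): a test that gains only ~20% ignored the extra bandwidth, while one that gains 60% clearly used it. A rough sketch of that heuristic (the 3% slack for measurement noise is an arbitrary choice, and the inputs mirror the percentages quoted in the post):

```python
# Separating core-clock scaling from bandwidth scaling between a
# 625MHz HD 4850 (64 GB/s) and a 750MHz HD 4870 (115.2 GB/s).
core_ratio = 750 / 625   # 1.20 -> +20% available from clock alone
bw_ratio = 115.2 / 64    # 1.80 -> +80% more bandwidth available

def likely_bottleneck(speedup):
    """Classify a measured 4870-over-4850 speedup (illustrative heuristic only)."""
    if speedup <= core_ratio + 0.03:   # within ~clock scaling plus noise
        return "core-clock bound (extra bandwidth unused)"
    return "bandwidth sensitive"

print(likely_bottleneck(1.22))  # geometry tests gained ~20-23%
print(likely_bottleneck(1.60))  # color fillrate test gained 60%
```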

In DX11 the fixed tessellation units will do much finer geometry detail for much less memory space, and on-chip, so I think there isn't a single problem with that. Also, the compute shader needs minimal memory bandwidth and can utilize plenty of idle shaders. The card is designed with DX11 in mind, and it isn't using the whole pipeline after all. I wouldn't draw conclusions too early. (I think the performance will be much better after a few driver releases.)

For the DX11 tessellator to be utilized, the game engine must take advantage of it.
I am not talking about the tessellator.
I am talking about the classic geometry unit (DX9/DX10 engines) and the Geometry Shader [GS] (DX10 engines only).

I'll check to see if I can find a tech site that has a synthetic benchmark for geometry-related performance, and I will post again tomorrow if I can find anything.

It's worth noting that when you factor in clock speeds, compared to the 5870 the 4870X2 offers 88% of the core performance and 50% more bandwidth. Some algorithms/games require more bandwidth and others need more core performance, but it's usually a combination of the two. The X2 also has CrossFire inefficiencies to deal with.

More interesting perhaps is that the GTX 295 offers (by my estimates, which admittedly are off in some areas) roughly 10% more GPU shader performance, about 18.5% more fill rate, and 46% more bandwidth than the HD 5870. The fact that the HD 5870 is still competitive is a good sign that ATI is getting good use of their 5 SPs per Stream Processor design, and that they are not memory bandwidth limited -- at least not entirely. Reply

The 4870X2 has somewhere around double the data paths in and out of its two GPUs. So what you have with the 5870, which some have characterized as "two RV770 cores melded into one", is DOUBLE THE BOTTLENECK in and out of the core.
They tried to compensate with GDDR5 at 1200/4800, but the fact remains, they only get so much with that: NOT ENOUGH DATA PATHS/PINS in and out of that GPU core.
Reply

Omg these cards look great. Lol, SiliconDoc is so gutted and furious he is making himself look like a damn fool again, only this time he should be on suicide watch... Nvidia cards are now obsolete... LOL. Reply

Hehe, indeed. Have you ever seen a sci-fi series called "They Came From Somewhere Else"? S.D.'s getting so worked up, it reminds me of the scene where the guy's head explodes. :D

Hmm, that's an alternative approach I suppose, in place of post moderation. Just get someone so worked up about something that they'll have an aneurysm and pop their clogs... in which case, I'll hand it back to Jarred. *grin*

One has to wonder, if you do as you say and only read the first sentence and move on, why you would care what I've typed, since you cannot imagine anyone doing anything different. Heck, you shouldn't even notice this, right, liar?

Yes, another liar, not amazing, not at all.

No need to modify or delete the sentence prior to this, JarredWalton; the smarty-pants insulter won't read it, but I'm sure you can't resist, for "convenience's" sake of course.

Oh, I don't have to bring anything up on topic at all, because neither did the lowlife scum who isn't reading; he just got his nose awfully browner. Reply

Very happy to have everyone here convinced you don't know what you're talking about? That's the only "truth" you've brought to this party. Marketing generally wants reputable people to promote a product - the "every man" approach. Funny that we don't see crazy people espousing products on TV (well, excepting stuff like Sham Wow!)

Being crazy like you are in this thread only cements your status as someone who doesn't have a firm grip on reality - someone that can't be trusted. Thanks again for clearing that up so thoroughly.

I'm sure you spend your time drooling in front of a TV after you spank your joystick for fps, so you know all about the wacky commercials you have memorized. Besides, it's a pathetic, all-you-have-left insult, off topic, who cares, pure hatred, no real response, and the 5870 double epic fail IS THE HOTTEST ATI CARD OF ALL TIME! Reply

Let's have your claimed specialty outlined here in context, let's have you come clean on LAPTOP GRAPHICS, and spread the truth about how NVIDIA is so far ahead and has been for quite some time, that it's a JOKE to buy a gaming laptop with ATI graphics on board.
Come on, mister!
--
Now that is REALLY FUNNY ! You grabbed your arrogant unscrupulous self and proclaimed your fairness, but picked a spot where ati is completely EPIC FAIL, and NVIDIA is 1000% the only way to go, PERIOD, and left that MAJOR slap in the face high and dry.
--
Great job, yeah, you're the "sane one".
LOL Reply

Nvidia fanboy, troll, loser... take your GeForce cards and go home... we can now all see how terrible ATi is thanks to you... so I really don't understand why people are beating down their doors for the 5800 series, just like people did for the 4800 and 3800 cards. I guess Nvidia fanboy trolls like you have only one thing left to do, and that's complain and cry like the itty-bitty babies that some of you are about the competition that's beating you like a drum... so you just wait for your 300 series cards to be released (can't wait to see how many of those are available) so you can pay the overpriced premiums that Nvidia will be charging AGAIN!... hahaha... just like all that re-badging BS they pulled with the 9800 and 200 cards... what a joke!.. Oh my, I must say you have me in a mood, and the ironic thing is I do like Nvidia as much as ATi; I currently own and use both. I just can't stand fools like you who spout nothing but mindless crap while waving your team flag (my card is better than yours... WhaaWhaaWhaa)... just take yourself along with your worthless opinions and slide back under that slimy rock you came from. Reply

You've been insulting in this whole thread, so don't go crying to mamma about someone pointing that out. I did go and delete the posts from the person calling you gay and suggesting you should die in various ways, because as bad as you've been you haven't stooped quite that low (yet).

Laptop issues with ATI... you mean like this: http://www.anandtech.com/mobile/showdoc.aspx?i=356... Granted, I gave them a chance to address the issues. They failed, and my full article on the various Clevo high-end notebooks will make it quite clear how far ahead NVIDIA is in the mobile sector right now.

"Fair" is treating both sides objectively. ATI has major problems with getting updated graphics drivers out on mobile products, and that's horrible. On the desktop, they don't have such issues for the most part. Yeah, you might have to wait a month or so for a driver update to fix the latest hot release and add CrossFire support... but you have to do the exact same thing for NVIDIA with about the same frequency. Only SLI and CF setups really need the regular driver updates, and in many cases the latest 18x and 19x NVIDIA drivers are slower than 16x and 17x on games that are older than six months.

Fair is also looking at these results and saying, "Gee, I can get a 5870 for $400 (or $360 if you wait a few weeks for supply to bolster up), and that same card has no CrossFire or SLI wonkiness and costs less than the GTX 295 and 4870X2." Okay, the 4870X2 and GTX 295 beat it in raw performance in some cases, but I don't think there's a single game where you can say one HD 5870 offers less than acceptable performance at 2560x1600, and I can guarantee there are titles that still have issues with SLI and CrossFire. (Yeah, you need to turn down some details in Crysis to get acceptable performance, but that's true of anything other than the top SLI and CF configs.) I would be more than happy to give up a bit of performance to avoid dealing with the whole multi-GPU ordeal. Why don't you tell us how innovative and awesome tri-SLI and quad-SLI are while you're at it?

At present, you have contributed more than 20% of the comments to this article, and not a single one has been anything but trolling. Screaming and yelling, insulting others, lying and making stuff up, all in support of a company that is just like any other big company. We don't ban accounts often, but you've more than warranted such action. Reply

I think it is more than absolutely clear that, in fact, I said my piece in my first post and was absolutely attacked. I didn't attack, I got attacked, and in fact you have done plenty of attacking as well.
I have also provided links to back up my assertions and counterarguments, added the text for easy viewing, and pointed out in very specific detail what issues with bias I had and why.
--
Now you've claimed "all I've done is post FUD".
It is nothing short of amazing for you to even posit that; however, I can certainly understand how anyone pointing out the obvious bias problems (in the article, no less) is "on thin ice", and after getting attacked, is solely blamed for "no facts".
---
I certainly won't disagree that the 5870 appears to be a good value, especially if you don't like to deal with 2 cards or 2 cores.
But my posts never claimed otherwise. I first claimed it was not as good as wanted, was disappointing, and therefore was not the end of what ATI had in store.
Since then I have posted on the 5890, which will in fact be 512-bit.
Now, you don't like losing your points, or someone adept enough, smart enough, and accurate enough to counter them.
Sorry about that, and sorry that I won't just lie down as more heaps are shovelled my way.
You skip my actual points and go off on some other tangent.
1. PhysX is an advantage and the best implementation so far.
Your response: "It sucks because only 2 games are available."
---
Is that correct for you to do? Is it not the very best so far? Yes, it is in fact.
I have remained factual and reasonable, and glad enough to throw back when I'm attacked.
But the fact remains, I have made absolutely solid 100% points, no matter how many times you claim "lying and making stuff up".
---
Yet of course, what I just said about NVIDIA and laptop chips, you agreed with. So according to your own characterization (quite unfair), all you do is scream and lie, too.
Just wonderful.
The GT300 is going to blow this 5870 away - the stats themselves show it, and betting otherwise is a bad joke, and if sense is still about, you know it as well. Reply

"The GT300 is going to blow this 5870 away - the stats themselves show it, and betting otherwise is a bad joke, and if sense is still about, you know it as well."
If not, then it would be a sad day for nvidia after more than 2 years of nothing.
But the 5870 can still blow away any other card around. With DX11 fixed tessellators in the pipeline and compute shader postprocessing (which will finally tax the 1600 stream processors), it will look much better than current DX9. The main advantage of this card is DX11, which of course nvidia doesn't have. And maybe the developers will finally optimize for the ATI vector architecture, if there isn't any other "way it's meant to be played (paid)" :).
Actually, nvidia couldn't shrink the last gen to 40nm with its odd high-frequency scalar shaders (which means much more leakage), so I wouldn't expect much more from nvidia than to double the stats and add DX11 either.
Reply

According to our info, Nvidia's GT 220, 40nm GT216-300 GPU, will be officially announced in about three weeks time.

The Gigabyte GT 220 works at 720MHz for the core and comes with 1GB of GDDR3 memory clocked at 1600MHz and paired up with a 128-bit memory interface. It has 48 stream processors with a shader clock set at 1568MHz.
--
See, there's a picture as well. So it's not like nothing has been going on. It's also not "a panicky response to regain attention"; it is the natural progression of moving to newer cores and smaller processes, and those things take time, but NOT the two-plus years that some ideas spreading around suggest.... Reply
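For what it's worth, the GT 220 specs quoted above imply a specific memory bandwidth. A quick sketch, assuming the 1600MHz figure is the effective GDDR3 data rate (an assumption on my part; the post doesn't say whether it's a base or effective clock):

```python
# Memory bandwidth implied by the quoted GT 220 specs, assuming
# 1600 MHz is the effective GDDR3 data rate (an assumption, not
# stated in the post): data rate x bus width / 8 bits per byte.

data_rate_hz = 1600e6          # effective transfers per second
bus_width_bits = 128
bandwidth_gbs = data_rate_hz * bus_width_bits / 8 / 1e9
print(bandwidth_gbs)           # 25.6 GB/s
```

That 25.6 GB/s puts the part firmly in the budget class, consistent with its 48 stream processors.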

The GT200 was released on June 16th and June 17th, 2008: definitely not more than 2 years ago, but less.
The 285 (the 55nm die shrink) came on January 15th, 2009, less than 1 year ago.

The 250 on March 3rd this year, though I wouldn't argue you not wanting to count that.

I really shouldn't have to state that you are so far off the mark; it is you, not I, who might heal with correction.

Next, others here, the article itself, and the ATI fans themselves won't even claim the 5870 blows away every other card, so that statement you made is wrong as well.
You could say every other single-core card, no problem.
Let's hope the game scenes do look much better; that would be wonderful, and a fine selling point for W7 and DX11 (and this card, if it is, or remains, the sole producer of your claim).
I suggest you check around; the DX11 patch for Battleforge is out, so maybe you can find confirmation of your "better looking" games.

"Actually nvidia couldn't shrink the last gen to 40nm with its odd high frequency scalar shaders"
I believe that is also wrong, as they are doing so for mobile.
What is true is that ATI was the guinea pig for 40nm, and you see, their yields are poor, while Nvidia, breaking in later, has had good yields on the GT300, as stated by Nvidia directly, after ATI made up and spread lies about 9 or fewer per wafer. (That link is here; I already posted it.)
---
If Nvidia's card doubles the performance of the 285, twice the frames at the same resolution and settings, I will be surprised, but the talk is that or more. The leaked stats make that a possibility. The new MIMD approach may be something else; I read 6x-15x the old performance in certain core areas.
What Nvidia is getting ready to release is actually EXCITING, and easily described as revolutionary for video cards, so yes, a flop would be a very sad day for all of us. I find it highly unlikely, near impossible; the silicon is already being produced.
The only question left is how much better.
Reply

I want to note also that the 5870 is hard to find, which means 40nm at TSMC is still far from last-gen 55nm maturity. After they manage to get it to the 4800 level, the prices will be different. And maybe the GT300 is still far away. Reply

Is there any posting moderation here? Some of the flame/troll posts are getting ridiculous. The same thing is happening on Tom's these days.

Jarred, best not to reply IMO. I don't think you're ever going to get a logical response. Such posts are not made with the intent of having a rational discussion. Remember, as Bradbury once said, it's a sad fact of nature that half the population has got to be below average. :D

I would like to see some more tests on power load. Actually, I don't think there are too many people with 2560x1600 displays. There are still plenty of people with much lower resolutions like 1680x1050 who play with max image quality. With VSync on, if you reduce 200fps to 60fps you get much lower GPU temps and also power load. (Things like FurMark are miles away from the real world.) On an LCD there is no need to turn it off unless you benchmark and want more fps. I would like to see more power load tests at different resolutions with VSync on. (And of course not with Crysis.)
Also some antialiasing tests; the adaptive antialiasing on the 5800 is now much faster, I read. And FSAA is of course blurry; it always was. If you render the image at 5120x3200 for your 2560x1600 display and then combine the quad pixels, it will seem a little washed out. Also at those resolutions even the high-res textures will begin to pixelate in close-up, so the combined quad pixels won't resemble the original. Without higher detail in the game, FSAA at high resolution will just smooth the image even more. Actually it has always worked this way, even on my GF4800.

PS - a good friend of mine has an ATI HD2900 Pro with 320 shaders (a strange one not generally listed anywhere) that he bought around the time I was showing him around the egg cards, and I bought an ATI card as well.
Well, he's a flight simmer, and it has done quite well for him for about a year, although in his former Dell board it was LOCKED at 500MHz core / 503MHz memory in both 2D and 3D, no matter what utility was tried, and the only driver that worked was the CD that came with it.
Next, on a 570 SLI board he got a bit of relief, and with 35 or so various installs, he could sometimes get 3D clocks to work.
Even with all that, it performed quite admirably in the strange FS9 for not being an NVidia card.
Just the other day, on his new P45 triple PCI-E slot board, after 1-2 months running, another friend suggested he give OCCT a go, and he asked what is good for stability (since he made 1600FSB and an impressive E4500 (Conroe) overclock from 200x11/2200 stock to 400x8/3200MHz for the past week or two). "Ten minutes is OK, an hour is great" was the response.
Well, that HD2900 Pro and OCCT didn't like each other much either - and DOWN went the system, 30 seconds in, cooking that power supply!
--- LOL ---
Now that just goes to show you that hiding this problem with the 4870 and 4890 for so many months, not a peep here... is just not good for end users...
---
Thanks a lot tight lipped "nothing is wrong with ATI cards" red fans. Just wonderful.
---
He had another PSU and everything else is OK, and to be quite honest and frank and fair as I nearly always am, that HD2900 Pro still does pretty darn well for his flight simming (and a few FPS games we all play sometimes), and he has upgraded EVERYTHING else already at least once, so on that card, well done ATI (well, despite the driver issues, 3D MHz issues, etc).
Oh, and he also has a VERY NICE OC on it now (P45 board) - from 600/800 core/mem in 3D to 800/1000, and 2D is 500/503, so that's quite impressive. Reply

Ahh, isn't that typical: the 3 or 4 commenters raving for ATI "go dead silent" on this FurMark and OCCT issue.
--
"It's ok, we never mentioned it for a year plus as we guarded our red fan inner child."
(Ed. note: We "heard" nvidia has a similar implementation.)
WHATEVER THAT MEANS !
---
I just saw (somewhere else) another red fanatic bloviating about his 5870 only getting up to 76C in Furmark. ROFLMAO
--
Aww, the poor schmuck didn't know the VRM's were getting cut back and it was getting throttled from the new "don't have a heat explosion" tiny wattage crammed hot running ati core. Reply

Well you brought to mind another sore spot for me.
In the article, when the 4870 and 4890 FAIL FurMark and OCCT, we are suddenly told, after this many MONTHS, that a blocking implementation in ATI driver version 9.2 was put forth by ATI so their cards wouldn't blow to bits on FurMark and OCCT.
I for one find it ABSOLUTELY AMAZING that this important information took this long to actually surface at red central - or rather, I find it all too expected, given the glaring, ever-present bias.
Now, the REASON this information finally came forth, other than the epic fail, is BECAUSE the 5870 has a "new technology" and no red fan site can possibly pass up bragging about something new in a red core.
So the brag goes on about how protection bits in the 5870 core THROTTLE BACK the VRMs when heat issues arise, hence allowing cheap VRMs and/or heat dissipation issues to avert disaster.
That's wonderful: we finally find out about ANOTHER 4870/4890 HEAT ISSUE, and a driver hack by ATI, and a failure to RUN THE COMMON BENCHMARKS we have all used, this late in the game.
I do have to say, the closed-mouthed, rabid-foam-contained red fans are to be applauded for their collective silence over this course of time.
Now, further bias revealed in the article: the EDITOR makes sure to chime in and notes, ~"we have heard Nvidia has a similar thing".
What exactly that similar thing is, and from whom the Editor supposedly heard it, is never made clear.
Nonetheless, it is a hearty excuse and edit for, and in favor of, EXCUSING ATI's FAILURES.

Nonetheless, ALL the nvidia cards passed all the tests, and three quarters of them ran cooler than the ATI cards, which were all at 88C-90C; but of course the review blandly whitewashed it, blaming BOTH competitors, nvidia and ATI alike: "temps were all over the place" (on load).
(The winning card, the GTX 295, at 90C, and the 8800 GT noted at 92C *at MUCH reduced case heat and power consumption, hence INVALID as a comparison.)
Although certain to mention the 8800 GT at 92C, there is no mention of the 66C GTS 250 or GTX 260 winners, just the GTX 275 at 75C, since that was less of a MAJOR EMBARRASSMENT for the ATI heat load monsters!
Now THERE'S A REAL MONSTER - AND THAT'S ALL OF THE ATI CARDS TESTED IN THIS REVIEW! Not just against the winning nvidia card, while the others that matter (which can beat the 5870 in SLI or otherwise) hung 24C to 14C lower under load.
So, after all that, we have ANOTHER GLARING BIAS: the review states "there are no games we know of, nor any we could find, that cause this 5870 VRM heat throttling to occur".
In other words, we are to believe it was done in the core just for FurMark and OCCT, or that it was done as a precaution and would NEVER come into play. Yes, rest assured, it just won't HAPPEN in the midst of a heavy firefight when your trigger finger is going 1,000mph...
So, I found that just CLASSIC for this place. Here is another massive heat issue, revealed for the first time for the 4870 and 4890 months and months and months late, and then the 5870 is given THE GOLDEN EXCUSE and a massive pass, as that heat-reducing, VRM-cooling, voltage-cutting, framerate-lowering safety feature "just won't kick in on any games".
ROFL
I just can't help it, it's just so dang typical. Reply

I always get nostalgic for Tech TV when a new gen of video cards come out. Watching Leo, Patrick, et al. discuss the latest greatest was like watching kids on Christmas morning. And of course there was Morgan.
Reply

SiliconDoc, this is pathetic. Why are you so upset? No one cares about arguing the semantics of hard or paper launches. Besides, where the F is Nvidia's GT300 thingy? You post here more than AMD fanboys, yet you hate AMD... just hibernate until the GT300 launches and then you can come back and spew hatred again.

Seriously... the fact that you can't even formulate a cogent argument based on anything performance-related tells me that you have already ceded the performance crown to AMD. Instead, you've latched onto this red herring, the paper launch crap. Stop it. Just stop it. You're like a crying child. Please just be thankful that AMD is now allowing you to obtain more of your Nvidia panacea for even less money!

Hooray competition! EVERYONE WINS! ...Except SiliconDoc. He would rather pay $650 for a 280 than see ATI sell one card. ATI is the best thing that ever happened to Nvidia (and vice versa). Grow the F up and don't talk about bias unless you have none yourself. Hope you don't electrocute yourself tonight while making love to your Nvidia card. Reply

"Hooray competition! EVERYONE WINS! ...Except SiliconDoc. He would rather pay $650 for a 280 than see ATI sell one card."
And thus you have revealed your deep-seated hatred of Nvidia, in the common parlance seen.
Frankly, my friend, I still have archived web pages with $500 HD2900XT cards from not that long back, which would easily be $700 now with the inflation we've seen.
So really, what is your red raving rooster point, other than that you totally excuse ATI, which does exactly the same thing, and make your raging hate-Nvidia whine as if "they are standalone guilty"?
You're ANOTHER ONE who repeats the same old red fan clichés and WON'T OWN UP TO ATI'S EXACT SAME BEHAVIOR! Will you? I WANT TO SEE IT IN TEXT!
In other words, your whole complaint is INVALID, because you apply it exclusively, in a BIASED fashion.
Now tell me about the hundreds-of-dollars-overpriced ATI cards, won't you? No, you won't. See, that is the problem. Reply

If you think companies are going to survive without copying what other companies do, you're sadly mistaken.

Yes, nVidia has made advances, but so has ATI. When nVidia brought out the GF4 Ti series, it supported Pixel Shader 1.3 whereas ATI's R200-powered 8500 came out earlier with the more advanced Pixel Shader 1.4. ATI were the first of the two companies to introduce a 256-bit memory bus on their graphics cards (following Matrox). nVidia developed Quincunx, which I still hold in high regard. nVidia were the first to bring out Shader Model 3. I still don't know of any commercially available nVidia cards with GDDR5.

We could go on comparing the two but it's essential that you realise that both companies have developed technologies that have been adopted by the other. However, we wouldn't be so far down this path without an element of copying.

The 2900XT may be overpriced because it has GDDR4. I'm not interested in it and most people won't be.

"In other words, your whole complaint is INVALID, because you apply it exclusively, in a BIASED fashion." Funny, I thought we were seeing that ad nauseam from you?

Why did I buy my 4830? Because it was cheaper than the 9800GT and performed at about the same level. Not because I'm a "red rooster".

ATI may have priced the 5870 a little high, but in terms of its pure performance, it doesn't come too far off the 295 - a card we know to have two GPUs and costs more. In the end, perhaps AMD crippled it with the 256-bit interface, but until they implement one you'll be convinced that it's a limitation. Maybe, maybe not. GT300 may just prove AMD wrong. Reply

You have absolutely zero proof that we wouldn't be further down this path without the "competition".
Without a second company, or a third or fourth or tenth, a monopoly implements DIVISIONS that compete internally, and without other companies, all the intellectual creativity winds up with the same name on its paycheck.
You cannot prove what you say has merit, even if you show me a stagnant monopoly, and good luck doing that.
As ATI stagnated for YEARS, Nvidia moved AHEAD. Nvidia is still ahead.
In fact, it appears they have always been ahead, much like INTEL.
You can compare all you want, but "it seems ati is the only one interested in new technology..." won't be something you'll be blabbing out again soon.
Now you try to pass on a lesson, and JARRED the censor deletes responses, because you two tools think you have a point this time, but only with your deleting and lying assumptions.
NEXT TIME DON'T WAIL THAT ATI IS THE ONLY ONE THAT SEEMS INTERESTED IN IMPLEMENTING NEW TECHNOLOGY.
DON'T SAY IT, THEN BACKTRACK 10,000% WHILE TRYING TO "TEACH ME A LESSON".
You're the one whose big fat red piehole spewed out the lie to begin with.

You're spot on about his bias. Every single post consists of trash-talking pretty much every ATI card and bigging up the comparative nVidia offering. I think the only product he's not complained about is the 4770, though oddly enough that suffered horrific shortage issues due to (surprise) TSMC.

Even if there were 58x0 cards everywhere, he'd moan about the temperature or the fact it should have a wider bus or that AMD are finally interested in physics acceleration in a proper sense. I'll concede the last point but in my opinion, what we have here is a very good piece of technology that will (like CPUs) only get better in various aspects due to improving manufacturing processes. It beats every other single GPU card with little effort and, when idle, consumes very little juice. The technology is far beyond what RV770 offers and at least, unlike nVidia, ATI seems more interested in driving standards forward. If not for ATI, who's to say we'd have progressed anywhere near this far?

No company is perfect. No product is perfect. However, to completely slander a company or division just because he buys a competitor's products is misguided to say the least. Just because I own a PC with an AMD CPU, doesn't mean I'm going to berate Intel to high heaven, even if their anti-competitive practices have legitimised such criticism. nVidia makes very good products, and so does ATI. They each have their own strengths and weaknesses, and I'd certainly not be using my 4830 without the continued competition between the two big performance GPU manufacturers; likewise, SiliconDoc's beloved nVidia-powered rig would be a fair bit weaker (without competition, would it even have PhysX? I doubt it). Reply

Well, that was just amazing, and you're wrong about me not complaining about the 4770 paper launch; you missed it.
I didn't moan about the temperature, I moaned about the deceptive lies in the review concerning temperatures, which gave ATI a complete pass and failed to GIVE THE CREDIT DUE THAT NVIDIA DESERVES because of the FACTS, nothing else.
The article SPUN the facts into a lying cobweb of BS. Just like so many red fans do in the posts, and all over the net, and as you've done here. Is it so hard to MAN UP and admit the ATI cards run hotter? Is it that bad for you, that you cannot do it? Certainly the article FAILED to do so, and spun away instead.
Next, you have this gem " at least, unlike nVidia, ATI seems more interested in driving standards forward."
ROFLMAO - THIS IS WHAT I'M TALKING ABOUT.
Here, let me help you; another "banned" secret that the red roosters keep to their chest so their minions can spew crap like you just did: ATI STOLE THE NVIDIA BRIDGE TECHNOLOGY. ATI HAD ONLY A DONGLE OUTSIDE THE CASE, WHILE NVIDIA PROGRESSED TO AN INTERNAL BRIDGE. AFTER ATI SAW HOW STUPID IT WAS, IT COPIED NVIDIA.
See, now there's one I'll bet a thousand bucks you never had a clue about.
I, for one, would NEVER CLAIM that either company had a lock on "forwarding technology", and I IN FACT HAVE NEVER DONE SO, EVER!
But you red fans spew it all the time. You spew your fanboyisms (in fact you just did) that are absolutely outrageous and outright red-leaning lies, period!
you: "at least, unlike nVidia, ATI seems more interested in driving standards forward...."
I would like to ask you, how do you explain the never-before-done MIMD core Nvidia has and will soon release? How can you possibly say what you just said?
If you'd like to give credit to ATI for going with GDDR4 and GDDR5 first, I would have no problem, but you people DON'T DO THAT. You take it MUCH FURTHER, and claim, as you just did, that ATI moves forward and Nvidia does not. It's a CONSTANT REFRAIN from you people.
Did you read the article and actually absorb the OpenCL information? Did you see Nvidia has an implementation and is "ahead" of ATI? Did you even dare notice that? If not, how the hell not, other than the biased wording the article has, which speaks to your emotionally charged hate-Nvidia mindset:
"However, to completely slander a company or division just because he buys a competitor's products is misguided to say the least."
That is NOT TRUE for me, as you stated it, but IT IS TRUE FOR YOU, isn't it ?
---
You in fact SLANDERED Nvidia, by claiming only ATI drives forward tech, or so it seems to you...
I've merely been pointing out the many statements all about like you just made, and their inherent falsehood!
---
Here next, you pull the ol' switcheroo, and do what you say you won't do, by pointing out that you won't do it! roflmao: "doesn't mean I'm going to berate Intel to high heaven, even if their anti-competitive practices have legitimised such criticism.."
Well, you just did berate them, and just claimed it was justified, cinching home the trashing quickly after you claimed you wouldn't, but you have utterly failed to point out a single instance, unlike myself: I INCLUDE the issues and instances, pointing them out intimately and often in detail, like now.
LOL you: "I'd certainly not be using my 4830 without ...."
Well, that shows where you are coming from, but you're still WRONG. If either company dies, the other can move on, and there's very little chance that the company will remain stagnant, since then they won't sell anything and will die, too.
The real truth about ATI, which I HAVE pointed out before, is IT FELL OFF THE MAP A FEW YEARS BACK, AND ALTHOUGH PRIOR TO THAT TIME IT WAS COMPETITIVE AND PERHAPS THE VERY BEST, IT CAVED IN...
After its "dark period" of failure and despair, where Nvidia had the lone top spot, and even produced the still useful and amazing 8800 GTX Ultimate (with no competition of any note in sight, you failed to notice, even to this day - and claim the EXACT OPPOSITE, because you, a dead-brained red, bought the "rebrand whine" lock, stock, and barrel), ATI "re-emerged", and in fact doesn't really deserve praise for falling off the wagon for a year or two.
See, that's the truth. The big fat red fib you liars can stop lying about is the "stagnant technology without competition" whine.
ATI had all the competition it could ever ask for, and it EPIC FAILED for how many years? A couple, let's say, or one if you just can't stand the truth, and NVIDIA, not stagnated whatsoever, FLEW AHEAD AND RELEASED THE MASSIVE 8800 GTX ULTIMATE.
So really, friend, just stop the lying. That's all I ask. Quit repeating the trashy and easily disproved ATI clichés.
OK? Reply

And what was the 8800 GTX Ultimate other than a pathetic clock-speed bump? After that we waited for the GT200 series which launched at $600. It took ATI to bring the price down, just like it took NVIDIA to bring the ATI prices down.

NVIDIA stagnated while they were on top, just like ATI with the 9700/9800. NVIDIA made a huge misstep with the FX 5800 series, and ATI did the same thing with the X1800 series (and to a lesser extent the X800 parts). All companies have good and bad times. (Pentium 4 ring a bell? What about the Phenom?)

Your posts on this article have contributed nothing whatsoever other than ranting. Paper or hard launch? Paper is when *nothing* is out for a few weeks (or longer). If NVIDIA "launched" GT300 today, that would be paper. ATI has 5870 parts, albeit in limited quantities. GTX 275 certainly wasn't any better than this, but long term it all evens out.

And who cares about how long a company produced the better product? What matters is what they have now. Pentium 4 stunk in comparison to Athlon 64; does that mean no one should even consider Core 2 or Core i7? According to your "logic" that's exactly what we should do. Give it a rest; when NVIDIA launches GT300, we'll see what it can do. We'll also see if it can compete on pricing. Being fastest is only part of the battle, and anything over $300 is going to be a lower selling part. Reply

Well you are ABSOLUTELY LYING about the GTX275 availability, PERIOD.
Next, you didn't refute a single thing I said, but more or less came closer to agreement in many ways, but were WRONG, too.
Now you've decided you can half heartedly claim both sides do the same thing, and even throw in AMD and Intel, let's get to your continuing bias.
You couldn't resist "pathetic clock increase" for the GT8800 Ultimate (would love to see where you said that about the 4890, or the HD2900XTX) , failed to note the OVERPRICED ati card I pointed out, and in your absolute ignorance and CYA, think "stagnation" is something that occurs when "on top" instead of just the natural time it takes to move forward with new technology, after having just completed a round of it.
Once a company makes it "on top" they HAVE SPENT their latest and greatest new tech, and IT TAKES TIME TO GET TO THE NEXT LEVEL.
However, in YOUR MINDS, that is "stagnation". You offer ABSOLUTELY NO TIMETABLE TO EVEN REMOTELY "PROVE" your insane assertion.
You simply want it "ACCEPTED", which is about the DUMBEST theory one can imagine anyway, and I already pointed out EXACTLY why it is SO STUPID.
Let me tell you again, so IT CAN SINK IN FELLA!
When a company "makes it on top" they have just spent their latest, greatest, newest wad of technology - AND IT TAKES TIME TO IMPROVE ON THEIR OWN ACHIEVEMENT !
In fact, they, having just OUTDONE the competition, are to be expected to "NOT COME UP WITH SOME MASSIVE NEW WIN" for the second time, in a row, and "quickly" - the SECOND time, as you fools expect, and even SAY SO, without direct words, because of course, you are FOOLISH and have BOUGHT THE SPIN, like 3rd graders who cannot think for themselves.
You basically "expect the impossible" - another leap forward right after the one just accomplished, before anyone else can even catch up.
YES, IT IS IMMENSELY IDIOTIC! Now you know!
---
You finally come to your senses a bit with: " All companies have good and bad times."
YES THEY DO. But not in your paranoid, conspiratorial world of "stagnation" - once the top is reached. No, you expect a second miracle, in short order, and say so.
----
You also excuse ATI's bad times I pointed out - by kicking yourself in the face doing it, negating your OTHER conspiracy rant "And who cares about how long a company produced the better product?"
Well, if that were actually the case for you, you wouldn't have screamed about stagnation once a company is on top, because obviously YOU DEEPLY CARE ABOUT WHO HAS THE BETTER PRODUCT, AND FOR HOW LONG.
Not only that, you claim, once they are there, they turn flaccid and lazy.... and boy it burns you up !
ROFL, you CONTRADICT YOURSELF, and haven't got a clue you're doing it. That of course means that I have just made a major contribution TO YOU, straightening out your whacked conspiracy thinking, that no doubt was induced and locked in by the constant red fan hatred for nvidia, here at this site, over several years, and on the net widely, as well. Not like here is unique.
---
Now, if you had sense, you'd be more likely to wonder why, when some company is on top, their competition cannot pass them up, or equal them, not "why they sit there stagnating" - meaning, in another sense, one we all relate to, it just drives you nuts the next thing isn't here already - because you, we, everyone wants the next greatest, and so, you BLAME the top dog for not fulfilling your wish immediately, when, they just had, in fact, done so....
Yeah, there is NO END to how insane that rant of yours is, that the reds, widely repeat against Nvidia, and there is absolutely NO BASIS FOR IT AT ALL in reality. Reply

"If either company dies, the other can move on, and there's very little chance that the company will remain stagnant, since then they won't sell anything, and will die, too."

won't do any good. I mean even my 13 year old nephew understands the basics of economics better than this guy.
I think everyone in their right mind agrees that competition always leads to lower prices and more innovation.

Also I can't see where there should be any bias - things like the temp of the 2 ati cards are clearly stated in the article and everyone who can read graphs and the text should be able to get a clear picture of the new card.

Just because some people just read every other sentence doesn't mean the review is biased. Reply

Here, let me point out another problem with your "basic understanding", which is the point you start at, remain at, and finish at:
" won't do any good. I mean even my 13 year old nephew understands the basics of economy better than this guy.
I think every one in their right mind agrees that competition always leads to lower prices and more innovation. "
LET'S TAKE A CURRENT EXAMPLE: PhysX vs Havok vs Bullet Physics vs Pix - all various forms of in game "physics".
Well, what has competition done with this?
You might call it "innovation", but in this case, it should be called FRAGMENTATION, and STAGNATION - due to your "basic understanding" in economics, you can't fathom such a thing, because it doesn't apply to your pat cliché, which you can ATTACK unfairly with.
Now, if there was a MONOPOLY, ( which is what the red fans have been screaming for, a SINGLE STANDARD, thrust down the throats of all the card makers and game makers, they claim, "open standard" is the very best!), a real monopoly, not an EDICT from a "standards board", why we'd already have advancement far beyond what we currently do with the fragmented players and implementations.
So, NO, competition does not always lead to a BETTER END USER experience or FASTER technological implementation.
So much for you and your 13 year old's "understanding".
In this case, competition has led to fragmentation, and lack of implementation in games, and slower advancement, due to the competing players. Reply

That was the worst "counter" to an argument I've ever read. Standards are not the same thing as a monopoly, and I don't even need to use all caps to get that point across. Standards are what we have with memory types, interfaces, and yes even graphics. A "monopoly" on graphics that has everyone move to one standard can be beneficial; certainly having four competing "standards" doesn't really help.

Eventually, the market will select what works best. There used to be a question of OpenGL vs. Direct3D, and that discussion has all but ended. MS put the money and time into DirectX and actually improved it to the point where most programmers stopped caring about using the alternative.

That's why PhysX isn't gaining traction: it has to compete with Havok, which the vast majority of content creators appear to prefer. So NVIDIA can pay companies to use PhysX in games like Batman, but until they actually get people to willingly use their stuff instead of Havok (by improving PhysX), it's not going to "win". And what the companies really want is a standard that works on all hardware, so we're more likely to see OpenCL or Direct Compute take over instead of a proprietary PhysX API. Hence, our discussion in this article about how OpenCL and Direct Compute are promising APIs.

It's not fragmentation, any more than a choice between Chevron, Phillips, BP, Texaco, etc. is "fragmentation" of the oil industry. Just because one implementation isn't dominant doesn't mean the problem is because of competition. Eventually, some implementation will actually get it right and companies will go that route. Clearly that hasn't happened yet, and your beloved PhysX (two titles where it actually matters so far: Mirror's Edge and Batman) is losing based on merit and nothing else. If it was better, people would use it. End of discussion. I guess all the hyper intelligent programmers making amazing games are too stupid to realize how awesome PhysX is without getting help from NVIDIA. It's so great that they'll pay money to Havok to license that API rather than use PhysX for free.

A monopoly on hardware is a different matter, and again no one is screaming for a monopoly except perhaps for you. Nice job trying to add weight to your position by being a rabid fanboy and accusing the opposition of doing exactly what you're doing. If there is only one hardware vendor, what drives them to improve? Nothing but themselves, which leads to stagnation. It really is basic economics that's apparently too much for a fanboy to grasp. I'd like to see more CPU and GPU vendors (well, *good* vendors), but it's difficult to do properly and thus we remain with the current status quo.

Tell me this: how would it hurt anyone for Company X to enter the graphics market and make something that is clearly superior to ATI and NVIDIA offerings and is 100% compatible with current standards like DirectX and OpenGL? The only people that would potentially hurt would be ATI and NVIDIA employees and shareholders. Similarly, how would it hurt for Company Y to come out with a new API for physics that is clearly superior in every way to PhysX, Havok, etc? If it's better, it would become the new de facto standard. Having competition isn't the problem; the problem is competition between lousy options (i.e. GMC, Chrysler, Ford, and Chevy) when what we want is something better.

Now go ahead and use half-coherent ranting and capitals while you ignore everything meaningful in this post and put up another tirade about how stupid and horrible I am with no clear comprehension of reality. I'm done. Reply

--
ROFLMAO 2 games...
--
Larrabee will use the x86 instruction set with Larrabee-specific extensions
Larrabee will include very little specialized graphics hardware, .... using a tile-based rendering approach
---
Larrabee's early presentation has drawn some criticism from GPU competitors. At NVISION 08, several NVIDIA employees called Intel's SIGGRAPH paper about Larrabee "marketing puff" and told the press that the Larrabee architecture was "like a GPU from 2006".[8] As of June 2009, prototypes of Larrabee have been claimed to be on par with the nVidia GeForce GTX 285.[9]
http://en.wikipedia.org/wiki/Larrabee_(GPU)
---
So in this case we have 3 warring parties (your beloved "beneficial" competition), and endless delays, lack of game development and content because of that, and the score won't be settled till the MONOPOLY POWER sets "the standard" (opencl you hope it seems / aka JAVA for physx, or anything so long as it isn't PhysX, right?) as you call it, and even then, with the nature of game coding, it is highly likely that more than one type and implementation will widely survive. The "market competition" picked VHS over BETA, and nearly everyone calls that a mistake to this day (psst, there were powerful players behind the scene just like in the physics game wars).
What really happens in what we're talking about is POWER picks what is brought forth for all, and you should well know, instead of pretending the lie that you have, that OFTEN in this computing world something worse is shoved down everyone's throats because of that.
Your infantile "pure minded rhetoric" is just that, a big pile of BS, as usual.
Reply

PS - I quite understand in your "only framerates matter" deranged high end video red rager gaming card mindset, THE ONLY GAMES THAT MATTER FOR YOU IN YOUR BS ARGUMENT are PC games that wind up on ANANDTECH PC videocard reviews. ROFLMAO
YES IT'S TRUE!
Hence "two games!", only for PC, nothing else, is "your standard".
And IT'S DERANGED, given the facts. Reply

Unlike you, obviously, I've tried NVIDIA and ATI, and both are fine. I've never suggested I want ATI to win, and I don't know why you continue to think that. PhysX is "used" in tons of games... where does it actually matter? How many of the games you list sold more than 100K copies? How many are actually good games? How often does it make a discernible and positive difference? (More trash flying around isn't really better.)

Let's also not count chickens before they hatch and remove games that haven't even shipped. You know, sort of like removing GT300 from benchmark comparisons until it's actually available.

That list includes games that had super lame PhysX (all the Tom Clancy titles for sure), games that are completely trivial (skeeball anyone?), games where it degrades performance relative to not enabling it (umm, that's most of the titles). Unreal Tournament 3 has PhysX support... but only on the released-after-the-patch levels, and even then only the Tornado level is actually impressive visually. Almost no one played/plays these levels.

Since you've got so much time to promote NVIDIA, tell us all which games on this list are "Must Haves" and make good use of PhysX. I said there were "two games where it has mattered: Batman and Mirror's Edge". Now put your fanboy hat on and tell us which games in that list. I'm sure that 50 Cent, 10 Balls 7 Cups, Jericho, Cellfactor, Rock'n'Roll Dice, and Untra Tubes are at the top of the sales/preorder charts! Reply

The other SEVERE PROBLEM your massive bias holds is this:
-
Just a few months ago, HERE, PhysX was given a run in Mirror's Edge, and NEVER BEFORE SEEN or IMPLEMENTED effects were present.
Anand loved it, couldn't get away...from the computer, as he said.
The Master declared it.
--
But, when EMERGING TECHNOLOGY from the card company you must absolutely HATE comes forth, for you as a gamer, an advantage even, you have nothing but a big pile of dung for it.
---
Get over there to the other fellow in the discussion and point out how competition brings innovation, right, and that PhysX IS INNOVATION !
---
( Oh, that's right, after your tagteam preached it, you already breached it, and blew cookies all over your economic lessons.)
LOL -= hahahha -
--
I guess this is another case of "NVidia stagnating" in BOTH your minds. (Yes, of course it is)
I do hope your conditions clear up. Reply

Actually, Anand said Mirror's Edge with PhysX was the first (and at the time only) title where it made a palpable difference to the point where turning it off made him miss it somewhat. Actually, here are his exact words: http://www.anandtech.com/video/showdoc.aspx?i=3539... He also goes on to discuss such things as Havok porting to OpenCL and how that won't happen for PhysX. Thanks master of hyperbole; taking things out of context is the sure sign of a weak argument. But yes, NVIDIA was highly "innovative" when they bought out a competitor because they couldn't do any better -- a competitor that to date had released hardware no one wanted and a few titles that didn't matter.

You're so set on making me an ATI fanatic and throwing about words like hate and sadness and whatever. It's pathetic and funny that you're so delusional that you could even pretend to think that way. I mean, obviously you don't really think that and you're just some troll trying to stir up crap, but it boggles the mind that you have this much energy to put into spewing vitriol.

Love ATI? Hardly. I've ripped on their mobile components quite thoroughly for the past two years. After all, I review laptops so that's my area of expertise, and up until HD 46xx they had nothing compelling on laptops for years. Even the 4000 series on laptops is marred by their lack of mobile reference drivers, something I've praised NVIDIA for releasing (after saying it was absolutely necessary for the year or two before it happened).

So yes, put your blinders on and act as though you have any idea whatsoever about what people think. Someone disagrees with you and they become spawn of satan, worshiping all that is ATI. It's a reflection of your own insecurities that you can't accept the good of the competitor while at the same time pointing out flaws. Go check into a mental institute, or head back over to nZone and be secure with others that can't be objective when it comes to graphics cards. Reply

No, it's the fact you tell LIES, and always in ati's favor, and you got caught, over and over again.
That is WHAT HAS HAPPENED.
Now you catch hold of your senses for a moment, and supposedly all the crap you spewed is "ok". Reply

Once again, all that matters to YOU, is YOUR games for PC, and ONLY top sellers, and only YOUR OPINION on PhysX.
However, after you claimed only 2 games, you went on to bloviate about Havok.
Now you've avoided that issue entirely. Am I to assume, as you have apparently WISHED and thrown about, that HAVOK does not function on NVidia cards? NO, QUITE THE CONTRARY !
--
What is REAL, is that NVidia runs Havok AND PhysX just fine, and not only that but ATI DOES NOT.
Now, instead of supporting BOTH, you have singled out your object of HATRED, and spewed your infantile rants, your put downs, your empty comparisons (mere statements), then DEMAND that I show PhysX is worthwhile, with "golden sellers". LOL
It has been 1.5 years or so since the Ageia acquisition, and of course, game developers turning anything out in just 6 short months are considered miracle workers.
The real problem of course for you is ATI does not support PhysX, and when a rogue coder made it happen, NVidia supported him, while ATI came in and crushed the poor fella.
So much for "competition", once again.
Now, I'd demand you show where HAVOK is worthwhile, EXCEPT I'm not the type of person that slams and reams and screams against "a perceived enemy company" just because "my favorite" isn't capable, and in that sense, my favorite IS CAPABLE.
Now, PhysX is awesome, it's great, it's the best there is, and that may or may not change, but as for now, NO OTHER demonstrations (you tube and otherwise) can match it.
That's just a sad fact for you, and with so many maintaining your biased and arrogant demand for anything else, we may have another case of VHS instead of BETA, which of course, you would heartily celebrate, no matter how long it takes to get there.
LOL
Yes, it is funny. It's just hilarious. A few months ago before Mirror's Edge and Anand falling in love with PhysX in it, admittedly, in the article he posted, we had the big screamers whining ZERO.
Well, now a few months later you are whining TWO.
Get ready to whine higher. Yes, you have read about the uptick in support ? LOL
You people are really something.
Oh, I know, CUDA is a big fat zero according to you, too.
(please pass along your thoughts to higher education universities here in the USA, and the federal government national lab research facilities. Thanks)
Reply

Yes, another excuse monger. So you basically admit the text is biased, and claim all readers should see the charts and go by those. LOL
So when the text is biased, as you admit, how is it that the rest, the very core of the review is not ? You won't explain that either.
Furthermore, the assumption that competition leads to something better in videocard technology more quickly fails a basic test: there is a limit to how fast technology proceeds forward, since scientific breakthroughs must come, and often don't come - for instance, new energy technologies, still struggling after decades to make a breakthrough, with endless billions spent and not much to show for it.
Same here with videocards, there is a LIMIT to the advancement speed, and competition won't be able to exceed that limit.
Furthermore, I NEVER said prices won't be driven down by competition, and you falsely asserted that notion to me.
I DID however say, ATI ALSO IS KNOWN FOR OVERPRICING. (or rather unknown by the red fans, huh, even said by omission to have NOT COMMITTED that "huge sin", that you all blame only Nvidia for doing.)
So you're just WRONG once again.
Begging the other guy to "not argue" then mischaracterizing a conclusion from just one of my statements, ignoring the points made that prove your buddy wrong period, and getting the body of your idea concerning COMPETITION incorrect due to technological and scientific constraints you fail to include, is no way to "argue" at all.
I sure wish there was someone who could take on my points, but so far none of you can. Every time you try, more errors in your thinking are easily exposed.
A MONOPOLY, let's take for instance the old OIL BARONS, was not stagnant, and included major advances in search and extraction, as Standard Oil history clearly testifies to.
Once again, the "pat" cliché is good for some things ( for instance competing drug stores, for example ), or other such things that don't involve inaccessible technology that has not been INVENTED yet.
The fact that your simpleton attitude failed to note such anomalies is clearly evidence that once again, thinking "is not required" for people like you.
Once again, the rebuttal has failed. Reply

This is just sad, and I'm no fanboy. I really wanted a 5870, but only with 100% more speed than a GTX285 - not a lousy 33%. Definitely not worth me upgrading, so I guess ATI saved me some money. I'm certain that my 3 GTX280's in Tri-SLI will destroy 2 5870's in CF - although with slightly less compatibility (an important advantage for ATI, but not nearly enough). Reply

I have been a regular at Tomshardware for a while now, and keep coming back to Anandtech time and again to read reviews I have already read on other sites, and this one is by far the best I have read so far (guru3d, toms, firing squad, and many others).

The 5870 looks awesome, but from an upgrade point of view, I guess my system will not really benefit from moving on from E7200 @3.8ghz 4gb 1066, HD4870 @850mhz 4400mhz on 1680x1050.

Such a shame that I don't have a larger monitor at the moment or I would have jumped immediately.

Looks like the path is q9550 and 5870 and 1920x1200 monitor or larger to make sense, then might as well go i7, i5, where do you stop..

Well done ATI, well done! But if history repeats, the Nvidia 3xx chip will be mindblowing by comparison! Reply

Ryan, any chance you could run Viewperf or other pro-app benchmarks please? Some professionals use consumer hardware as a cheap way of obtaining reasonable performance in apps like Maya, 3DS Max, ProE, etc., so it would be most interesting to know how the 5870 behaves when running such tests, and how it compares to Quadro and FireGL cards. Pro-series boards normally have better performance for ops such as antialiased lines via different drivers and/or different internal firmware optimisations. Someday I figure perhaps a consumer card will be able to match a pro card purely by accident.

Sorry if this has already been asked but does the 5870 support audio over Display Port? I am holding out for a card that does such a thing. I know it does it for HDMI but also want it to do it for Display Port. Reply

With a full double the power and transistors and everything else, including optimizations that should get more bang for the buck out of each one of those, why are we not seeing a full double the performance in games compared to the previous generation of cards? Reply

Umm, because if you actually look at the charts, it's not "double everything".
In fact, it's not double THE MOST IMPORTANT THING, bandwidth.
For pete's sake, an OC'ed GTX260 core 192 gets to 154+ GB/s of bandwidth rather easily, surpassing the 153.6 GB/s of ati's latest and greatest.
So, you have barely increased ram speed, same bus width... and the transistors on die are used up in "SHADERS" and ROPS....etc.
Where is all that extra shader processing going to go ?
Well, it goes to SSAA usage for instance, which provides ZERO visual quality improvement.
So, it goes to "cranking up the eye candy settings" at "not a very big framerate improvement".
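For what it's worth, the 153.6 GB/s figure being argued over here falls straight out of the 5870's published memory configuration (a quick sketch; the 1200 MHz memory clock and 256-bit bus are from the card's public specs):

```python
# Memory bandwidth = effective transfer rate x bus width.
# GDDR5 moves data four times per base memory clock (quad data rate).
base_clock_mhz = 1200     # HD 5870 memory clock
transfers_per_clock = 4   # GDDR5
bus_width_bits = 256

effective_mt_per_s = base_clock_mhz * transfers_per_clock          # 4800 MT/s
bandwidth_gb_per_s = effective_mt_per_s * bus_width_bits / 8 / 1000

print(bandwidth_gb_per_s)  # 153.6
```

Swapping in a wider bus or a different memory type (e.g. double-pumped GDDR3 on a 448-bit GTX 260) just changes the two multipliers, which is why cards with very different designs can land at similar bandwidth numbers.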
--
So they just have to have a 384 or a 512 bus they are holding back. I dearly hope so.
They've already been losing a BILLION a year for 3+ years in a row, so the cost excuse is VERY VERY LAME.
I mean go check out the nvidia leaked stats, they've been all over for months - GDDR5 and a 512 bit bus, with MIMD instead of the 5870's SIMD.
If you want DOUBLE the performance wait for Nvidia. From GDDR3 to GDDR5, like going from 4850 to 4890, AND they (NVIDIA) have a whole new multiple instruction super whomp coming down the pike that has never been done before, on their hated "gigantic brute force cores" (even bigger in the 40nm shrink - lol) that generally run 24C to 15C cooler than ati's electromigration heat generators.
---
So I mean add it up. Moving to GDDR5, moving to MIMD, moving to 40nm, moving with a 512 bit bus and AWESOME bandwidth, and the core is even bigger than the hated monster GT200. LOL
We're talking EPIC palmface.
--
In closing, buh bye ati ! Reply

Oh mister, if we're waiting, that means NO BUYING the 5870, WAIT instead.
Oh, yeah, no worries, it's not available.
---
Now, when my statements prove to be true, and your little crybaby snark with NO FACTS used for rebuttal are proven to be wasted stupidity, WILL YOU LEAVE AS YOU STATED YOU WANT ME TO ?
That's what I want to know. I want to know if your little wish applies to YOU.
Come on, I'll even accept a challenge on it. If I turn out to be wrong, I leave, if not you're gone.
If I'm correct, anyone waiting because of what I've pointed out, has been given an immense helping hand.
Which BY THE WAY, the entire article FAILED TO POINT OUT, because the red love goes so deep here.
-
But, you, the little red rooster ati fanner, wants me out.
ROFL - you people are JUST INCREDIBLE, it is absolutely incredible the crap you pull.
NOW, LET US KNOW WHAT LEAKED NVIDIA STATS I GOT INCORRECT WON'T YOU!?
No of course you won't !
Reply

Another problem I have with people like you is the unerring desire to rant and rave without reading things through. I said wait for GT300 before doing a proper comparison. Have you already forgotten the mess that was NV30? Paper specs do not necessarily equal reality. When the GT300 is properly previewed, even with an NDA in place, we can all judge for ourselves. People have a choice to buy what they like regardless of what you or I say.

I'm not an ATI fanboy. I expended plenty of thought on what parts to get when I upgraded a few months back and that didn't just include CPU but motherboard and graphics. I was very close to getting a higher end Core2 Duo along with an nVidia graphics card; at the very least I considered an nVidia graphics card even when I decided on an AMD CPU and motherboard. In the end I felt I was getting better value by choosing an ATI solution. Doesn't make me a fanboy just because my money didn't end up on the nVidia balance sheet.

I'll take back the little comment about letting the door hit you on the way out. It wasn't designed to tell you to go away and not come back again, so my bad. I was annoyed at your ability to just attack a specific brand without any apparent form of objectivity. If you hate ATI, then you hate ATI, but do we really need to hear it all the time?

If the information you've posted about the GT300 is indeed accurate and comparable to what we've been told about the 58x0 series, then that's great, but you're going to need to lay it out in a more structured format so people can digest it more readily, as well as lay off the constant anti-ATI stance, because appearing biased is not going to make people more receptive to your viewpoint. I remain sceptical that your leaked specs will end up being correct, but in the end, GT300 is on its way and it'll be a monster regardless of whatever information you've posted here. I'm not going to pretend I know anything technical about GT300, but you must realise what you've essentially done here: you've slated a working, existing product line that is being distributed to vendors as we speak (albeit much more slowly than ATI had intended), and you're attacking people for being interested in it over the GT300, which hasn't even been reviewed yet, partly because you think the product is vapourware (which isn't really the case, as people are getting hold of the 5870, just at a lower rate than ATI would like). Some people will choose to wait, some people will jump on the 58x0 bandwagon right now, but it's not for you to decide for them what they should buy.

What a LOAD OF CRAP.
I don't have to outline anything, remember, ALL YOU PEOPLE CARE ABOUT IS GAMING FRAMERATE.
And that at "your price point" that doesn't include "the NVIDIA BALANCE SHEET" - which is of course, the STRANGEST WAY for a reddie to put it.
YOU JUST WANT ME TO SHUT UP. YOU DON'T WANT IT SAID. WELL, I'M SAYING IT AGAIN, AND YOU FAILED TO ACCEPT MY CHALLENGE BECAUSE YOU'RE A CHICKEN, AND A CENSOR !
---
Oh, do we have to hear it... blah blah blah blah...
--
YES, SINCE THIS VERY ARTICLE WAS ABSOLUTELY IRRESPONSIBLE IN NOT PROPERLY ASSESSING THE COMPETITION. Reply

If a game ran almost entirely on the GPU, the scaling would be more of what you expect. You can put in a new GPU, but the CPU is no faster, main memory is no faster or bigger, the hard disk is no faster, PCIE is no faster, etc.

The game code itself also limits scaling. For example the texture size can exceed the card's memory footprint, which results in performance sapping texture swaps. Each game introduces different bottlenecks (we can't solve them all).

We do our best to get linear scaling, but the fact is that we address less than a third of the game ecosystem. That we do better than 33% out of a possible 100% improvement is I think a testimony to our engineers. Reply
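The point about non-GPU bottlenecks can be put in Amdahl's-law form: only the GPU-bound share of each frame's time gets faster. A rough sketch (the 2/3 GPU-bound split below is a made-up illustrative number, not a measurement):

```python
def fps_after_gpu_speedup(fps, gpu_fraction, speedup):
    """Estimate the new frame rate when only the GPU-bound share of
    frame time gets faster (CPU, memory, disk, PCIe stay fixed)."""
    frame_ms = 1000.0 / fps
    gpu_ms = frame_ms * gpu_fraction   # part the new GPU accelerates
    other_ms = frame_ms - gpu_ms       # everything the GPU can't help
    return 1000.0 / (other_ms + gpu_ms / speedup)

# Hypothetical 60 fps game, 2/3 of frame time on the GPU, a 2x faster card:
print(fps_after_gpu_speedup(60.0, 2 / 3, 2.0))  # ~90 fps, not 120
```

In other words, doubling GPU throughput only doubles frame rate when gpu_fraction is 1.0, which is why a "full double" card lands well short of 2x in real games.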

Why no load temp of 5870 in Crossfire??
Load temp is much more important than idle temp.
There is lots of uninteresting stuff like sound level at idle with 5870 in crossfire, but the MOST IMPORTANT thing is missing: load temp with 5870 in crossfire. Reply

I just pulled up the chart on the 4870 CF, and although the 4870x2 was low 400's on load system power usage, the 4870 CF was 722 watts !
So, I think your question may have some validity. I do believe the cards get a bit hotter in CF, and then you have the extra items on the PCB, the second slot used, the extra ram via the full amount on each card - all that adds up to more power usage, more heat in the case, and higher temps while the cards communicate with each other (resends for data, bus pauses and waits, etc.).
So there is something to your question.
---
Other than all that the basic answer is "red fan review site".
The ATI cards are HOTTER than the nvidia under load as a very, very wide general statement, that covers almost every card they both make, with FEW exceptions.
Reply

Thanks T2k, but the only cards that are in Crossfire in that review are the 58XX's. There are no other comparisons to cards in CF or SLI. Since Ryan included some of the most recent nVidia cards in SLI I was hoping to find the 48XX's in CF. Reply

Basically the rule of thumb seems to be that at 1920x1200 a single 5870 is still slightly slower than 4870X2 and probably slightly faster than a 4850X2 2GB.
I own the latter so I will wait this time - either they lower the initial price of the 5870X2 or they release a 5850X2, otherwise I'll pass because single 5870 is simply OVERPRICED as it is already.

Seriously: we get a very nice technical background section - then you top it with this more than idiotic collection of games for testing, leaving out the 4850X2 2GB and 5850, using TWO stupid CryEngine-based PoS from Crytek, the most un-optimized code producers, plus WoW, which even you admit is CPU-bound, but no CoD:WaW, no Clear Sky, no UT3 or indeed a single current Unreal Engine-based game?

The benchmarking part is ALMOST WORTHLESS; the only useful info is that unless you go above 1920x1200 the 4870X2 pretty much owns the 5870's @ss as of now. Reply

OK, I missed that (probably because I found the game shots ugly and became uninterested.)
But how about ET:QW? Yes, it's not the best-looking game, but it is still popular. Not to mention World at War, which is both great-looking and crazy popular, or Clear Sky, which is a very demanding DX10.1 game. Where is Fallout 3? Where is Modern Warfare?
FFS, the most demanding games are the quick action shooters, and we FPS players are the first ones to upgrade to new cards...
Reply

How would ET:QW be a good benchmark? Last I checked, it's still limited to the 30 FPS animations, which makes running it at more than 30 FPS pointless because everything will look jerky.

I agree something like the CoD games should be included for comparison's sake, but they're hardly a good benchmark or taxing on a system. QW does not fall into the same category though, it has a smaller active playerbase than even L4D which lost a lot of players due to the lack of updates. Reply

quote:How would ET:QW be a good benchmark? Last I checked, it's still limited to the 30 FPS animations, which makes running it at more than 30 FPS pointless because everything will look jerky.
I agree something like the CoD games should be included for comparison's sake, but they're hardly a good benchmark or taxing on a system. QW does not fall into the same category though, it has a smaller active playerbase than even L4D which lost a lot of players due to the lack of updates.

and so on. BTW, L4D is a passing fart in the wind while ET:QW is still going strong, stronger than UT3 (unfortunately, because UT3 looks 10x better and the old ONS mode was awesome, but Epic fucked it up in UT3)

I see by your review links that the 5870 just doesn't do well in ET:QW; everything else is closer to it, and there it gets beaten worse. At 2560 with high AA & AF the GTX 295 slams it by 20%, so, of course, it was left out, in order to "pump up the perceived red numbers".
-
It's "not a good appearance".

Is this card any closer to saturating the PCI Express 2.0 x16 slot? That was one of the big arguments in the P55 vs. X58 debate. Would there be any loss of performance using these in Crossfire on P55 compared to X58?
Otherwise, great article. I think I'll be hanging on to my 4890 for a while though. Reply

Not just a review but an excellent technical discussion. It must have been a lot of work. Thanks for all of that, and do please follow up where you can on people's requests for more info on this very important card.

Kudos to ATI/AMD for such an achievement. It looks like Nvidia is in big trouble.

rofl - Nvidia is in big trouble ? LOL
Have you looked at the charts on wiki that predicted what this review shows ? Maybe you should take a gander at the Nvidia chart, now, and cry.
--
It is a nice review, although biased red here as usual.
--
I'd like to mention that while he fretted about heat because of the half-size vent on the back of the card, I didn't notice any overt, despairing mention of the TWO rectangular exhaust ports on the fan end BLOWING HEAT INTO THE CASE... which of course is deserved. Nice how that was held back, as if overlooked. (Boy, what trouble for NVIDIA! hahha)
Next we'll be told the red rooster tester put his hand next to the two internal exhaust ports, and "the air felt rather cool," so "rest assured not much heat is being pushed into your case," and "this doesn't matter."
See, the trouble is already in the article, the trouble it took to spin it just so for ATI... lol... it's hilarious!
Reply

No, this is me, the same me I've always been.
Would you like to comment on the two internal exhaust ports of the 5870 that put sweltering heat into your case ?
I guess you tried to ignore that entirely.
You can always insult me again and avoid commenting on the heating exhaust ports ramping up your case temps on the 5870...
That of course would be "the right thing to do".
Perhaps call me crazy, and avoid the topic, right, you're good at that, huh.
Reply

ClownPuncher gets a cookie. This is exactly correct; the actual fan shroud is sealed so that air only goes out the front of the card, to the outside of the case. The holes do serve a cooling purpose, though: they allow airflow to help cool the bits of the card that aren't hooked up to the main cooler (various caps and what have you). Reply

Ok good, now we know.
So the problem now moves to the tiny half-size exhaust port on the back. Did you stick your hand there and see how much it is blowing? Does it whistle through there? lol
The same amount of air (or a bit less) in half the exit space... that's going to strain the fan and/or reduce flow, no matter what anyone claims to the contrary.
It sure looks like ATI is doing a big favor to aftermarket cooler vendors.

Developers aren't pushing graphics anymore. It's not economical; PC game support is slowing down, and everything is console now, which is DX9. What purpose does this ATI card serve, with DX11 and all this other technology that games won't even make use of 2 years from now?

So you're echoing what NVIDIA recently said, when they claimed DX11/gaming on the PC isn't all that (anymore)? I guess NVIDIA can close up shop (at least the gaming-relevant part of it) now and focus on GPGPU. Why wait for GT300 as a gamer?

Oh right, it's gonna be blasting past the 5xxx and suddenly DX11 will be the holy grail again... I see how it is. Reply

rofl - It's great to see red roosters not crowing and hopping around flapping their wings and screaming NVIDIA is going down.
Don't take any of this personally, except the compliments; you're doing a fine job.
It's nice to see you doing my usual job, albeit from the other side, so allow me to compliment your fine perceptions. Swelteringly smart.
But, now, let's not forget how ambient occlusion got poo-pooed here and shading in the game was said to be "an irritant" when Nvidia cards rendered it with just driver changes for the hardware. lol
Then of course we heard endless crowing about "tesselation" for ati.
Now it's what, SSAA (rebirthed), and Eyefinity, and we'll hear how great it is for some time to come. Let's not forget the endless screeching about how terrible and useless PhysX is by Nvidia, but boy when "open standards" finally gets "Havok and Ati" cranking away, wow the sky is the limit for in game destruction and water movement and shooting and bouncing, and on and on....
Of course it was "Nvidia's fault" that "open havok" didn't happen.
I'm wondering if the 30" top resolution will now be "all there is!" for the next month or two until NVIDIA comes out with their next generation. Because that was quite a trick, switching from the top 30" rez DOWN to 1920x when NVIDIA put out their 2560x GTX 275 driver and it whomped ATI's card at 30" 2560x but switched places at 1920x, which was then of course "the winning rez" since ATI was stuck there.
I could go on but you're probably fuming already and will just make an insult back so let the spam posting IZ2000 or whatever it's name will be this time handle it.
BTW, there's a load of bias in the article, and I'll be glad to point it out in another post. But the reason the red rooster rooting is not going beyond any sane notion of "truthful," or even truthiness, is that this 5870 ATI card is already perceived as "EPIC FAIL"!
I cannot imagine this is all Ati has, and if it is they are in deep trouble I believe.
I suspect some further releases with more power soon.

is DirectX 11 going to be as worthless as 10? in terms of being used in any meaningful way in a meaningful number of games?

my 2 4850's are still keeping me very happy in my 'ancient' E8500.

curious to see how this compares to whatever nvidia rolls out, probably more of the same, better in some, worse in others, bottom line will be the price.... maybe in a year or two i'll build a new system.

Well it's certainly going to be less useful than PhysX, which is here said to be worthless, but of course DX11 won't get that kind of dissing, at least not for the next two months or so, before NVidia joins in.
Since there's only 1 game "kinda ready" with DX11, I suppose all the hype and heady talk will have to wait until... until... uhh.. the 5870's are actually available and not just listed on the egg and tiger.
Here's something else in the article I found so very heartwarming:
---
" Wrapping things up, one of the last GPGPU projects AMD presented at their press event was a GPU implementation of Bullet Physics, an open source physics simulation library. Although they’ll never admit it, AMD is probably getting tired of being beaten over the head by NVIDIA and PhysX; Bullet Physics is AMD’s proof that they can do physics too. "
---
Unfortunately for this place, one of my friends pointed me to this little exposé that shows ATI uses NVIDIA CARDS to develop "Bullet Physics" - ROFLMAO
-
" We have seen a presentation where Nvidia claims that Mr. Erwin Coumans, the creator of Bullet Physics Engine, said that he developed Bullet physics on Geforce cards. The bad thing for ATI is that they are betting on this open standard physics tech as the one that they want to accelerate on their GPUs.

"ATI's Bullet GPU acceleration via OpenCL will work with any compliant drivers. We use NVIDIA GeForce cards for our development and even use code from their OpenCL SDK; they are a great technology partner," said Erwin.

You seem to have all this time on your hands to go around the net looking for links to spread FUD... sitting on Newegg watching these cards come in and out of stock like you have a vested interest in seeing ATI fail. Unlike any sane person, it appears you want NVIDIA to have a monopoly on the industry?

Maybe you are privy to some inside info over at nvidia and know they have nothing to counter the 5870 with?

Maybe the cash they paid you to spin these BS comments would have been better spent on R&D? Reply

That's a nice personal, grating, insulting rip; it's almost funny, too.
---
The real problems remain.
I bring up this stuff because of course, no one else will, it is almost forbidden. Telling the truth shouldn't be that hard, and calling it fairly and honestly should not be such a burden.
I will gladly take correction when one of you noticing insulters has any to offer. Of course, that never comes.
Break some new ground, won't you ?
I don't think you will, nor do I think anyone else will - once again, that simply confirms my factual points.
I guess I'll give you a point for complaining about delivery, if that's what you were doing, but frankly, there are a lot of complainers here no different - let's take for instance the ATI Radeon HD 4890 vs. NVIDIA GeForce GTX 275 article here.
http://www.anandtech.com/video/showdoc.aspx?i=3539 Boy, the red fans went into rip mode, and Anand came in and changed the article's (Derek's) words, and hence the "result," from "GTX 275 wins" to "ATI 4890 wins".
--
No, it's not just me; it's that the bias here consistently leans to ATI, and whether it's rooting for the underdog that causes it, or the brooding undercurrent of hatred that surfaces for "the bigshot," "greedy," "ripoff artist," "NVIDIA overchargers," "industry controlling and bribing," "profit demon" NVIDIA, who knows...
I'm just not afraid to point it out, since it's so sickening, yes, probably just to me, "I'm sure".
How about this glaring one I have never pointed out even to this day, but will now:
ATI is ALWAYS listed first, or "on top," and NVIDIA second, and it is, no doubt, in the "reviewers' minds" because of "the alphabet," and "here we go in alphabetical order."
A very, very convenient excuse that quite easily causes a perception bias, one that is quite marked for the readers.
But, that's ok.
---
So, you want to tell me why I shouldn't laugh out loud when ATI uses NVIDIA cards to develop Bullet, their "PhysX" competition?
ROFLMAO
I have heard 100 times here (from guess whom) that the ati has the wanted "new technology", so will that same refrain come when NVIDIA introduces their never before done MIMD capable cores in a few months ? LOL
I can hardly wait to see the "new technology" wannabes proclaiming their switched fealty.
Gee sorry for noticing such things, I guess I should be a mind numbed zombie babbling along with the PC required fanning for ati ? Reply

You dedicated a full page to the flawless performance of its A/V output, but didn't mention it in the "features" part of the conclusion. It's a very powerful feature, IMO. Granted, this card may be a tad too hot and loud to find a home in a lot of HTPCs, but it's still an awesome feature and you should probably append your conclusion... just a suggestion though.

Ultimately, I have to admit to being a little disappointed by the performance of this card. All the Eyefinity hype and playable framerates at massive 7000x3000 resolutions led me to believe that this single card would scale down and simply dominate everything at the 30" level and below. It just seems logical, so I was taken aback when it was beat by, well, anything else. I expected the 5870 and 5870CF to be at the top of every chart. Oh well.

Yes, because the ATI core "really sucks." It needs GDDR5 and much higher MHz to compete with NVIDIA and their, what, over-one-year-old core. LOL. Even their own 4870X2.
Or the 3-year-old G92 vs. the DDR3 "4850," the "top core" before yesterday (the ATI top core, minus the well-done MHz-bumped REBRAND ring around the 4890).
That's the sad, actual truth. That's the truth many cannot bear to bring themselves to realize, and it's going to get WORSE for them very soon, with NVIDIA's next release, with GDDR5, a 512-bit bus, and the NEW TECHNOLOGY BY NVIDIA THAT ATI DOES NOT HAVE: MIMD-capable cores.
Oh, I can hardly wait, but you bet I'm going to wait, you can count on that 100%.

Thank you Wreckage, now, I was going to say draw up your shields, but it's too late, the attackers have already had at it.
--
Thanks for saying what everyone was thinking. You are now "a hated fanboy," "a paid shill" for the "corporate greedy monster rip-off machine," according to the real fanboy club, the ones who can't tell the truth no matter what and prefer their fantastical spins and lies. Reply

So if YOU compare one loud ATI fan design to another, equally loud ATI card
(they're all quieter than the 5870, but we'll make believe for you for now),
and they're both loud, then anyone complaining about one of them being loud is "an NVIDIA fanboy" because he isn't aware of the other loud-as-heck ATI cards, which of course make another loud one "just great" and "not loud." LOL

It's just amazing, and if it was NV:

" This bleepity bleep fan and card are like a leaf blower again, just like the last brute force monster core power hog but this **tard is a hurricane with no eye."

But since it's the red cards that are loud, as YOU pointed out in the plural (not the singular, like the commenter), according to you HE's the FANBOY, because he doesn't like it. lol

ULTIMATE CONCLUSION: The commenter just told the truth; he was hoping for more, but was disappointed. YOU, the raging red, jumped his case and pointed out that the ATI cards were loud prior... and so he has no right to be a big green whining fanboy...
ROFLMAO
I bet he's a "racist against reds" every time he validly criticizes their cards, too.
---
the 5870 is THE LOUDEST ATI CARD ON THE CHART, AND THE LOUDEST SINGLE-CORE CARD.
--
Next, the clucking rooster will whiplash around and flap his stubby wings at me, claiming that at idle it only draws 27 watts and is therefore quiet.
As usual, the sane would then mention it will be nice not playing any 3D games with a 3D gaming card, and enjoying the whispery hush.
--
In any case:
Congratulations, you've just won the simpleton's red rooster raving rager thread contest medal and sarcastic unity award. (It's as real as any points you've made.)

Anyhow, thanks; you made me notice THE 5870 IS THE LOUDEST CARD ON THE CHARTS. I was too busy pointing out the dozen-plus other major fibs to notice.
It's the loudest ATI card, ever.
Reply

I thought the technical portion of your review was well written. It is clear, concise, and written to the level of understanding of your target audience. However, I am less than impressed with your choice of benchmarks. Why is everything run at 4xAA, 16xAF? Speaking for most PC gamers, I would have maxed the settings in Crysis Warhead before adding AA and AF. Also, why so many console ports? Neither I nor anyone else I know personally has much interest in console ports (excluding RPGs from Bethesda). Where is Stalker: Clear Sky? As you note, its sequel will be out soon. Given the short amount of time they had to work with DX11, I imagine it will run similarly to Stalker: Call of Pripyat. Also, where is ArmA II? Other than Crysis and Stalker, it is the game most likely to be constrained by the GPU.

I don't want to sound conspiratorial, but your choice of games and AA/AF settings closely mirror AMD's leaked marketing material. It is good that you put their claims to the test, as I trust Anandtech as an unbiased review site, but I don't think the games you covered properly cover the interests of PC gamers. Reply

For the settings we use, we generally try to use the highest settings possible. That's why everything except Crysis is at max quality and 4xAA 16xAF (bear in mind that AF is practically free these days). Crysis is the exception because of its terrible performance; Enthusiast level shaders aren't too expensive and improve the quality more than anything else, without driving performance right off a cliff. As far as playing the game goes, we would rather have AA than the rest of the Enthusiast features.

As for our choice of games, I will note that due to IDF and chasing down these crazy AA bugs, we didn't get to run everything we wanted to. GRID and Wolfenstein (our OpenGL title) didn't make the cut. As for Stalker, we've had issues in the past getting repeatable results, so it's not a very reliable benchmark. It also takes quite a bit of time to run, and I would have had to drop (at least) 2 games to run it.

Overall our game selection is based upon several factors. We want to use good games, we want to use popular games so that the results are relevant for the most people, we want to use games that give reliable results, and ideally we want to use games that we can benchmark in a reasonable period of time (which means usually having playback/benchmark tools). We can't cover every last game, so we try to get what we can using the criteria above. Reply

Popularity and quality are strong arguments for World of Warcraft, Left 4 Dead, Crysis, Far Cry, the newly released Batman game... and *maybe* Resident Evil (though it has far greater popularity among console gamers). However, HAWX? Battleforge? I would never have even heard of these games had I not looked them up on Wikipedia. In retrospect I can see you using Battleforge due to it being the only DirectX 11 title, but I still don't find your list of games compelling or comprehensive.

To me *PC* gaming needs to offer something more than simple action to justify its cost of entry. In the past this included open worlds, multiplayer, greater graphical realism and attempts at balancing realistic simulation with entertaining game play. Console gaming has since offered the first two, but the latter are still lacking.

It's games like Crysis, Stalker and ArmA II along with the potential of modding that attract people to PC gaming in the first place... Reply

We do have Cyberlink's software, but as it uses different code paths, the results are near-useless for a hardware review. Any differences could be the result of hardware differences, or it could be that one of the code paths is better optimized. We would never be able to tell.

Our focus will always be on benchmarking the same software on all hardware products. This is why we bent over backwards to get something that can use DirectCompute, as it's a standard API that removes code paths/optimizations from the equation (in this case we didn't do much better since it was a NVIDIA tech demo, but it's still an improvement). Reply

roflmao - Gee, no more screaming that the 4850X2 and the 4870X2 are best, without pointing out the two GPUs needed to get there.
--
Nonetheless, this 5870 is EPIC FAIL no matter what, as we see from the disappointing numbers. We all see them, and it's not good.
---
Problem is, NVIDIA has the MIMD multiple-instruction breakthrough technology, never used before, that according to reports is an AWESOME advantage, plus they are moving to GDDR5 with a 512-bit bus!
--
So what is in the works is an absolute WHOMPING that BIG GREEN NVIDIA is going to deliver down on ATI, and the poor numbers here, compared to what was hoped for and hyped over (although even PREDICTED by the red fan Derek himself in one portion of one sorrowful and depressed sentence on this site), are just one step closer to that nail in the coffin...
--
Yes, I sure hope ATI has something major up its sleeve, like a 512-bit memory bus card coming, the 5870Xmem...
I find the speculation that ATI "mispredicted" the bandwidth needs to be utter nonsense. They are 2-3 billion in the hole from the last few years; with "all these great cards," they still lose money on every single sale. So they either cannot go to a higher bit width, or they don't want to, or they are hiding it for the next "strike at NVIDIA" release. Reply

So you're comparing this product with a not-yet-released product and saying that the not-yet-released product is going to trounce it, without any facts to back that up? Do you have the hardware? If not, then you're simply ranting.

Will the GT300 beat out the 5870? I dunno, probably. If it didn't, that would imply that the move from GT200 to GT300 was a major disappointment for NVidia.

I think that EPIC FAIL is completely ludicrous. I can see "epic fail" applied to the Geforce FX series when it came out. I can also see "epic fail" for the Radeon MAXX back in the day. But I don't see the 5870 as "epic fail". If you look at the card relative to the 4870 (the card it replaces), it's quite good - solid 30% increase. That's what I would expect from a generation improvement (that's what the gt200's did over the 9800's, and what the 8800 did over the 7900, etc).

BTW, I'm seeing the 5870 as pretty good; it beats out every single-card NVIDIA product by a reasonable and measurable amount. Sounds like ATI has done well. Or are you considering anything less than 2x the performance of the NVIDIA cards "epic fail"? In that case, you may be disappointed with the GT300 as well. In fact, I'll say that the GT300 is a total fail right now. I mean, jeez! It scores ZERO FPS in every benchmark! That's super-epic fail. And I have the numbers to back that statement up.

Since you are making claims about the epic fail nature of the 5870 based on yet to be released hardware, I can certainly play the same game, and epic fail anything you say based on those speculative musings. Reply

Glad you are having fun.
Just let me know when you disagree, and why. I'm certain your fun will be "gone then", since reality will finally take hold, and instead of you seeing foam, I'll be seeing drool. Reply

If you can't see that the 5870 is the longest card in that pic, then you've got different problems than just lying issues.
Another one, another big fat fib, with a pic included that proves the fib to be a fabricated fib.
It's amazing.
---
"No, everyone, do not believe your lying eyes, it's a standard 10.5" measured in the new red rooster barnyard stick."

Cripes, call International Weights and Measures; we have warped spacetime around the new ATI card, it's so powerful. Reply

Mildly disappointed in the initial results for this card. It is powerful, admittedly, but it has less than twice the performance of the 4870. It also does not use less power. Maybe I was expecting too much. It surely is not a giant step up like the introduction of the 4xxx series cards was. I was hoping for another big leap in performance per watt.

I have a feeling that nVidia's 300 series will beat it when it finally comes out, although hopefully ATI will still be ahead in performance per watt and performance per dollar.
I hope they make a good low/midrange card like the 4670 that does not require external power but has increased performance. Reply

I reckon you'll see performance improve in the coming months, as ATI's compiler improves to use the new instructions more efficiently. The card you see here today will not be the one fighting GT300.

Granted, it won't make huge strides, but I guesstimate it to be 5-10% better, depending on circumstance/game, by the November-to-January timeframe (when GT300 hopefully lands).

NVIDIA's baby will probably still be faster, but if the margin turns out to be slim, we'll have another RV770 vs. GT200 situation playing out. And when the first wave of DX11 games has hit, somewhere next year, it'll be time to reevaluate again ;). Reply

"Sometimes a surprise is nice. Other times it’s nice for things to go as planned for once.

Compared to the HD 4800 series launch, AMD’s launch of the HD 5800 series today is going to fall in to the latter category. There are no last-minute announcements or pricing games, or NDAs that get rolled back unexpectedly. Today’s launch is about as normal as a new GPU launch can get."

If it's normal for all that crap to happen, wouldn't it be ABNORMAL for AMD to have a great launch? :) Reply

LOL
"Today's launch is about as normal as a new GPU launch can get."
I guess he meant "for reviewers," since the "normal launch" today is "listed on the websites for sale, but greyed out, not available, or pre-order" - meaning NO STOCK.
---
Yeah buddy, that is "as normal as any launch" so long as it's the immensely favored ATI launching, and not that hated, greedy NVIDIA...
A normal launch, and you can't buy the card... rofl. Reply

Oh really? Now wait a minute, spin master. When this site whined about a "paper launch," it was Derek who brought up a two- or three-year-old NVIDIA card and cried and whined about it. Then he speculated the GTX 275 was paper, and then "a phantom card."
Well, that didn't happen... and no apologies about it ever came, either.
---
The PAPER launches of late are ATI ATI ATI ! ! !
We have the 4770, and now this one !
----
Gee, when ATI BLOWS IT, we suddenly talk in vague terms about "the companies" having "papery launches" as "the general rule of thumb of how it's done" - and that makes us "not a fanboy"!??!
R0FLMAO !!!!
Yes, of course: since red ATI is bleeding paper launches and the last one from NVIDIA that one can actually cite is YEARS AND YEARS ago, yes, of course, you're correct, it's "unnamed companies in the multiple" that "do it"...
---
I swear to god, I cannot even believe the massive brainwashing that is a gigantic pall all over the place.
---
If I'm WRONG, please be kind, and tell me what nvidia paper launches I missed.... PLEASE LET ME KNOW. Reply

It is/was a good idea to shoot for, as this is most certainly what NVIDIA is going to attempt to achieve. But I am a bit disappointed this card rarely achieved it.

I do like angle-independent AF, though. It should be interesting to see what NVIDIA brings to the table. But kind of like the CPU situation (i5), I am kind of meh. I will say, though, that this has more potential compared to its predecessor than the i5 series does compared to Core 2 Duo. Reply

I thought that was just great: those pretty pictures, and then I get to reading. I see the 4890 and SQUARES. I see the GTX 285, with CIRCLES and an outer rounded octagon.
Then the 5870, with its "perfectly round," angle-independent algorithm, though I still see some distortions.
--
So I get to reading and am told "the 4890 and 285 are virtually the same." I guess the wheel was first made square and rolled as well then as when it became round. No chance the reviewer could tell the truth and remark that NVIDIA had the best implementation until today... NOPE, can't do that!
---
Then, of course, the celebration for the "perfection" of the 5870 and ATI's superb success in the "round" category...
EXCEPT:
We get to the actual implementations, and NO PERCEIVED DIFFERENCE IS VISUALLY THERE. It cannot be seen. The article even states they searched in vain for some game to show the difference. LOL
All that extra effort to, for Pete's sake, show off that ATI superiority... all WASTED EFFORT. But for red roosters I'm certain it was a very exciting quest, titillating: gee, a chance to take down big green...
---
So the bottom line is IT'S A BIG FAT ZERO; even the older, worse ATI implementation is apparently "indistinguishable."
It is remarked that NVIDIA doesn't "officially" support this method in-game, and of course, after much red rooster effort, one finds out why:
THERE IS NO DIFFERENCE in visual quality. Another phantom red "win."
Another reason NVIDIA makes money (why waste it developing worthless crap that makes no difference) while ATI does not.
Yeah, that was so cool.
So happy the "mental ideation of perfection in the card for ati fans" was furthered. ROFL
Reply

1> Was there a separate NDA for the 2 cards?
2> Were there no sample cards given by AMD to reviewers?
3> Did AMD ask reviewers to postpone said reviews due to market supply problems/glitches?
4> Was this a strategy decision by AMD, for marketing or other reasons? Reply

To get enough 5870 cards in the channel for a hard launch, they used every possible die.

There are probably not enough harvested dies to create the 5850 line just yet. And they're not gonna use fully functional ones that can go in a 5870 when supply for them is tight already.

Once the 5850 is launched, demand for them is up and yields matured, they'll have to use fully functional dies to keep supply up, but now they're building up inventory for a hard launch during the coming weeks. Reply

Uhh, just a minute there, feller. The SOFT or PAPER LAUNCH has already hit; the big LAUNCH DATE is today....
Newegg has a big ZERO available... (one PowerColor card was there 30 minutes ago; the other 3 listed are NOT available. I watched them appear last night.)
---
So, when ATI has a "hard launch," they get "many weeks after the launch date" to "ramp up production" and "fill the need."
ROFLMAO
I was here when this site and the red roosters whined about NVIDIA and paper launches, and I believe it was the GTX 275 that was predicted to be PAPER here (not very long ago, in fact), and the article EVEN SPECULATED IT WAS A PHANTOM CARD.
All the red roosters piled on, but... the card was available on launch. It wasn't a PHANTOM, and all that BS was quickly forgotten and shoved into the memory hole like it never happened...
---
Oh, but when it's ATI and not NVIDIA, the 4770 can remain almost pure paper near forever, and this one, golly, can be 95% paper and it's just "getting ready for a hard launch" WEEKS BEYOND the launch date!
ROFLMAO
--
No bias here?!? "Where's da' bias?!?!" said the red rooster (to the green goblin)...
Give me a break. Reply

Just because you can't find it in the States doesn't mean it's a fake launch. And "fake launch"? What are you, a 12-year-old or something? You're like the NVIDIA version of snakeoil. Just go play with your NVIDIA part, m'kay?

Well, since you insulted and mischaracterized me, I came across the reminder about the 4870 paper launch.
Yes, that's correct; this is how ATI rolls: a big fat lying launch date, a piddle of a few cards, then wait a couple of weeks or a month.
--
" The cards are fast, but as many pointed out HD 5870 is not faster than Geforce GTX 295, which is something that many have expected. Radeon 5850 will also start selling in October time, but remember, last summer when ATI launched 4870, the card was almost impossible to buy and weeks if not months after, the availability finally improved. "
http://www.fudzilla.com/content/view/15643/1/
--
Like I've kept saying, the bias is so bad... I keep discovering more big fat ATI blunders that are instead imaginarily ascribed to NVIDIA.
Thanks for the incorrect whining, anger, and standard mindless-chatroom insult, repeated, non-original, and heard ten thousand times; it actually helped me.
I learned ATI blew their 4870 launch with paper lies as well.
You're a great help, friend. Reply

It may be a paper launch in the US, but here somewhere in South East Asia I can already grab a Powercolor 5870 1GB if I so desire. Powercolor is quite aggressive here promoting their ATI 5xxx wares, just like Sapphire did when the 4xxx series came out. Reply

I believe you. I've also seen various flavors of cards not available here in the USA, kept out by import/export deals, global market, manufacturer and vendor controls, and the powers that be, and it doesn't surprise me when it goes the other way.
Congratulations on actually having a non fake launch. Reply

"The engine allows for complete hardware offload of all H.264, MPEG-2 and VC1 decoding".

This has AFAIK never been true for any previous ATi card, and I doubt it has been tested to be true this time either.

I have detailed this problem several times before in the comment section and never got a reply, so I'll summarize: ATi's UVD only decodes AVC streams up to Level 4.1 (i.e. Blu-ray); if you have a 1080p stream with more than 4 reference frames, you're out of luck. NVIDIA does not have this limitation. Reply
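The reference-frame limit being described falls out of the H.264 level definitions: each level caps the decoded picture buffer (DPB) in macroblocks, which in turn caps how many reference frames fit at a given resolution. A quick sketch (DPB values from the spec's level table; this is an illustration of the level math, not a statement about any particular decoder):

```python
# MaxDpbMbs per H.264 level (subset of Table A-1 of the H.264/AVC spec).
MAX_DPB_MBS = {"4.0": 32768, "4.1": 32768, "4.2": 34816,
               "5.0": 110400, "5.1": 184320}

def max_ref_frames(level, width, height):
    """Upper bound on reference frames a level-conformant stream may carry."""
    mbs_per_frame = ((width + 15) // 16) * ((height + 15) // 16)
    return min(MAX_DPB_MBS[level] // mbs_per_frame, 16)  # spec caps refs at 16

# Level 4.1 (the Blu-ray ceiling) allows only 4 reference frames at 1080p,
# while a Level 5.1 encode may use the full 16 -- hence the "funky" clips.
print(max_ref_frames("4.1", 1920, 1080))  # 4
print(max_ref_frames("5.1", 1920, 1080))  # 16
```

So a decoder that only handles Blu-ray-conformant streams effectively tops out at 4 reference frames for 1080p material; anime-style encodes with 13+ refs exceed that by design.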

Yeah, and my GTX 280 has to run full throttle (3D frequency) just to play 720p content, and the temp climbs the same as if it were a 3D game. Yeah, it can decode some *underground* clips from Japan, big deal. Oh, and it does that only for H.264. No VC-1 love there. I am sure you'd think that is not a big deal, but the same applies to those funky clips with 13+ reference frames. Not a big deal. Especially when AMD can decode all 3 major codecs effortlessly (performance 2D frequency instead of 3D frequency).
Reply

Great article, but there is an inconsistency. You say that since there are only 2 TMDS controllers, you can't use both DVI connectors at the same time as the HDMI output for three displays, but then go on to say later that you can use the DVI (x2), DP and HDMI in any combination to drive 3 displays. Which is correct?

Also, can you play HDCP-protected content (a Blu-ray disc, for example) on a panel connected to the DisplayPort connector?

I'd like to see a benchmark using an AMD CPU. I think it was the Athlon II 620 article that pointed out how Nvidia hardware ran better on AMD CPUs and AMD/ATI cards ran better on Intel CPUs. It would be interesting to see how the 5870 stacks up against Nvidia's current gen with other setups. Reply

I've sent a detailed solution for this problem to you (Aero disabled when running a video with UVD acceleration), but the basic fix for all readers here is to just install the HydraVision & Avivo Video Converter packages, and this should fix the problem... Reply

Well, I've been looking forward to reading your review of this new card for a little while now, especially since the realistic-sounding specs leaked out a week or so ago. My first honest impression is that it looks like it'll be a little bit longer till the drivers mature and the game developers figure out some creative ways to bring the processor up to its full potential. In the meantime, I'd have to agree with everyone's conclusion that even 150+ GB/s isn't enough memory bandwidth for the beast; I'm sure it was a calculated compromise while the design was still on the drawing board. Unfortunately, increasing the bus isn't as easy as stapling on a couple more memory controllers; they probably would've had to resort back to a wider ring bus, and they've already been down that road (R600, anyone?). Already knowing how much more die area and power would've been required to execute such a design probably made the decision rather easy: stick with the tested 4-channel GDDR5 setup that worked so well for the RV770 and RV790. As for how much a 6- or 8-channel (384/512-bit) memory controller setup would've improved performance, we'll probably never know; as awesome as it would be, I don't foresee BoFox's idea of ATI pulling a fast one on nVidia and the rest of us by releasing a 512-bit derivative in short order, but crazier things have happened. Reply

Oh, I also noticed they must have known that the new memory controller topology wasn't going to cut it all the time, judging from all the cache augmentations performed. But like I said, given time, I'm sure they'll optimize the drivers to take advantage of all the new functionality. I'm betting that 90% of the low-level functions are still handled identically to the last generation's architecture, and it will take a while till all the hardware optimizations like the cache upgrades are fully realized. Reply

The single most disappointing thing about this card is the noise and heat.

Sapphire have been coming out with some great coolers recently, in their VAPOR-X line. Why doesn't ATI stop using the same dustbuster cooler in a new shiny cover, and create a much, much quieter cooler, so the rest of us don't have to wait for a million OEM variations until there's one with a good cooler (or fuck about and swap the cooler ourselves, but unless you have one that covers the GPU AND the RAM chips, forget it. Those pathetic little sticky pad RAM sinks suck total donkey balls.) Reply

The answer you are looking for is simple: another, more elaborate cooler would raise prices, and vendors don't like that. Remember what happened to the more expensive stock cooler for the 4770? ...;) Reply

When running dual/multi monitors with past-generation cards, the cards would always run at full 3D clocks or else face instabilities, screen corruption, etc. So even if the cards are at idle, they would never ramp down to the 2D clocks to save power, rendering the impressive low idle power consumption numbers useless (especially on the GTX200 series cards).

Golly, another red rooster lie, they just NEVER stop.
Let's take it right from this site, so that your whining about it being NV Zone or Fudzilla or whatever is not your next, dishonest move; this site's own numbers show ATI is a failure in the very terms claimed.
---
NVIDIA w/ GT200 spanks their prior generation by 60.96% !

That's nearly 61% average increase at HIGHEST RESOLUTION and HIGHEST AA AF settings, and it right here @ AT - LOL -

- and they matched the clock settings JUST TO BE OVERTLY UNFAIR ! ROFLMAO AND NVIDIA'S NEXT GEN LEAP STILL BEAT THE CRAP OUT OF THIS LOUSY ati 5870 EPIC FAIL !
http://www.anandtech.com/video/showdoc.aspx?i=3334...
roflmao - that 426.70/7 = 60.96 % INCREASE FROM THE LAST GEN AT THE SAME SPEEDS, MATCHED FOR MAKING CERTAIN IT WOULD BE AS LOW AS POSSIBLE ! ROFLMAO NICE TRY BUT NVIDIA KICKED BUTT !
---
Sorry, the "usual" is not 15-30% - lol
---
NVIDIA's last usual was !!!!!!!!!!!! 60.96% INCREASE AT HIGHEST SETTINGS !
-
Now, once again, please, no lying.
Reply

Re: Shader Aliasing nowhere to be found in DX9 games--
Shader aliasing is present all over the Unreal3 engine games (UT3, Bioshock, Batman, R6:Vegas, Mass Effect, etc..). I can imagine where SSAA would be extremely useful in those games.

Also, I cannot help but wonder whether SSAA would work in games that use deferred shading, which prevents MSAA from working (examples: Dead Space, STALKER, Wanted, Bionic Commando, etc.), if ATI were to implement brute-force SSAA support in the drivers for those games in particular.

I am amazed at the perfectly circular AF method, but would have liked to see 32x AF in addition. With 32x AF, we'd probably be seeing more of a difference. If we're awed by seeing 16x AA or 24x CFAA, then why not 32x AF also (given that the increase from 8 to 16x AF only costs like 1% performance hit)?

Why did ATI make the card so long? It's even longer than a GTX 295 or a 4870X2. I am completely baffled at this. It only has 8 memory chips, uses a 256-bit bus, unlike a more complex 512-bit bus and 16 chips found on a much, much shorter HD2900XT. There seems to be so much space wasted on the end of the PCB. Perhaps some of the vendors will develop non-reference PCB's that are a couple inches shorter real soon. It could be that ATI rushed out the design (hence the extremely long PCB draft design), or that ATI deliberately did this to allow 3rd-party vendors to make far more attractive designs that will keep us interested in the 5870 right around the time of GT300 release.

Regarding the memory bandwidth bottleneck, I completely agree with you that it certainly seems to be a severe one, severe enough that the card only performs about 33% better than an HD4890 in places. A 5870 has exactly 2x the specifications of a 4890, yet it generally performs slower than a 4870X2, let alone dual 4890s in Crossfire. And a 4870 is slower than a 4890 to begin with, on top of being dependent on Crossfire scaling.

Overall, ATI is correct in saying that a 5870 is generally 60% faster than a 4870 in current games, but theoretically, a 5870 should be exactly 100% faster than a 4890. Only if ATI could have used 512-bit memory bandwidth with GDDR5 chips (even if it requires the use of a 1024-bit ringbus) would the total memory bandwidth be doubled. The performance would have been at least as good as two 4890's in crossfire, and also at least as good as a GTX295.

I am guessing that ATI wants to roll out the 5870X2 as soon as possible and realized that doing it with a 512-bit bus would take up too much time/resources/cost, etc.. and that it's better to just beat NV to the punch a few months in advance. Perhaps ATI will do a 5970 card with 512-bit memory a few months after a 5870X2 is released, to give GT300 cards a run for its money? Perhaps it is to "pacify" Nvidia's strategy with its upcoming next-gen that carry great promises with a completely revamped architecture and 512 shaders, so that NV does not see the need to make its GT300 exceed the 5870 by far too much? Then ATI would be able to counter right afterwards without having to resort to making a much bigger chip?

Read some of the other 5870 articles that cover SSAA image quality. It actually makes most modern games look worse, but that is through no fault of ATi; it's just the nature of the SS method, which literally AA's everything and, in the process, can blur textures. Reply

I don't know much about video games, but in photography it is known that reducing the size of an image reduces the appearance of sharpness as well, so final sharpening should be done at the output size. Reply

Agreed, and to add to that, the fact that the third channel means very little when it comes to actual gaming performance makes it even less significant, as compared to Lynnfield clock for clock, which is only dual channel.

Could someone enlighten me as to why the 4870 X2 could be faster than the 5870 in some situations? It was noted in the article but never really explained. They have the same number of SPs, and one would expect Crossfire scaling to be detrimental to the 4870 X2's performance. Would this be indicative of the 5870 being starved for memory bandwidth in these situations, or something else? Reply

Doesn't using dual GPUs effectively halve the onboard memory, as significant portions of the textures, etc. need to be duplicated? So the 4870X2 has a memory disadvantage, requiring 2x the memory to accomplish the same thing. Reply

Right, with an X2 each GPU has a copy of the same frame buffer, so the total memory onboard is effectively halved. A 2GB board with 2 GPUs is really the same 1GB frame buffer mirrored on each.

With the 5870 essentially being 2x RV790 on one chip, in order to accomplish the same frame rates with the same-sized 1GB frame buffer, you would expect to need additional bandwidth to facilitate the transfers to and from the frame buffer and GPU. Reply
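The mirroring arithmetic these two posts describe can be sketched in a couple of lines (purely illustrative; under alternate-frame rendering each GPU duplicates the working set):

```python
def effective_vram_gb(total_onboard_gb, num_gpus):
    """Under alternate-frame rendering, each GPU keeps a full copy of the
    textures and frame data, so usable capacity is the per-GPU share."""
    return total_onboard_gb / num_gpus

# A "2GB" 4870X2 behaves like a 1GB card; a single-GPU 1GB 5870 keeps its full 1GB.
print(effective_vram_gb(2.0, 2))  # 1.0
print(effective_vram_gb(1.0, 1))  # 1.0
```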

Ya, he mentions bandwidth being a potential issue preventing the 5870 from mirroring the 4870X2's results.

It could also be that the 5870's scheduler/dispatch processors aren't as efficient at extracting performance as driver-forced AFR. That would seem pretty incredible, seeing as physically doubling GPU transistors on a single die has traditionally been better than multi-GPU scaling.

Similarly, it could be a CPU limitation where CF/SLI benefit more from multi-threaded driver performance, whereas a single GPU would be limited to a single fast thread or core's performance. We saw this a bit last year as well with the GT200s compared to G92s in SLI. Reply

Informative and well-written. My main question was "how future-proof is it?" I got the Radeon 9700 for DirectX9, the 8800GTS for DirectX10, and it looks like I may very well be picking this up for DirectX11. It's nice there's usually one card you can pick up early that'll run games for years to come at acceptable levels. Reply

One wonders how the 8800GT ended up on the Temp/Heat comparison, until you READ the text, and it claims heat is "all over the place", then the very next line is "ALL the ATI's are up @ ~around 90C".

Yes, so temp is NOT all over the place, it's only VERY HIGH for ALL the ATI cards... and NVIDIA cards are not all very high...

-so it becomes CLEAR the 8800GT was included ONLY so the article could whine it was at 92C, since the 275 is @ 75C and the 260 is low, the 285 is low, etc. NVidia WINS HANDS DOWN the temperature game...... but the article just couldn't bring itself to be HONEST about that.
---
What a shame. Deception, the name of the game.
Reply

The 8800GT, as was the 3870, was included to offer a snapshot of an older value product in our comparisons. The 8800GT in particular was a very popular card, and there are still a lot of people out there using them. Including such cards provides a frame of reference for performance for people using such cards. Reply

Turning green was something the 40nm 5870 was supposed to do, wasn't it?
Instead it turned into another 3D HEAT MONSTER, like all the ATI cards.
Take a look at the power charts, then look at that "wonderful tiny ATI die size that makes em so much money!" (as they lose a billion plus a year), and then calculate that power into that tiny core, NOT discounting for lower framerates (hence "less data"), since of course ATI cards are "faster", right?
So you've got more power in a smaller-footprint core...
HENCE THE 90 DEGREE CELSIUS RUNNING RATES, AND BEYOND.
---
Yeah, so sorry that it's easier for you to call names than think. Reply

SiliconDoc, you should try thinking instead of trolling. Why would the maximum be around 90C? Because that's what the cards are designed to target under load. If they get hotter, the fan speeds would ramp up a bit more. There's no need to run fans at high rates to cool down hardware if the hardware functions properly.

Reviewing based on max temperatures is a stupid idea when other factors come into play, which is why one page has power draws, temperatures, and noise levels. The GTX 295 has the same temperature not because it's "as hot" but because the fan kicked up to a faster speed to keep that level of heat.

The only thing you can really conclude is that slower GPUs generate less heat and thus don't need to increase fan speeds. The 275 gets hotter than the 285 as well by 10C, but since the 285 is 11.3 dB louder I wouldn't call it better by any stretch. It's just "different". Reply

Are you seriously going to claim that ATI cards are not generally hotter than the nvidia cards? I don't think you really want to do that, no matter how much you wail about fan speeds.
The numbers have been here for a long time, and they are all over the net.
When you have a smaller die cranking out the same framerate/video, there is simply no getting around it.
You talked about the 295, as it really is the only nvidia card that compares to the ATI card in this review in terms of load temp, PERIOD.
In any other sense, the 8800GT would be laughed off the pages if compared to the 5870.
Furthermore, one merely needs to look at the WATTAGE of the cards, divided by the surface area of the core, and that is a more than plenty accurate measuring stick for heat at load.
No, I'm not the one not thinking, I'm not the one TROLLING; the TROLLING is in the ARTICLE, and in the YEAR-plus of covering-up LIES we've had concerning this very issue.
Nvidia cards run cooler, ATI cards run hotter, PERIOD.
You people want it in every direction, with every lying whine for your red god, so pick one or the other:
1.The core sizes are equivalent, or 2. the giant expensive dies of nvidia run cooler compared to the "efficient" "new technology" "packing the data in" smaller, tiny, cheap, profit margin producing ATI cores.
------
NOW, it doesn't matter what lies or spin you place upon the facts, the truth is absolutely apparent, and you WON'T be changing the physical laws of the universe with your whining spin for ati, and neither will the trolling in the article. I'm going to stick my head in the sand and SCREAM LOUDLY because I CAN'T HANDLE anyone with a lick of intelligence NOT AGREEING WITH ME! I LOVE TO LIE AND TYPE IN CAPS BECAUSE THAT'S HOW WE ROLL IN ILLINOIS! Reply

Ultimately, the true measure of how much waste heat a card generates will have to look at the power draw of the card, tempered with the output work that it's doing (aka FPS in whatever benchmark you're looking at). Since I haven't seen that kind of comparison, it's impossible to say anything at all about the relative heat output of any card. So your conclusions are simply biased towards what you think is important (and that should be abundantly clear).

Given that, one must look at performance per watt. The only wattage figures we have are for OCCT or WoW playing, so those are the only conclusions one can make from this article. Since I didn't see the results from the OCCT test (in a nice, convenient FPS measure), we get the following:

That means that the 5870 wins by at least 36% over the other 3 cards. That means that for this observation, the 5870 is, in fact, the most efficient of these cards. It therefore generates less heat than the other 3 cards. Looking at the temperatures of the cards, that strictly measures the efficiency of the cooler, not the efficiency of the actual card itself.

You can say that you think that I'm biased, but ultimately, that's the data I have to go on, and therefore that's the conclusions that can be made. Unfortunately, there's nothing in your post (or more or less all of your posts) that can be verified by any of the information gleaned from the article, and therefore, your conclusions are simply biased speculation. Reply

That's all the further I should have to go.
3870 has THE LOWEST LOAD POWER USAGE ON THE CHARTS
- but it is still 90C, at the very peak of heat,
because it has THE TINIEST CORE !
THE SMALLEST CORE IN THE WHOLE DANG BEJEEBER ARTICLE !
It also has the lowest framerate - so there goes that erple theory.
---
The anomalies you will notice if you look are due to nm size, memory amount on board (less electricity used by the memory means the core used more), and one-slot vs two-slot coolers, as examples, but the basic laws of physics cannot be thrown out the window because you feel like doing it, nor can idiotic ideas like framerate come close to predicting core temp and its heat density at load.
Older CPUs may have horrible framerates and horribly high temps, for instance. The 4850's frames do not equal the 4870's, but their core temp/heat density envelope is very close to identical (SAME CORE SIZE > the 4850 having some die shaders disabled and DDR3, the 4870 with DDR5, full core active, more watts for mem and shaders, but the same PHYSICAL ISSUES - small core, high wattage for area, high heat)
Reply

I didn't say that the 3870 was the most efficient card. I was talking about the 5870. If you actually read what I had typed, I did mention that you have to look at how much work the card is doing while consuming that amount of power, not just temperatures and wattage.

You sir, are a Nazi.

Actually, once you start talking about heat density at load, you MUST look at the efficiency of the card at converting electricity into whatever it's supposed to be doing (other than heating your office). Sadly, the only real way that we have to abstractly measure the work the card is doing is "FPS". I'm not saying that FPS predict core temperature.
Reply

No, the efficiency of conversion you talk about has NOTHING to do with core temp AT ALL. The card could be massively efficient or inefficient at producing framerate, or just ERROR OUT with a sick loop in the core, and THAT HAS ABSOLUTELY NOTHING TO DO WITH THE CORE TEMP. IT RESTS ON WATTS CONSUMED, EVEN IF FRAMERATE OUTPUT IS ZERO OR 300 A SECOND.
(your mind seems to have imagined that if the red god is slinging massive frames "out the dvi port" a giant surge of electricity flows through it to the monitor, and therefore "does not heat the card")

I suggest you examine that lunatic red notion.

What YOU must look at is a red rooster rooter rimshot, in order that your self deception and massive mistake and face saving is in place, for you. At least JaredWalton had the sense to quietly skitter away.
Well, being wrong forever and never realizing a thing is perhaps the worst road to take.

PS - Being correct and making sure the truth is defended has nothing to do with some REDEYE cliche, and I certainly doubt the Gregalouge would embrace red rooster canada card bottom line crumbled for years ever more in a row, and diss big green corporate profits, as we both obviously know.

" at converting electricity into whatever it's supposed to be doing (other than heating your office). "
ONCE IT CONVERTS ELECTRICITY, AS IN "SHOWS IT USED MORE WATTS", it doesn't matter one ding dang smidgen what the framerate is,

it could loop sand in the core and give you NO screeen output,

and it would still heat up while it "sat on it's lazy", tarding upon itself.

The card does not POWER the monitor and have the monitor carry more and more of the heat burden if the GPU sends out some sizzly framerates; the "non-used-up watts" don't go sailing out the card's connector to the monitor so that "heat generation winds up somewhere else".

When the programmers optimize a DRIVER, and the same GPU core suddenly sends out 5 more fps, everything else being the same, it may or may not increase or decrease POWER USAGE. It can go ANY WAY. Up, down, or stay the same.
If they code in more proper "buffer fills" so the core is hammered solid, instead of flaky filling, the framerate goes up - and so does the temp!
If they optimize, for instance, an algorithm that better predicts what does not need to be drawn because it rests behind another image on top of it, framerate goes up, while temp and wattage used GO DOWN.
---
Even with all of that, THERE IS ONLY ONE PLACE FOR THE HEAT TO ARISE... AND IT AIN'T OUT THE DANG CABLE TO THE MONITOR! Reply

You can modify that, or be more accurate, by using core mass (including the thickness of the competing dies), since the core mass is what consumes the electricity and generates heat. A smaller mass (or die size, almost exclusively referred to in terms of surface area, with the assumption that thickness is identical or nearly so) winds up getting hotter, in degrees Celsius, when consuming a similar amount of electricity.
Doesn't matter if one frame, none, or a thousand reach your eyes on the monitor.
That's reality, not hokum. That's why ATI cores run hotter: they are smaller and consume a similar amount of electricity, which winds up as heat in a smaller mass, and that means hotter.
Also, in actuality, the ATI heatsinks generally have to dissipate more heat with less surface area as a transfer medium to maintain the same core temps as the larger nvidia cores and HS areas, so they indeed should actually be "better" stock fans and HS.
I suspect they are slightly better as a general rule, but fail to excel enough to bring core load temps down to nvidia's general levels. Reply
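For what it's worth, the watts-per-area argument can at least be put in rough numbers. The die areas and board TDPs below are commonly cited approximate figures (board power, not measured core-only power), so treat this strictly as an illustration of power density, not a measurement:

```python
# Approximate public figures (assumptions, not measurements):
# name -> (board TDP in watts, die area in mm^2)
cards = {
    "HD 5870 (Cypress, ~334 mm^2)": (188, 334),
    "GTX 285 (GT200b, ~470 mm^2)": (204, 470),
}

for name, (tdp_w, die_mm2) in cards.items():
    # Power density: watts per square millimeter of die area.
    print(f"{name}: {tdp_w / die_mm2:.2f} W/mm^2")
```

Even if both boards dissipated identical wattage, the smaller die concentrates it into less area, which is the "heat density" point; what it does not tell you is the resulting core temperature, which also depends on the cooler.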

You understand that if there were no heatsink/cooling device on a GPU, it would heat up to crazy levels, far more than would be "healthy" for any silicon part, right? And you understand that measuring the efficiency of a part involves a pretty strong correlation between the input power draw of the card vs. the work that the card produces (which we can really only measure based on the output of the card, namely FPS), right?

So I'm not sure that your argument means anything at all?

Curiously, the wattage listed is for the entire system, not just for the card. Which means that the actual differences between the ATI cards and the nvidia cards are even larger (as a percentage, at least). I don't know what the "baseline" power consumption of the system (sans video card) is for the test bed.

Ultimately, the amount of electricity running through the GPU doesn't necessarily tell you how much heat the processors generate. It's dependent on how much of that power is "wasted" as heat energy (that's Thermodynamics for you). The only way to really measure the heat production of the GPU is to determine how much power is "wasted" as heat. Curiously, you can't measure that by measuring the temperature of the GPU. Well, you CAN, but you'd have to remove the Heatsink (and Fan). Which, for ANY GPU made in the last 15 years, would cook it. Since that's not a viable alternative, you simply can't make broad conclusions about which chip is "hotter" than another. And that is why your conclusions are inconclusive.

BTW, the 5870 consumes "less" power than the 275, 285 and 295 GPUs (at least, when playing WoW).

I understand that there may be higher wattage per square millimeter flowing through the 5870 than the GTX cards, but I don't see how that measurement alone is enough to state whether the 5870 actually gets hotter. Reply

NO, WRONG.
" Ultimately, the true measure of how much waste heat a card generates will have to look at the power draw of the card, tempered with the output work that it's doing (aka FPS in whatever benchmark you're looking at)."
NO, WRONG.
---
Look at any of the cards power draw in idle or load. They heat up no matter how much "work" you claim they do, by looking at any framerate, because they don't draw the power unless they USE THE POWER. That's the law that includes what useage of electricity MEANS for the law of thermodynamics, or for E=MC2.
DUHHHHH.
---
If you're so bent on making idiotic calculations and applying them to the wrong ideas and conclusions, why don't you take the watts (the watts the companies issue, or take them from the load charts) and divide by core die size, like you should?
I know why. We all know why.
---
The same thing is beyond absolutely apparent in CPU's, their TDP, their die size, and their heat envelope, including their nm design size.
DUHHH. It's like talking to a red fanboy who cannot face reality, once again.
Reply

What the heck are you talking about? Are you saying that electricity consumed by a device divided by the "volume" of the device is the only way to measure the heat output of the device? Every single Engineering class I took tells me that's wrong, and I'm right. I think you need to take some basic courses in Electrical Engineering and/or Thermodynamics.

(simplified)
power consumed = work + waste

You're looking for the waste heat generated by the device. If something could completely convert every watt of electricity that passes through it into some type of work (light a light bulb, turn a motor, make some calculation on a GPU, etc.), then it's not going to heat up. As a result, you HAVE to take into consideration how inefficient the particular device is before you can make any claim about how much the device heats up.

I'll bet that if you put a Liquid Nitrogen cooler on every ATI card, and used the standard air coolers on every NVidia card, that the ATI cards are going to run crazy cooler than the NVidia cards.

Ultimately the temperature of the GPU's depends a significant amount on the efficiency of the cooler, and how much heat the GPU is generating as waste. My point is that we don't have enough data to determine whether the ATI die runs hot because the coolers are less than ideal, Nvidia ones are closer to ideal, the die is smaller, or whatever you have. You have to look at a combination of the efficiency of the die (how well it converts input power to "work done"), the efficiency of the cooler (how well it removes heat from it's heat source), and the combination of the two.

I'd posit that the ATI card is more efficient than the NVidia card (at least in WoW, the only thing we have actual numbers of the "work done" and "input power consumed").

Now, if you look at the measured temperature of the core as a means of comparing the worthiness of one GPU over another, I think you're making just as meaningful a comparison as comparing the worthiness of the GPU based on the size of the retail box that it comes in. Reply
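Both sides of this argument can actually be reconciled with a lumped steady-state thermal model: reported die temperature is ambient plus dissipated power times the cooler's effective thermal resistance. All numbers below are hypothetical, chosen only to show that identical power draw can report very different temperatures under different coolers or fan speeds:

```python
def die_temp_c(ambient_c, power_w, r_theta_c_per_w):
    """Steady state: T_die = T_ambient + P * R_theta, where R_theta lumps
    heatsink, fan speed, and airflow into one degrees-per-watt figure."""
    return ambient_c + power_w * r_theta_c_per_w

# Same 180 W of waste heat through two hypothetical coolers:
print(die_temp_c(25.0, 180.0, 0.36))  # weaker cooler or slower fan
print(die_temp_c(25.0, 180.0, 0.28))  # stronger cooler or faster fan
```

This is why a chart of core temperatures alone cannot separate "the chip wastes more power" from "the cooler simply targets a higher temperature before ramping the fan."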

You simply repeated my claim about watts and replaced core size with fps, creating a framerate-per-watt chart that has next to nothing to do with actual heat inside the die, since the SIZE of the die vs. the power traversing through it is the determining factor, affected by fan quality (RAM size as well).
Your argument is "framerate power efficiency", as in watts per frame, and has nothing to do with core temperature (modified by fan cooling, of course, to some degree), which the article indeed posts, except for the two failed ATI cards.
The problem with your flawed "science" that turns it into hokum is that no matter what outputs on the screen, the HEAT generated by the power consumption of the card itself remains in the card, and is not "pumped through the video port to the screen".
If you'd like to claim "wattage vs framerate" efficiency for the 5870, fine, I've got no problem, but claiming that proves core temps are not dependent on power consumption vs die size (modified by the rest of the card *mem size / power usage / and the fan heatsink*) is RIDICULOUS.
---
The cards are generally equivalent in manufacturing and component additions, so you take the wattage consumed (by the core) and divide by core size, for heat density.
Hence, ATI cards, smaller cores and similar power consumption, wind up hotter.
That's what the charts show, that's what should be stated, that is the rule, and that's the way it plays in the real world, too.
---
The only modification to that is heatsink/fan efficiency, and I don't find you fellas claiming stock NVIDIA fans and heatsinks are way better than the ATI versions; hence 66C, 75C, 85C, etc. for NVIDIA, and only higher for ATI, in all their cards listed.
Would you like to try that one on for size? Should I just make it known that NVIDIA fans and heatsinks are superior to ATI's?
What is true is that a larger surface area (die side squared) dissipates the same amount of heat more easily, and that of course is what is going on.
ATI dies are smaller (by a marked surface area, as has so often been pointed out), and have similar power consumption, and a higher DENSITY of heat generation, and therefore run hotter.
Reply

Let's take that LOAD TEMP chart and the article's comments. Right above it, it is stated that a good cooler includes the 4850, which IDLE TEMPs in at around 40C (it's actually 42C, the highest of those mentioned).
"The floor for a good cooler looks to be about 40C, with the GTS 250 (39C), 3870 (41C), and 4850 all turning in temperatures around here"
OK, so the 4850 has a good cooler, as well as the 3870... then right below is the LOAD TEMP... and the 4850 is @ 90C - OBVIOUSLY that good cooler isn't up to keeping that tiny hammered core cool...

3870 is at 89C, 4870 is at 88C, 5870 is at 89C ALL ati....
but then, nvidia...
250, 216, 285, 275 all come in much lower at 66C to 85C.... but "temps are all over the place".
NOT only that crap, BUT the 4890 and 4870x2 are LISTED but with no temps - and take the "coolest position" on the chart!
Well we KNOW they are in the 90C range or higher...
So, you NEVER MENTION why 4870x2 and 4980 are "no load temp shown in the chart" - you give them the WINNING SPOTS anyway, you fail to mention the 260's 65C lowest LOAD WIN and instead mention GTX275 at 75C...LOL

The bias is SO THICK it's difficult to imagine how anyone came up with that CRAP, frankly.
So the superhot 4890 and 4870x2 are given the #1 and #2 spots respectively, a free ride; the other Nvidia cards KICK BUTT with lower load temps EXCEPT the 295; but the article makes sure to mention the 8800GT while leaving the 4890 and 4870x2 LOAD TEMP spots blank ?
roflmao
---
What were you saying about "why" ? If why the 8800GT was included is TRUE, then comment on the gigantic LOAD TEMP bias... tell me WHY. Reply

AND, you don't take temps from WOW to use for those two, which, even though WOW is NOT stressing the gpu much, would no doubt still yield around 90C for those two cards, the 4870x2 and 4890, anyway.
So they FAIL the OCCT, but you have NOTHING on them, which would if listed put EVERY SINGLE ATI CARD @ near 90C LOAD, PERIOD...
---
And we just CANNOT have that stark FACT revealed, can we ? I mean, I've seen this here for well over a year now.
LET's FINALLY SAY IT.
---
LOAD TEMPS ON THE ATI CARDS ARE ALL, EVERY SINGLE ONE NEAR 90c, much higher than almost ALL of the Nvidia cards. Reply

I just want to know... With this much zeal about videocards, and more specifically the bias that you see, doesn't it make you sound biased too? Can you say that you have owned the cards you are bashing and seen the differences firsthand? I can say I did. I had an 8800 GT and it was running in the upper 80s under load. I switched to my 4850, with the worst cooler I think I've ever seen mind you, and it stays in the mid to upper 60s under load. The cooler on the 8800 GT was the original dual-slot reference design. The 4850 had the most pathetic fan I've ever seen; it was similar to the fan and heatsink Intel used on the first Core2 stuff: the really cheap aluminum with a tiny copper circle that made contact with the die itself. Now, don't get me wrong, I love ATI... But I also love nVidia... Anything that keeps games getting better and prices getting better. I honestly don't think, though, that the article is too biased. Maybe a little toward ATI, but nothing to rage on and on about. Besides... Calm down. You know nVidia will have a response for this. Reply

1. Who cares what you think about how you perceive me ? Unless you have a fact to refute, who cares ? What is biased ? There has been quite a DISS on PhysX for quite some time here, but the haters have no equal alternative - NOTHING that even comes close. Just ASK THEM. DEAD SILENCE. So, strangely, the very best there is, is BAD.

Now ask yourself again who is biased, won't you? Ask yourself who is spewing out the endless strings... Do yourself a favor and figure it out. Most of them have NEVER tried PhysX ! They slip up and let it be known when they are slamming away. Then out comes their PC "hate the greedy green" rage, and more, because they have to, to fit the web PC code, instead of thinking for themselves.

2. Yes, I own more cards currently than you will in your entire life. I started in retail computing well over a decade ago.

3. And now, the standard red rooster tale. It sounds like you were running at 2d clocks 100% of the time, probably on a brand-name board like a DELL. Happens a lot with red cards. Users have no idea.
4850 with The worst fan in the World ! ( quick call Keith Olbermann) and it was ice cold, a degree colder than anything else in the review. ROFLMAO
Once again, the red shorts Pinocchio tale. Forgive me while I laugh, again !
ROFLMAO
Did you ever put your finger on the HS under load ? You should have. Did you check your 3D MHz ?
http://forums.anandtech.com/messageview.aspx?catid... Not like 90C is offbase, not like I made up that forum thread.

4. I couldn't care less if nvidia has a response or not. Point is, STOP LYING. Or don't. I certainly have noticed that many of the lies I've complained about over the past year or so have gone dead silent; they won't pull it anymore, and in at least one case it was used in reverse for more red bias, unfortunately, before it became the accepted idea.

So, I do a service, at the very least people are going to think, and be helped, even if they hate me. Reply

Well of course that's the excuse, but I'll keep my conclusion, considering how the last 15 reviews of the top videocards were done, along with the TEXT that is pathetically biased for ati, which I pointed out. (Even though Derek was often the author.)
--
You want to tell me how it is that ONLY the GTX295 is near or at 90C, but ALL the ati cards ARE, and yet we're told "temperatures are all over the place" ?
Can you really explain that, sir ?
Reply

Why does the article keep referring to Cypress as "too big"? If Cypress is too big, what the hell is GT200 at 480mm^2, or whatever it was? Are you guys serious with that crap?

I've heard that the "sweet spot" talk from AMD was a bit of a misdirection from the start anyway. IMO if AMD is going to compete for the performance crown or come reasonably close (and frankly, performance is all video card buyers really care about, as we see with all the forum posts only mentioning that GT300 will supposedly be faster than 58XX and not anything else about it) then they're going to need slightly bigger dies. So Cypress being bigger is a great thing. If anything it's too small. Imagine the performance a 480mm^2 Cypress would have! Yes, Cypress is far too small, period.

Personally, I think it's wonderful to see AMD engineer two chips this time, a bigger performance one and a smaller lower-end one. This works out far better all around.

I think that you're missing the point. AMD appeared to want the part to be small enough to maximize the number of GPUs generated per wafer. They had their own internal idea of how to get good yield from the 40nm wafers.

It turns out that performance really isn't all people care about - otherwise nobody would run anything other than dual GTX285's in SLI. People care about performance __at a particular price point__. ATI is trying to grab that particular sweet spot - to be able to take the performance crown for a particular price range. They could probably make a gargantuan low-yield, high-power monster that would decimate everything currently available (crossfire/SLI or single), but that chip would be massively expensive to produce and, unsurprisingly, a poor Return on Investment.

So the comment that Cypress is "too big" I think really is apropos. I think that AMD would have been able to launch the 5870 at the $299 price point of the 4870 only if the die had been significantly smaller (around the same size as the 4870). THAT would have been an amazing bang-for-buck card, I believe. Reply

That "price game" is because the 5870 is rather DISAPPOINTING when compared to the GTX295.
I guess that means ATI "blew the competition" this time, huh, and NVidia is going to get more money for their better GTX295.
LOL
That's a *scowl* "new egg price game" for red fans.
Thanks ATI for making NVidia more money ! Reply

It truly amazes me that AnandTech allows a Troll like you to keep posting...but there is always one moron that comes to a forum like this and shows his a** to the world...So we all know it to be you...nice work not bringing anything resembling an intelligent discussion to the table..Oh and please don't tell me what it is that I bring to the conversation...my thoughts about this topic have nothing to do with my reply to you about your vulgar manner and lack of respect for anyone that has a difference of opinion.
Oh, and as for your paper launch... well, sites like Newegg sold out immediately because of the overwhelming demand for this card, and I'll bet you anything there are cards available and in good supply at this very moment... Why don't you take a look and give us all another update..... I guess having that big "L" stamped on your forehead sums it up..... Reply

No, they didn't, because the 5870's just showed up last night, 4 of them, and just a bit ago ONE of them actually became "available", the Powercolor brand.
The other three 5870's are NOT AVAILABLE but are listed....
So "ATI paper launch" is the key idea here (for non red roosters).
1:43 PM CST, Wed. Sept. 23rd, 2009.
---
Yes, I watched them appear on the egg last night(I'm such a red fanboy I even love paper launches)... LOL Reply

We used the same save game, but not the same computer. These were separate issues we were chasing down at the same time, so they're not meant to be comparable. In this case I believe some of the shots were at 1600x1200, and others were at 1680x1050. The result of which is that the widescreen shots are effectively back a bit farther due to the use of the same FOV at all times in HL2.

As you'll see in our Crysis shots, there's no difference. I can look into this issue later, however, if you'd like. Reply

Really enjoyed the discussion of the architecture, new features, DX11, Compute Shaders, the new AF algorithm, and the reintroduction of SSAA on ATI parts.

As for the card itself, it's definitely impressive for a single GPU, but the muted enthusiasm in your conclusion seems justified. It's not the definitive leader for single-card performance, as the 295 is still consistently faster, and the 5870 even fails to consistently outperform its own predecessor, the 4870X2.

Its scaling problems are really odd, given that its internals and overall specs seem to indicate it's just RV790 CF on a single die, yet it scales worse than the previous generation in CF. I'd say you're probably onto something thinking AMD underestimated the 5870's bandwidth requirements.

Anyway, nice card and nice effort from AMD, even if its stay at the top is short-lived. AMD did a better job pricing this time around and will undoubtedly enjoy high sales volume with little competition in the coming months around Win 7's launch, up until Nvidia is able to counter with GT300. Reply

I didn't even realize until I read another comment that Ryan Smith wrote this and not an Anand/Derek collaboration. That's a compliment btw, it read very Anand-esque the entire time! ;-) Really enjoyed it, similar to some of your earlier efforts like the 3-part Vista memory investigation. Reply

I wouldn't be surprised if most of us already knew what was going to take place with performance and what-not. But it's still a nice card, whether I knew the specs before its official release or not. (And I viewed many purposely leaked benches.) :)

Another fine review and nice to see it hit today. Your reviews are one reason I keep coming back to AT!

Unfortunately, at MSRP the 5870 doesn't offer enough for me to move past the 4890 I am currently using, and bought for $130 during one of the sales streaks a month or so ago. Will re-evaluate when we actually start seeing price drops and/or DX11 games hit the shelves.

But piggybacking on what wicko said, I'm surprised you didn't include two 4890s in CF. Seeing as how you can get two 4890s for less than a 5870 (whenever they are actually available): $180 x 2 = $360 < $379. And that's from Newegg, not some super special sale price.

Not only all that, but when there were 13 big titles for PhysX and a hundred smaller ones, we were told here, "Meh", who needs it.
Now, we have a papery and unavailable (egg), except by pre-order (tiger), 5870 launch, a non-existent 5850, and guess what ? NO DX11 games!
Oh wait, there is actually just ONE - see page 7 of review. LOL
---
Conclusion ?: " It looks like NO(err.. just one) DX11 games ready, so... it also looks like NVidia is launching at the right time, and ATI blew their dry unimpressive wad on a piece of paper porn. "
---
Gee no one crowing about the first DX11 card... imagine that...

Good thing, too, considering how 13 big PhysX-enhanced titles or a hundred small ones were "nothing to change one's purchase decision over". At least Anand got addicted to Mirror's Edge with PhysX enabled before concluding in the article, "meh", this PhysX thing is ok if you like this game, but who cares...
---
Now we have the DX11 pre DX11 games launch with a paper product, so crowing about it wouldn't be too fitting, huh.

Why would a developer release a DX11 card before DX11 is even available ?
(I suppose you'll have to unscrew your hate-nvidia foil cap, and grind in the red spikes in its place, to answer that one.)
However, allow me, instead.
1. I have been running windows 7 32&64 for quite some time now; not sure why you haven't been.
2. Battleforge, an ATI promo game, as noted in the review, has released their DX11 patch, hence, with W7 from MSFT (the beta+ free trial good till March 1st 2010 or something like that) I believe any gamer has had a reasonable chance to preview DX11.
---
So anyway... Reply

When the GTX295 still beats the latest ati card, your wish probably won't come true. Not only that, ati's own 4870x2, promoted here just recently as the best value, is a slap in its face.
It's rather difficult to believe all those crossfire promoting red ravers suddenly getting a different religion...
Then we have the no DX11 released yet, and the big, big problem...
NO 5870'S IN THE CHANNELS, reports are it runs hot, and the drivers are beta-problematic.
---
So, celebrating a red revolution of market share - is only your smart aleck fantasy for now.
LOL - Awwww...

BUT - the G80 move to G92 added 73M core transistors, and you couldn't stop shrieking "rebrand".
---
nearly as fast= second best
DX11 in a month = not now and too early
hot card = IT'S OK, JUST CLAIM ATI PLANNED ON IT BEING HOT ! ROFL, IT'S OK TO LIE ABOUT IT IN REVIEWS, TOO ! COCKA DOODLE DOOO!
beta drivers = ALL ATI LAUNCHES GO THAT WAY, NOT "MOST"
----
Now, you can tell smart aleck this is a paper launch like the 4870, the 4770, and now this 5870 and no 5850, because....
"YOU'LL PUT YOUR HEAD IN THE SAND AND SCREAM IN CAPS BECAUSE THAT'S HOW YOU ROLL IN RED ROOSTERVILLE ! "
(thanks for the composition Jared, it looks just as good here as when you add it to my posts, for "convenience" of course)
Reply

But the 5870 *is* up to 65% faster than the 4890 in the tested games. If you were to compare the GTX 280 to the 9800 GX2, it also wasn't 60% faster. In fact, 9800 GX2 beat the GTX 280 in four out of seven tested games, tied it in one, and only trailed in two games: Enemy Territory (by 13%) and Oblivion (by 3%), making ETQW the only substantial win for the GT200.

So we're biased while you're the beacon of impartiality, I suppose, as if you didn't intentionally make a comparison akin to comparing apples with cantaloupes. Comparing ATI's new card to their last dual-GPU solution is the way to go, but NVIDIA gets special treatment and we only compare it with their single-GPU solution.

If you want the full numbers:

1) On average, the 5870 is 30% faster than the 4890 at 1680x1050, 35% faster at 1920x1200, and 45% faster at 2560x1600.

2) Note that the margin goes *up* as resolution increases, indicating the major bottleneck is not memory bandwidth at anything but 2560x1600 on the 5870.

3) Based on the old article you linked, the GTX 280 was on average 5% slower than the 9800 GX2 and 59% faster than the 9800 GTX - the 9800 GX2 was 6.4% faster than the GTX 280 in the tested titles.

4) Making the same comparisons, the 5870 is only 3.4% faster than the 4870X2 in the tested games and 45% faster than the HD 4890.
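The percentages in the list above are just ratios of average frame rates. A minimal sketch of that arithmetic (the frame-rate numbers below are invented placeholders, not the review's data):

```python
def percent_faster(new_fps, old_fps):
    """Relative speedup of new_fps over old_fps, in percent."""
    return (new_fps / old_fps - 1) * 100

# Hypothetical per-game average frame rates (placeholders only)
hd5870 = [60.0, 45.0, 80.0]
hd4890 = [42.0, 33.0, 52.0]

# Average the per-game gains across the test suite
avg_gain = sum(percent_faster(n, o) for n, o in zip(hd5870, hd4890)) / len(hd5870)
print(f"average gain: {avg_gain:.1f}%")
```

Note that averaging per-game percentages like this weights every title equally, which is one reason a single outlier (DoW2 in the comparison above) can move the overall figure noticeably.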

Now, the games used for testing are completely different, so we have to throw that out. DoW2 is a huge bonus in favor of the 5870 and it scales relatively poorly with CF, hurting the X2. But you're still trying to paint a picture of the 5870 as a terrible disappointment when in fact we could say it essentially equals what NVIDIA did with the GTX 280.

On average, at 2560x1600, if NVIDIA's GT300 were to come out and be 60% faster than the GTX 285, it will beat ATI's 5870 by about 15%. If it's the same price, it's the clear choice... if you're willing to wait a month or two. That's several "ifs" for what amounts to splitting hairs. There is no current game that won't run well on the HD 5870 at 2560x1600, and I suspect that will hold true of the GT300 as well.

(FWIW, Crysis: Warhead is as bad as it gets, and dropping 4xAA will boost performance by at least 25%. It's an outlier, just like Crysis, since the higher settings are too much for anything but the fastest hardware. "High" settings are more than sufficient.) Reply

In other words, even with your best fudging and whining about the games and all the rest, you can't bring it from the 15-30 percent people are claiming up to 60.96%.
--
Yes, as I thought. Reply

I knew the 5870 was gonna be great based on the design philosophy that AMD/ATi had with the 4870, but I never thought I'd see anything this impressive. LESS power, with MORE power! (pun intended), and DOUBLE the speed, at that!

Funny thing is, I was actually considering an Nvidia gpu when I saw how impressive PhysX was on Batman AA. But I think I would rather have near double the frame rates compared to seeing extra paper fluffing around here and there (though the scenes with the scarecrow are downright amazing). I'll just have to wait and see how the GT300 series does, seeing as I can't afford any of this right now (but boy, oh boy, is that upgrade bug itching like it never has before). Reply

So what you're saying then is that everyone should buy the 9500 GT and ignore everything else? If that's the most important thing to you, then clearly, that's what you mean.

I think that the performance-per-dollar metrics that are shown are misleading at best and terrible at worst. They do not take into account that any frame rates significantly above your monitor refresh are for all intents and purposes wasted, and any frame rates significantly below 30 should be heavily weighted negatively. I haven't seen how techpowerup does their "performance per dollar" or how (if at all) they weight the FPS numbers in the dollar category.
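The weighting scheme this comment argues for could look something like the sketch below. The refresh-rate cap, the 30 FPS floor, and the quadratic penalty are all arbitrary illustrative choices, not anything techpowerup (or anyone else) actually uses:

```python
def weighted_value(fps, price, refresh_hz=60, floor=30.0):
    """FPS-per-dollar with the commenter's suggested weighting:
    frames beyond the monitor refresh count for nothing, and
    frame rates below a playability floor are penalized hard."""
    effective = min(fps, refresh_hz)      # frames above refresh are wasted
    if fps < floor:
        effective *= (fps / floor) ** 2   # heavy penalty below the floor
    return effective / price

# Hypothetical cards (prices and frame rates are made up for illustration)
print(weighted_value(120, 379))  # fast card: value capped by the refresh rate
print(weighted_value(25, 60))    # cheap card: penalized for being unplayable
```

Under this kind of metric, a cheap card that posts big raw FPS-per-dollar numbers but can't sustain playable frame rates stops looking like the automatic winner.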

SLI/Crossfire has always been a lose-lose in the "performance per dollar" category. Curiously, I don't see any of the nvidia SLI cards listed (other than the 295).

LOL - you won't find me arguing, but that performance per dollar here is absolutely FAMOUS for guess whom, ati !
Congratulations, for the first time someone other than myself complained about it; of course, only once ATI came out looking like a disaster by it.
--
PS - I think what the chart is saying is: use some common sense; don't just go for the 9500, but within the range you hope to purchase, the chart can be handy for picking between several.
--
Also, the chart tells us, the 5870 is not bang for the buck.
-
So, so sorry it upset you so badly; the tips above for proper usage should be fine. Reply

Actually, far more developers care about the next version of DirectX (a pc-industry-wide, vendor-independent development platform) than about PhysX (single vendor, low uptake currently). Not that PhysX is unimportant. Reply

7:08PM CST Wed. Sept 23rd, 2009 -
Out of the 9 "reviews" by "customers" on newegg for the 5870 card, only ONE of them has the revealing:
" This user purchased this item from Newegg " in blue by their name on the customer reviews tab.
LOL
So it looks like 8 posers and you. Hope you enjoy the XFX 5870,
"WoostaR" Reply

I haven't written a review yet, as I haven't gotten the card. (Hate it when people do that.) I was a little worried up until just a few minutes ago, when my UPS tracking # actually had some data. Estimated delivery of 9/24/09. Hopefully it's waiting at the front door when I get home from work. I just spent the last 2 hours moving all the guts of my PC into a larger case to accommodate the card. Reply

I guess it is; it was even better still when I decided to come home a bit early and UPS got there 2 hours sooner than they usually do. Playing through Crysis Warhead once more, "The way it's meant to be played", on an ATI card, though.

Also included was a Dirt 2 code to be activated on steam, which I nearly threw out! Now I just have to wait for the game to be released. Reply

I'm not certain how you saw any available then, looks from 3 different states didn't show that.
Heck, they weren't even active at 8 am, nor later than that. I guess you "got the only one" that wasn't Auto-notify when it arrived. Reply

This is the last piece of hardware I've been waiting for before I build my i7 system. Now I think it's an OK time for enthusiasts to start building again, but on a cheaper budget with better performance :) no more of these 200-300 MB and stuff lol Reply