At this point, I doubt there's much difference between vendors. Since the card is still relatively new and supply still low, I would think most vendors are sticking to the reference design.

Warranties and support aside, they're all identical.
I'd investigate the warranties offered, any trade-up programs (not that there will be much to trade up from, but it's still good to know), and any info like that.

I find it rather peculiar that even though Crysis Warhead runs better on ATI single GPUs (4890 faster than GTX 275, 4870 faster than GTX 260, 4850 almost as fast as GTX 260), it scales well enough in SLI to take the performance lead from ATI, as shown by the 5850 CF losing to GTX 285 SLI (SLI scaling almost 100%!). It makes one wonder what the heck all the hype is about ATI releasing a new driver once a month if they don't seem to make much of a difference in the performance department.

Sorry, the f-bomb quote was in the previous article. But it kinda adds to the point. Please, no silly fanboy comments, as I have stock in AMD and NVIDIA :) (yeah, I know, probably silly to have stock in these two companies)

Probably get locked out of the site for saying this, but it seems like there is always an aggression toward NVIDIA. Kinda like ATI gives free samples and NVIDIA does not, or not in a timely manner. I mean, why would an f-bomb be quoted on a professional site? It's not just this one statement and not just this one article that makes me wonder.
Otherwise, as usual, great article with great content.

I wish you guys would put together Company of Heroes, World in Conflict and Supreme Commander so we could see clearly the differences between the cards. Today we see only reviews based on FPS games; I like FPS, but my main games are all RTS... I bought the HD 4890 based on reviews, but it didn't run as well as my GTX 275 in RTS titles...

From what I've heard at other sites, the Vapor-X 4890 is significantly quieter than the ATI cooler on the 5850. That's not a knock on the 5850; it's just that the Vapor-X cooler on the 4890 is dead quiet. I love mine. It even has HDMI, VGA, DisplayPort and DVI on the back of the card!

I was looking at the exact same cards. Both the Twin Frozr GTX 275 and the Radeon 4890 Vapor-X are VERY QUIET. I would pick based on which games you play. If you have a home theater and audio passthrough is a concern, I would lean towards the ATI card. Both of them come pre-overclocked and both have stellar performance. The Twin Frozr is slightly longer; however, its power connectors face up (not out the back of the card like usual).

Yes. That's the noise of our test rig that we measure at idle when using an entirely passive video card; we can get our rigs down to about 36dB, but that requires removing all the fans other than the CPU fan. Bear in mind that a dead quiet room is 30dB and that we're measuring roughly 6" away from the card, so our methodology may differ from how other people do it.
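For anyone comparing these readings to sites that measure at 1m, the distance alone explains a big chunk of the gap. A rough point-source sketch (assuming free-field inverse-square falloff, which an open test bench only approximates; the 43dB figure is just a made-up example reading):

```python
import math

def spl_at_distance(spl_ref, d_ref, d_new):
    """Estimate sound pressure level at a new distance from a point source
    using the inverse-square law: L2 = L1 - 20*log10(d2/d1)."""
    return spl_ref - 20 * math.log10(d_new / d_ref)

# A hypothetical 43 dB reading at 6 inches (~0.15 m), extrapolated to 1 m:
print(round(spl_at_distance(43.0, 0.1524, 1.0), 1))  # ~26.7 dB
```

So a card that reads in the low 40s at 6" would register well under 30dB at the 1m distance some other sites use, before even accounting for room noise floors.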

I didn't read all the comments yet so perhaps this has been mentioned, but where are all the 5850 Xfire results? Hopefully the Crossfire benches weren't done exclusively at 2560x1600? From some of Ryan's text in the article, it seems there were Crossfire results that are accidentally missing or got pulled. Stuff like this quote from the Battleforge page 6 text:

quote:The 5850 Crossfire on the other hand loses once again to the GTX 285 SLI in spite of beating the GTX 285 in a single card matchup.

Yet there are no Xfire results on any of the Battleforge graphs. Hopefully they'll be added later. In the meantime, Guru3D and HardOCP have Xfire results for those interested.

Had a chance to go through all the comments now; seems TheOne pointed out the same thing on page 6 of the comments and Ryan attempted to fix it. The fix still ain't showing up in my browser though.

The HD 5800 series is bandwidth limited, the 5850 less severely than the 5870. The 5850 has about 77% of the computing power and about 83% of the memory bandwidth of the 5870. So normally the 5850 should perform about 77% as fast as the 5870, but that wasn't the case here. If you work out all the benchmark results at every resolution, it's surprisingly consistent: the 5850 is always around 82-85% of the 5870's performance. It never dropped below the 80% level, let alone coming close to the 77% where it should be. Different games have different bandwidth requirements, and there's fluctuation in the percentage improvement from the 4800 series to the 5800 series. It's not stable, but the 5870, for example, rarely doubles the 4870, let alone the 4890. So in the end, it's not a parallelization or scaling problem, nor is it a geometry or vertex limitation (possible, but less likely); the 5800 series really is limited in performance by restricted memory bandwidth. Those of you who have 5800 cards, overclock the memory, check the scaling, and you'll see what I mean.
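The arithmetic behind those percentages is easy to check. Using the reference specs as I understand them (5870: 1600 SPs at 850 MHz, 4.8 Gbps GDDR5; 5850: 1440 SPs at 725 MHz, 4.0 Gbps GDDR5; both on a 256-bit bus), a quick sketch:

```python
# Assumed reference specs: HD 5870 = 1600 SPs @ 850 MHz, 4.8 Gbps GDDR5;
# HD 5850 = 1440 SPs @ 725 MHz, 4.0 Gbps GDDR5; both on a 256-bit bus.
compute_ratio = (1440 * 725) / (1600 * 850)  # 5850 shader throughput vs 5870
bandwidth_ratio = 4.0 / 4.8                  # 5850 memory bandwidth vs 5870

print(f"compute: {compute_ratio:.0%}")      # ~77%
print(f"bandwidth: {bandwidth_ratio:.0%}")  # ~83%
```

The observed 82-85% benchmark scaling sits right on the bandwidth ratio rather than the compute ratio, which is the whole argument.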

Overclocking the RAM is one idea and adding more RAM is another; however, it remains to be seen whether ATI will introduce a wider bus for any higher-spec models.

The situation is a little different from the 4830 vs 4850 comparison, whereby the 4830 had slightly lower clocks but only 640 SPs enabled instead of the full 800. In the end the performance difference wasn't very large, so the lack of shaders didn't cripple the 4830 too much.

If you can't see it, that's your problem; to say games aren't pushing it is noobish. 1GB is still plenty for today's games, and the memory buffer was never an issue as long as it's 1GB. In the end you will see faster RAM outperforming your 2GB version. It's amazing that many people still can't figure out the main reason for the 5870's underperformance.

It is still a decent card, offers many features, and is definitely a better performance-to-price ratio card than the GTX 285. But it is underperforming, not living up to its next-gen architecture's prowess. Unless GT300 screws up, it can easily outperform the 5870 when it's out. If AMD comes out quickly with a 5890, they would be wise to significantly bump up the GDDR5 speed, as it is unlikely they will go wider than a 256-bit design given their "sweet spot" small-die strategy.

I'm looking to pick one of these up relatively quickly. My question: is there really a difference in which vendor I purchase from (HIS, PowerColor, Diamond, XFX, etc.)? I know many offer varying warranties, but if they offer the same clock speeds, what else is there? I guess I'm looking for the most reputable brand, since I won't be waiting for too many specific reviews before purchasing one. Any help is appreciated.

Where and under what conditions/server load do you test the frame rate in WoW? I've played for years, and with my 4850 I can get 100fps in the game if I am in the right spot when no one else is in the zone. Knowing when and where you do your frame rate tests for WoW would help to put it into context.

"...our test isn't representative of worst-case performance - it uses a very, very light server load. Unfortunately, in my testing I found it nearly impossible to get a repeatable worst-case test scenario while testing multiple graphics cards.

I've also found that frame rate in WoW is actually more a function of server load than GPU load; it doesn't have to do with the number of people on the screen, but rather the number of people on the server :)

What our test does is simply measure which GPU (or CPU) is going to be best for WoW performance. The overall performance in the game is going to be determined by a number of factors, and when it comes to WoW, server load is a huge component."

A good repeatable test would be to have a raid group in an instance and have them all cast a set of spells at once. The instance server is separate from the rest of the server load, which allows for a bit better testing. It's true that the game is generally more CPU/RAM limited than GPU limited, especially if you have a lot of add-ons doing post-processing on all the information that is shooting around. However, having been in raids with and without add-ons, I can tell you that I get 45-50fps when we are just standing there waiting to attack, and then as soon as the spell effects start going off my frame rate drops like a rock. The spell effects are particle effects that overlap and mix and are all transparent to one degree or another. All those effects going off on a single target create a lot of overlap that the GPU has to sort out in order to render correctly.

What you might try is to see if you can get Blizzard to put a target dummy in an instance to isolate it from the rest of the masses and allow for sustained testing with spell effects going off in a predictable manner (not having every tester go balls to the wall, but simply repeating a set rotation in a timed manner so that you can get an accurate gauge).

I second your question.
And I also just want to say that even with a very heavily volt-modded and overclocked 8800 GTS 512MB, WoW at maximum settings with 2xAA will totally kill my card.
For example, in heavily populated areas it will use more than 512MB of video RAM (confirmed using RivaTuner).

And in heavily populated areas I get like 20FPS, for example in Dalaran at peak hours (like, when I play :P).

The numbers you provide for WoW are welcome; very few sites do these tests.
But more realistic numbers would be nice, representing what a big guild would see in a 20- or 40-man raid...

Perhaps you could set up a more realistic test with private servers, or if you are unwilling to go that route, ask Blizzard if they could set up a test server for you to use so you can get reproducible tests?

The charts are a bit confusing. My main focus is 2560x1600, and the review references 5850 CF and 285 CF, but they are not to be found in any of the charts. Same for 285 SLI.

"With and without ambient occlusion, the 5850 comes in right where we expect it. The 5850 Crossfire on the other hand loses once again to the GTX 285 SLI in spite of beating the GTX 285 in a single card matchup."

Ryan, I know it's not the focus of this article, but it would be great to get a small paragraph (or a blog post or whatever) on what ATI has said in reference to the lower-spec 40nm DX11 parts. I simply don't need 4850 power in a SOHO-box, but the low 40nm idle power consumption and DX11 future-proofing are tempting me away from a 4870/90 card. What kind of prices and performance scaling are we likely to see before the end of 2009? Thanks for any info!

I have to point this out because it's something I've now seen on two websites, and it irks me a little bit, just like 'solid state capacitors' does. In the last sentence on page one: the plural of 'die' in this case is 'dies', not 'dice'. Someone didn't edit this carefully!

BTW, this card is powerless against "The Way It's Meant to Be Played".
NVIDIA keeps bribing developers left and right while ATI does nothing
(except boring sideshow penis wars). Meanwhile, poor ATI users can't seem to play NFS: Shift even at 640x480 with everything set to low (my rebranded 9800 does it great, btw). Plus there is no in-game selective AA available to any ATI Radeon user in Batman (another TWIMTBP game), and it looks like empty crap with all the stuff NVIDIA removed (yes, really: no smoke? no papers? no flags? not even static flags? What about GRAW? It used to work fine on regular CPUs...)

I suppose we probably have to wait for the consumer driver release to know for sure, but how is the stability of these? The only two AMD cards I have direct experience with have both had driver issues, so that is the one factor that would keep me from considering one of the lower-powered versions of this architecture once they are released.

AMD's graphics forums and the number of bugfixes and known issues posted for each driver release say otherwise.

NVIDIA is not doing so well lately with their drivers either though, especially where Vista/7 is concerned.

Both companies can't seem to get proper fixed-aspect-ratio GPU scaling working in Vista/7. This has been broken since ForceWare 169.04, and my friend tells me it's broken in a recent Catalyst release as well. What the hell is going on?

Instead of just marginally reducing performance by dropping clock speeds and available bandwidth, they artificially neutered their parts by cutting out a few SIMD clusters, similar to Nvidia's MO of cutting TPC units.

Your conclusion doesn't draw this parallel: the cut SIMDs probably don't factor much into the overall performance, because the 1440 SPs that are left are enough, and the full 1600 aren't being fully utilized in most games today. So instead the 5850 scales more closely to the 15% decrease in clock speeds than to the combined 23% for clock speeds and SIMD units.
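A quick sanity check on those percentages (assuming the reference clocks of 850 MHz for the 5870 and 725 MHz for the 5850, with 1600 vs 1440 SPs):

```python
clock_only = 725 / 850                  # ~0.85: the clock deficit alone
combined = (1440 / 1600) * (725 / 850)  # ~0.77: clocks plus disabled SIMDs

print(f"clock-only scaling floor: {clock_only:.0%}")  # ~85%
print(f"combined scaling floor: {combined:.0%}")      # ~77%
# A 5850 benching at ~83-85% of a 5870 lands near the clock-only figure,
# consistent with the disabled SIMDs not being the bottleneck in today's games.
```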

The 5870 soft launch followed by today's 5850 paper launch also says quite a bit about 40nm yields, in light of their artificial die-neutering approach. Reports of AMD shipping *FOUR* 5870s for every *ONE* 5850, a 4:1 ratio, indicate 40nm yields are quite good. Given the high demand and apparently inadequate supply, it makes absolutely no sense whatsoever for AMD to ship these perfectly capable dies at a $100 discount when they can sell them for that much more as the 5870.

I think the current beta drivers are holding back the performance of these cards, and it can improve by another 10-15%. Maybe ATI is holding it back just in case nVidia brings in some surprise (I really doubt it).
Strangely, the temps mentioned for both cards are inconsistent with other reviews on the web, with Anandtech's being the lowest. Maybe it would be better to post ambient-to-idle/load temps in all your reviews.

Nvidia's tech is soon to be so outdated that they will not be a deal at any price. They cannot even do the full DX10 spec, let alone any DX11, while I do believe ATI has been able to do some DX11-like functions since the X1900. I hope Nvidia gets their act together and survives, but unlike when 3D was new and Nvidia pushed the new-tech envelope, they have been holding progress at a standstill. Nvidia should put up and play the game, or get out of the game and make PhysX cards.

I do hope they create a 5850X2. These new RV870 GPUs look like they will work well in a 2GB version. I've heard the 5870X2 will be a 4GB card; let's just hope. I know I would pay $600-$700 for that baby without a thought.

I am with you, I think NVIDIA needs to go out of business. I think they will.

They are at a huge disadvantage without a CPU. Intel is moving to combined CPU/GPU soon, and AMD has had this planned for a long time. With Intel already precluding NVIDIA from making chipsets for Nehalem-based computers, and ATI making far better GPUs, NVIDIA is running on momentum now, and that runs out over time.

NVIDIA might defy Intel and make a chipset for Nehalem anyway. While most of us wouldn't even consider a crappy NVIDIA chipset, the general market has no idea how problematic they are. They buy from HP and Dell, and those machines use NVIDIA. I am surprised at how many of these I see, so it's a good business for NVIDIA.

Right now, Lynnfield is essentially irrelevant, and Bloomfield is a niche product. Neither is a particularly important product as far as the market is concerned, so NVIDIA isn't really paying a price. Core 2, or an AMD platform, is still the most attractive option for mainstream America. Clarkdale, with all its flaws, should sell especially well, and even if NVIDIA does decide they want to make a chipset for it, it won't sell. No one who knows much about computers will buy an NVIDIA chipset, so they sell mainly through HP and Dell or similar companies. HP and Dell are not going to want to pay extra for an NVIDIA GPU, since the processor comes with one, and really it's only the southbridge that's up for grabs now. This would make a much smaller contribution to their bottom line. It's all bad for them.

Yes, they can sell into the Bloomfield space if they come up with a good discrete card. But how big is this market? Lynnfield should be even smaller, being brain-damaged and second-best but not particularly cheap like Clarkdale. Also, it's unlikely someone will want a high-priced video card, or two, and pair it with anything but the best platform.

So, where does NVIDIA sell into? Core 2 will go away, Bloomfield and Lynnfield will have relatively small market shares, and Clarkdale should sell especially well in the markets where NVIDIA chipsets sell well now.

Anand said Clarkdale was the replacement for Conroe, which caught a lot of flak because he worded it poorly. But in a way he's right, with respect to Clarkdale replacing the Core 2 as the platform for the mainstream market. In this respect, Clarkdale is better in almost all respects. It's dual core but runs four threads. Sure, they put the memory controller in the wrong place, but that's still better than having it outside the processor, and the GPU is better than the G45. On top of this, it should be cheaper. Core 2 duals, with Pentiums, etc., still sell the best. Clarkdale is better and should be cheaper, so it's going to dominate the market in the same way. Bloomfield is the king of performance and will have a place; it's not a big one though. Lynnfield is a good combination of power and decent performance. It's also not a big space, although the i5 750 might do well and shouldn't be discounted. The big space will be Clarkdale. NVIDIA is going to be hurt by it. Hopefully, fatally.

I disagree with you on ALL points. I buy two video cards per year, and I own an almost equal number of ATI/Nvidia cards. I just bought a 4890, and next will be a 5850.

Nvidia should definitely NOT go out of business. Competition drives creativity and reduces prices for consumers. I would hardly say Nvidia is doing badly at the moment. The bulk of aftermarket video cards still come from Nvidia, and they are still ahead of ATI in market share. They are also a marketing juggernaut; "The Way It's Meant to Be Played" is a very powerful marketing tool. That being said, I expect a firm advantage for ATI over the next six months.

I have owned several Nvidia chipset motherboards, and they have all been exceptionally reliable and great overclockers. I've never had driver issues with them. I find Clarkdale underwhelming. G45 didn't live up to all its promises (bitstreaming), and I seriously doubt its successor will either. G45 has the fewest features and the least customizability of any onboard solution I have used yet. Intel has a long way to go on integrated video if they ever want to capture the enthusiast market.

ATI has been doing a stellar job lately. The 5850 is every home theater guy's dream: inexpensive, bitstreams HD content, and it will fit in most HTPC cases. The latest GTX cards and the 5870 are too long. Video cards should be less than 10 inches long!

nVidia shouldn't "get out of the game" at all. True, ATI may have just 3 or 4 months of technical superiority, but nVidia's next cards may be superior as well, offering plenty of revolutionary features.

nVidia can also chop prices for their current lineup, but not by too much; otherwise they may undercut their new cards.

I'm impressed by the 5850's frugal (as compared to the 5870) power requirements. Coupled with a relatively low price, it should sell very nicely indeed (and spawn some overclocked versions very quickly).

Did you have the August DirectX Redist installed on your test system? I think I've read somewhere that this is the update that brings 'full' DirectX11 functionality to Windows 7, and perhaps this is the reason you didn't see the results you were expecting.

Well, since it's a centrifugal fan, it sucks air in through the center and exhausts it in a radial direction, the shroud's job being to redirect the air afterwards. Now, looking at how restricted the openings appear on the 5850 and at its cooling performance, I'm curious if there is a connection; say, fresh air may be escaping through those openings and back into the case instead of passing through the heatsink and out the back of the case.

With the 5870 being basically a doubled-up 4870 architecture (and still SIMD), I am interested to see how Nvidia's new MIMD architecture will compete, especially with the ridiculous memory bandwidth it will have with GDDR5 and a 512-bit bus (if the 512-bit bus isn't just a rumor, and hopefully they are not plagued by driver issues). I am glad AMD/ATI is doing better; the competition is great, but I feel the new NV cards are going to be good (at least if any of the rumors are true). I am still trying to find a reason to replace my 9800 GTX SLI setup; they burn through about any game as long as you stay away from more than 4x AA, due to the 512MB frame buffer.

BTW, not an NV fanboi here, hope I don't sound like one; it's late and I just dropped my friend off at the ER, so my brain is tired. My fiancée's PC has an ATI card (HD 4850 512MB) and it's great, no complaints other than a few driver issues, but nothing I could really complain about.

Keep up the competition; we have AMD to thank for fast cards under $400.

An excellent question! This isn't something we had a chance to put in the article, but I'm working on something else for later this week to take a look at exactly that. The 5850 gives us more of an ability to test that, since Overdrive isn't capped as low on a percentage basis.

You could make some raw shader tests that don't depend on memory bandwidth, to see whether it's the GPU's internal bandwidth that is somehow limited or the external bandwidth. And maybe try out some older games (Quake 3 or 3DMark2001).
In DX11, games will use more shader power for other things which have little impact on bandwidth. Maybe they tested those heavy DX11 scenarios and settled on the much less costly 256-bit interface as a compromise.

OK, this is an absolute killer for the lower performance market segment. It's 4870 vs 4850 all over again; only this time, they get the performance crown for single cards too.

Another thing to remember is that nvidia does not currently have a countermeasure for this card. The GT380 will be priced for the enthusiast segment, and we can only hope the architecture is flexible enough to provide a 360 for the upper performance segment without killing profits due to die-size constraints. Things will get even messier as soon as Juniper lands; the greens have to act now (that's in our interest as consumers too)! And I don't think that GT200 respins will cut it.

My guess is GT300 won't compare to the 5850 or 5870.
It will compare with the 5870X2 and be in that price bracket (too much for most of us).

When the GT300 eventually gets released, that is... Then, a few months later, nvidia will bring out the scaled-down versions in the same price brackets as the 5850/5870, and those will probably compete pretty well.

Only question is: can you wait?
You could wait for the 6870 as well :P

I really think an enthusiast who spends hundreds on the motherboard alone isn't the regular enthusiast, so price wouldn't be an issue. I love building PCs and testing them, but I'm not going to spend $200+ on a motherboard knowing that I will be building another system in a few months with better-performing parts at better prices. Unless I'm really keeping the system for a long time; then I'll pour my hard-earned money into the high-end parts. But then, if you're doing this, I don't think you're really an enthusiast, as it's really a one-shot deal?

Ya, it already sounds like the 5870X2 and 5850X2 are being positioned in the media to compete with just a single GT300, with rumors of $500 price points. I think the combination of poorer scaling compared to RV770/RV790, in addition to some of the 5850/5870 CF scaling problems seen in today's review, are major contributing factors. Really makes you wonder how much of these scaling issues are driver problems, CPU/platform limitations, or RV870 design limitations.

My best guess for GT300 pricing will be:

$500-$550 for a GTX 380 (full GT300 die) including OC variants
$380-$420 for a GTX 360 (cut down GT300) including OC variants
$250 and lower: GTX 285, followed by a GT210 40nm GT200 refresh with DX10.1

So you'd have the 5870X2 competing with the GTX 380 in the $500-600 range, and maybe the 5850X2 in the $400-$500 range competing with the GTX 360. The 5870 already looks poised for a price cut given the X2 price leaks; maybe they introduce a 2GB part, keep it at the $380 range, and drop the 1GB part. Then at some point I'd expect Nvidia to roll out their GT300 GX2 part as needed, somewhere in the $650-700+ range...

How would it need to be 50% faster? It'd only need to be ~33% faster when comparing the GTX 380 to the 5870, or the GTX 360 to the 5850. That would put the 5870 and the 360 in direct competition in both price and performance, which is right in line with past market segments. The 380 would then be competing with the 5870X2 at the high end, which would be just about right if the 5870X2 scales to ~30% over the 5870, similar to 5870 CF performance in reviews.

"It's not like i NEED DX11 now, and i certainly don't need more GPU performance than I already have. "

As of today I am limping along on a GTX 275 (LOL), and I really cannot tell any difference between the cards at 1920x1080. Considering the majority of PC games coming in the next year are console ports with a few DX10/11 highlights thrown in for marketing purposes, I am really wondering what is going to happen to the high-end GPU market. That said, I bought a 5850 anyway. ;)

I'm running GTX 280 SLI right now and have found most modern games run extremely well with at least 4xTrMSAA enabled. But that's starting to change somewhat, especially once you throw in peripheral features like Ambient Occlusion, PhysX, Eyefinity, 3D Vision, 120Hz monitors or whatever else is next on the checkbox horizon.

While some people may think these features are useless, it really only takes one killer app to make what you thought was plenty good enough completely insufficient. For me right now, it's Batman: Arkham Asylum with PhysX. Parts of the game still crawl with AA + PhysX enabled.

Same for anyone looking at Eyefinity as a viable gaming option. Increasing GPU load three-fold is going to quickly eat into the 5850/5870's increase over last-gen parts, to the point that a single card isn't suitable.

And with Win7's launch and the rollout of DX11 and DirectCompute, we may finally start to see developers embrace GPU accelerated physics, which will again, raise the bar in terms of performance requirements.

There's no doubt the IHVs are looking at peripheral features to justify additional hardware costs, but I think the high-end GPU market will be safe at least through this round even without them. Maybe next round, as some of these features take hold, they'll help justify the next round of high-end GPUs.

With PC gaming seemingly heading towards MMOs like WoW/Aion/Warhammer (and later on Diablo 3), far less emphasis on other genres (besides FPS, which are more or less the same every year), and, as you said, most new games being console ports, I really doubt we'll need anything more powerful than the 4890, let alone a 5850 or 5870, for the coming couple of years. Maybe we've entered the era where PC games will forever be just console ports + MMOs, or just MMOs, and there'd be little incentive to buy any card that costs $100+.

I was told by a Microcenter employee that the current pre-order retail price for the top-end GT300 card was $579, for an EVGA card, btw. And reportedly the next model down is the GT350. Dunno if this is fact or not, but he didn't have any reason to lie.

The GT300 will need 512-bit GDDR5 to make memory faster than GT200, and it will have even more massive GPGPU bloat than last gen. So in Folding it will surely be much faster, but in graphics it will cost much more for the same performance (at least for nvidia, depending how close they want to bring it to the Radeon 5K series). And of course they can sell the same GT300 in Tesla cards for several thousand (like they did with GT200).
The 5850's price with disabled units is still a win for ATI, or else they wouldn't sell the defective GPUs at all.

Well, you meant YOUR HTPC case. Not all HTPCs are limited to half-length cards. Although I think these blower cooling solutions are horrible for HTPC applications, with their horrible whine. It may be a while until aftermarket solutions are out for this new line.

I'm surprised it performs so close to the 5870 yet runs cooler and doesn't demand so much from your power supply. I think this is my next card come Christmas time, unless Nvidia releases some details about their next-generation GPUs along with expected prices before then.

I've read rumors that we will not see any new Nvidia cards for sale until next year... ouch, but I'm betting they release before Christmas. Missing the holiday buying season would be a really stupid move for Nvidia.