
173 Comments

I mean, ATI invented the unified pixel/shader architecture, or so I believe, and this generation they've got DX10.1-level hardware something like seven months and counting before Nvidia has any.

Also, I once read an article about SLI and CrossFire, about how SLI has only one rendering type, scissor (meaning the screen is divided in two), whereas ATI has scissor, tiled (so that the more demanding areas of the screen are better divided among the cards), and others.
Also, you can combine any HD-series ATI card with any other HD-series card! That's way better than having to, say, buy another 7800 for SLI because just one isn't doing it anymore, even though the 9800 series is out. With CrossFire you could add a 4870 to your old 3870!

I'm currently with Nvidia (a 9600GT; found one for £60!) but am frequently impressed by ATI.

I'm pretty sure the 4870 is low profile. I've been looking everywhere for a low-profile graphics card and I think I found a high-end one, unlike the GeForce 8400 and 8600; those aren't very good and they don't look good either. But where do I buy the 4870?

Why has there been no article comparing the 8800GT to the 9800GT? Is it just a rebrand, or are there noticeable performance differences? It's 9-series, so I assume it has Hybrid Power, but I don't know. Anandtech, PLEASE do an article on this.

I would for sure go for the 4870. (I am a Quake Wars fan boy u see :P)
But I was unsure how these would perform on Linux. Is the driver support reliable? Right now I use an NVIDIA 7950 and their support on Linux is almost shite. I was wondering whether it's better or worse...

Thank you very much for including the 8800GT SLI figures in your benchmarks. I created an account especially so I could thank Anand Lal Shimpi & Derek Wilson, as I have found no other review site including 8800GT SLI info. It is very interesting to see the much cheaper 8800GT SLI solution beating the GTX 280 on several occasions.

Nice article. Could you please add the amount of graphics memory on the cards to "The Test" page of the article? The amount of memory matters for performance and (not unimportantly) for the price of the cards...

Hello!
Long-time reader here that finally decided to make an account. First off, thanks for the great review Anand and Derek, and hats off to you guys for following up to the comments on here.
One thing that I was hoping to see mentioned in the power consumption section is whether AMD has by any chance implemented their PowerXpress feature in this generation (where the discrete card can be turned off when not needed in favor of the more efficient on-board video, i.e. the HD 3200)? I recall reading that the 780G was supposed to support this kind of functionality, but I guess it got overlooked. Have you guys heard if AMD intends to bring it back (maybe in their 780GX or other upcoming chipsets)? It'd be a shame if they didn't, seeing as how they were probably the first to bring it up and integrate it into their mobile solutions, and now even nVidia has their own version of it (Hybrid Power, as part of HybridSLI) on the desktop...

I honestly don't understand what Nvidia was thinking with the GTX 200 series, at least at their current prices. Several of Nvidia's own cards are better buys. Right now, you can find a 9800 GX2 at Pricewatch for almost $180 less than a GTX 280, and it'll perform as well as the 280 in almost all cases and occasionally beat the hell out of it. You can SLI two 8800 GTs for less than half the price and come close in performance.

There really doesn't seem to be any point in even shipping the 280 or 260 at their current prices. The only people who'll buy them are those who don't do any research before they buy a video card, and if someone's that foolish they deserve to get screwed.

Hey iamap, with the current release of HD 4870 cards, all of the manufacturers are using the reference ATI design, so they should all be pretty much identical. It boils down to the individual manufacturer's warranty and support. Sapphire, VisionTek, and PowerColor have all been great for me over the years; VisionTek is offering a lifetime warranty on these cards. I've had poor experiences with HIS and Diamond, but probably wouldn't hesitate to get one of these from either of those manufacturers on this particular card (or the HD 4850), because they are the same card, ATI reference.

Now that the large, monolithic, underperforming chip is out, leaving AMD free to grab market share, I'm excited to see what happens. As nVidia's strategy goes, they're now scaling down the chip. But pardon me, cut the GTX 280 in half and then price it at $324.99? That sounds so crazy!

Anyone remember the shock treatment from AMD codenamed "Thunder"? DAAMIT has just opened a can of whoop-ass on nVidia!

AnandTech, why didn't you use an AMD 790FX board to bench the Radeon cards instead of using an nVidia board for both the nVidia and ATI cards? It would be more accurate to bench those cards on compatible boards.
I think those cards would have worked better on an AMD board based on the Radeon Express 790FX chipset.

I was watching the 8800GT SLI and it was in the top two in most of the tests. For less than the price of two 4870s I could pick up an SLI board and another 8800GT, and perhaps also an E8400, to equal the $600 for two of these 4870s in CF. I am talking about the resolution I use, 1680x1050, where the 8800GT was looking very good in SLI.

I'd like to see the difference the GDDR5 made, and since clocking the 4850 up may not be possible right now, it would be nice to see the 4870 slowed down. If this has been asked or done already, sorry; so much info is pouring out now that it is hard to keep up. Thanks!

Good, but I just wish AMD would give it a full 512-bit memory bus. Tired of 256-bit. It's so dated, and it shows in the overall bandwidth compared to NVidia's cards with 512-bit bus widths. All that fancy GDDR4/5 and it doesn't actually shoot them way ahead of NVidia's cards in memory bandwidth, because they halve the bus width by going with 256-bit instead of 512-bit. When they offer 512-bit, the cards will REALLY shine.

R600 was a 512-bit ring bus with a 256-bit memory interface (four 64-bit interfaces). Read about it at http://www.anandtech.com/showdoc.aspx?i=2552&p... for a refresher. Besides being more costly to implement, it used a lot of power and didn't actually end up providing provably better performance. I think it was an interesting approach that turned out to be less than perfect... just like NetBurst was an interesting design that turned out to have serious power limitations.

I'd argue that the 512-bit memory interface on NVIDIA's cards is at least partly to blame for their high pricing. All things being equal, a 512-bit interface costs a lot more to implement than a 256-bit interface. GDDR5 at 900MHz is effectively the same as GDDR3 at 1800MHz... except no one is able to make 1800MHz GDDR3. Latencies might favor one or the other solution, but latencies are usually covered by caching and other design decisions in the GPU world.
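To put rough numbers on that comparison (these figures are my own back-of-the-envelope assumptions, not from the review): peak memory bandwidth is just bus width times effective transfer rate, which is why a 256-bit GDDR5 card can land in the same neighborhood as a 512-bit GDDR3 card.

```python
# Sketch: peak memory bandwidth = bus width (in bytes) * effective rate.
# GDDR5 moves 4 bits per clock per pin and GDDR3 moves 2, so 900MHz
# GDDR5 and 1800MHz GDDR3 both work out to 3600 MT/s effective.
# The card configurations below are illustrative guesses, not specs.
def peak_bandwidth_gbs(bus_bits, effective_mts):
    """Peak bandwidth in GB/s: (bits/8) bytes per transfer * MT/s / 1000."""
    return bus_bits / 8 * effective_mts / 1000

bw_256bit_gddr5 = peak_bandwidth_gbs(256, 4 * 900)   # 115.2 GB/s
bw_512bit_gddr3 = peak_bandwidth_gbs(512, 2 * 1107)  # ~141.7 GB/s

print(bw_256bit_gddr5, bw_512bit_gddr3)
```

So even doubled-up GDDR5 on a 256-bit bus only roughly closes the gap to a 512-bit GDDR3 design; the difference is that the narrow bus is much cheaper to lay out on the board and package.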

The tests showed what I feared: my 8800GT is getting too old to drive my Apple display at 2560x1600, even without AA! But the tests also showed that the 512MB of GDDR5 on the 4870 justifies the higher price tag over the 4850, something that the 3870/3850 pair failed to demonstrate. The question remains: will 1GB of GDDR5 dethrone NVIDIA and rule the 30-inch realm of single-GPU solutions?

I thought it was only Johan, and it was sort of understandable since he's not a native English speaker, but it seems most Anandtech writers don't know the difference between "its" and "it's".

"It's" means "it is" or "it has" (just as "he's" or "she's"). When you're talking about something that belongs to something else, you use "its" (or "his" / "her").

In a sentence such as "RV770 in all it's [sic] glory.", you're clearly not saying "in all it is glory" or "in all it has glory"; you are saying "in all the glory that belongs to it". So you should use "its", not "it's".

Even if you can't understand the difference (which seems pretty straightforward, but for some reason confuses some people), modern grammar checkers will pick this up 9 times out of 10.

I am not a native English speaker, but I am well aware of the difference. I am sure the reviewers are too... it's just that, with all this text, we can forgive them, can't we?

I have a bachelor's in computer science and am studying for a higher degree, but I look at the technical side of the article, so I don't even notice the errors :D (although I can tell the difference, I simply don't see it while reading)

More likely is that with a 10000 word article and four lengthy GPU reviews in two weeks, errors slipped into the text. I know at one point I noticed Derek says "their" instead of "there" as well, and I can assure you that he knows the difference. I know I use Word's grammar checker, but I'm not sure Derek even uses Word sometimes. :)

This review is flawed. It shows greater than 100% scaling for Crossfire 4870 in Call of Duty 4. Why don't they just give us the raw numbers for both single and dual cards in the same scenario? Why use a method that will artificially inflate the Crossfire results?

i don't understand what you mean by raw numbers ... these are the numbers we got in our tests ...

we can't do crossfire on the nvidia board we tested and we can't do sli on the intel board we tested ...

we do have another option (skulltrail) but people seemed not to like that we went there ... and it was a pain in the ass to test with. plus fb-dimm performance leaves something to be desired.

in any case, without testing every solution on two different platforms we did the best we could in the time we had. it might be interesting to look at testing single card performance on two different platforms for all cards, but that will have to be a separate article and would be way too tough to do for a launch.
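For what it's worth, the greater-than-100% scaling complaint above can come straight from mismatched baselines. Here's a toy sketch (all numbers invented for illustration, not benchmark data from this review) of how a single-card result measured on a slower platform inflates the apparent CrossFire scaling:

```python
# Toy numbers, invented for illustration: apparent scaling is just
# the dual-card FPS divided by the single-card baseline.
def scaling_pct(single_fps, dual_fps):
    return dual_fps / single_fps * 100

# Baseline and CF run on the same platform: ordinary sub-2x scaling.
same_platform = scaling_pct(50.0, 90.0)    # 180.0%

# Baseline taken on a slower board, CF on a faster one: the exact
# same GPUs now appear to scale past 200% ("more than 100% per card").
cross_platform = scaling_pct(42.0, 90.0)   # ~214.3%

print(same_platform, cross_platform)
```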

In BioShock, in the multi-GPU section, the SLI 9800GTX+ setup seems to fall down on the job. In all other benches this SLI setup beats the GTX 280 easily; here it fails miserably, while even the SLI 8800GT beats the GTX 280. Methinks something's wrong here.

Newegg's got them for $309.99. I'm gonna run two 4870s in CF. I planned on using a P45 board, but I am wondering if the P45's x8 per card will bottleneck the bandwidth and if I should go with an X48 board instead. When I research CF, all I seem to find is that losing any bandwidth at x8 versus x16 is "debatable". What I'm thinking is that 8 lanes can handle 4GB/s, so if I look at the 4870's 3.6Gbps memory, then x8 should be able to handle the 4870 without any performance hits. Is that correct, or am I all wet?

I contacted ATI and they said I was correct. A P45 board only running x8 per card in CF will bottleneck the massive GDDR5 bandwidth of the 4870s. If you're gonna CF two 4870s, use an X38 or X48 board.
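As a sanity check on the numbers being thrown around here (my own figures, assuming PCIe 2.0 rates; not anything ATI confirmed): slot bandwidth scales with lane count, and a card's local GDDR5 bandwidth never crosses the PCIe bus at all.

```python
# Sketch, assuming PCIe 2.0: roughly 500 MB/s per lane per direction
# after 8b/10b encoding overhead (PCIe 1.x would be ~250 MB/s).
def pcie_gbs(lanes, mb_per_lane=500):
    """Per-direction PCIe link bandwidth in GB/s."""
    return lanes * mb_per_lane / 1000

x8 = pcie_gbs(8)    # 4.0 GB/s -- P45-style x8/x8 CrossFire
x16 = pcie_gbs(16)  # 8.0 GB/s -- X38/X48-style x16/x16

# The 4870's ~115 GB/s of GDDR5 bandwidth is card-local; the slot only
# carries commands, texture and geometry uploads, so the practical
# x8-vs-x16 gap is much smaller than raw memory bandwidth implies.
print(x8, x16)
```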

I created an account just to say how awesome this article was. It was really nice to see all the technical details laid out and compared to the competition. I was lucky to get in on that $150 HD 4850 price at Best Buy last week, and I am hoping future drivers will improve performance even more. Please keep up the good work on these articles!!!

and yes a lot of posters are reading way too much into it "you're biased waaa waaa boo hoo"

just get the facts from the article (that's what the charts and graphs are for) and then make your decision. if you can't do simple math and come to the conclusion yourself that the $300 card is a better buy than the $650 one, then you deserve to get ripped off.

a very important comparison is missing. for those who want to go in for a multi-GPU setup, the 260 SLI vs 4870 CF is a very important consideration since SLI scaling has always been better than CF, and the 260 scales very very well.

in that case, if nvidia responds by reducing the price on the 260, the 260 SLI could be the real winner here. but sadly there were no 260 SLI benches.

Although I'm somewhat dubious about dual card solutions, I keep looking at the benchmarks and then at the prices for a couple of 8800 GTs.

Perhaps, if the 4870 forces Nvidia to reduce their prices for the GTX 260 and the GTX 280, they will likewise bring down the price for the 9800 GX2. This is already the fastest single card solution, and it sells for less than the GTX 280. If this card starts selling for under $400 (maybe around $350), will this become Nvidia's best answer to the 4870?

Given the performance and the prices for the 4870 and the 9800 GX2 will Nvidia be able to price the GTX 280 competitively, or will it simply be vanity product - ridiculously priced and produced only in very small numbers?

It should be interesting to see where the prices for video cards end up over the course of the next few weeks.

It would be nice to have a video card, where it doesn't matter how weak the current-gen processor is (say the lowliest celeron available), the card can still output 1080p HDTV without dropping any frames.

Regarding the SLI scaling in The Witcher:
The GTX 280 SLI setup may be running into a bottleneck or driver issues, rather than seeing inherent scaling issues. Consider: the 9800 GTX+ SLI setup scales from 22.9 to 44.5, so the scaling isn't an inherent SLI problem. Though it may point to scaling issues specific to the GTX 280, it is more likely that the problem lies elsewhere. I do, however, agree with your general statement that when CF is working properly, it tends to scale better. In my systems, it seems to require less CPU overhead.

Again, try faster CPUs to verify whether you are game limited or if there is a different bottleneck. The Witcher has a lot of stuff going on graphically that might limit frame rates to 70-75 FPS without a 4GHz Core 2 Duo/Quad chip.

It looks like there seems to be a lot of this going on in the high-end, with GT200, multi-GPU and even RV770 chips hitting FPS caps. In some titles, are you guys using Vsync? I saw Assassin's Creed was frame capped, is there a way to remove the cap like there is with UE3.0 games? It just seems like a lot of the results are very flat as you move across resolutions, even at higher resolutions like 16x10 and 19x12.

Another thing I noticed was that multi-GPU seems to avoid some of this frame capping but the single-GPUs all still hit a wall around the same FPS.

Anyways, 4870 looks to be a great part, wondering if there will be a 1GB variant and if it will have any impact on performance.

the only test i know where the multi-gpu cards get past a frame limit is oblivion.

we always run with vsync disabled in games.

we tend not to try forcing it off in the driver, as interestingly that decreases performance in situations where it isn't needed.

we do force it off where we can, but assassin's creed is limiting the frame rate even in the absence of vsync.

not sure about higher memory variants ... gddr5 is still pretty new, and density might not be high enough to hit that. The 4870 does have 16 memory chips on it for its 256-bit memory bus, so space might be an issue too ...

No info from Anandtech on heat or noise. The info on the 4870 is most needed as most reviews indicate the 4850 with the single slot design/cooler runs very hot. Does the two slot design pay off in better cooling, is it quiet?

a quick, not really well controlled test shows the 4850 and 4870 to be on par in terms of heat ... but i can't really go more into it right now.

the thing is quiet under normal operation but it spins up to a fairly decent level at about 84 degrees. at full speed (which can be heard when the system powers up or under ungodly load and ambient heat conditions) it sounds insanely loud.

...as more and more people are hooking up their graphics cards to big HDTVs instead of wasting time with little monitors, i keep hoping to find out whether the 9800GX2/4800 lines have proper 1080p scaling/syncing with TVs. for example, the 8800 line from nvidia seems to butcher 1080p with TVs.

interesting, what cards have you worked with? i have the 8800gts512 right now and have the same problem as with the 7900gtx previously. when i select 1080p for the resolution (which the drivers recognize the tv being capable of as it lists it as the native resolution) i get a washed out messy result where the contrast/brightness is completely maxed (sliders do little to help) as well as the whole overscan thing that forces me to shrink the displayed image down to fit the actual tv (with the nvidia driver utility). 1600x900 can usually be tolerable in XP (not in vista for some reason) and 1080p is just downright painful.

i suppose it could be my DVI-to-HDMI cable? it's a short run, but who knows... i just remember reading a bit on the nvidia forums that this is a known issue with the 8800 line, so i was curious as to how the 9800 line or even the 4800 line handles it.

but as the previous guy mentioned, ATI does tend to do the TV stuff much better than nvidia ever did... maybe 4850 crossfire will be in my rig soon... unless i hear more about the 4870x2 soon...

getting exposure for AMD's own GPGPU solutions and tools is going to be tough, especially in light of Tesla and the momentum NVIDIA is building in the higher performance areas.

they've just got to keep at it.

but i think their best hope is in Apple right now with OpenCL (as has been mentioned above) ...

certainly AMD need to keep pushing their GPU compute solutions, and trying to get people to build real apps that they can point to (like folding) and say "hey look we do this well too" ...

but in the long term i think NVIDIA's got the better marketing there (both to consumers and developers) and it's not likely going to be until a single compute language emerges as the dominant one that we see level competition.

AMD is going to continue to use the open alternative, OpenCL.

In a relatively fledgling programming environment, it makes all the sense in the world for developers to use the open-standard option, as compatibility and interoperability can be assured, unlike in older environments like graphics APIs.

What we need is a compute language that isn't "owned" by a specific hardware maker. The problem is that NVIDIA has the power to redefine the CUDA language as it moves forward to better fit their architecture. Whether they would do this or not is irrelevant in light of the fact that it makes no sense for a competitor to adopt the solution if the possibility exists.

If NVIDIA wants to advance the industry, eventually they'll try and get CUDA ANSI / ISO certified or try to form an industry working group to refine and standardize it. While they have the exposure and power in CUDA and Tesla they won't really be interested in doing this (at least that's our prediction).

Apple is starting from a standards-centric view, and I hope they will help build a heterogeneous computing language that combines the high points of all the different solutions out there now into something that's easy to develop for and that can generate code to run well on all architectures.

I would really like to know what kind of performance these cards could get in an MMO (and hopefully compare them to some cheaper cards). Games I'm interested in are some of the newer titles like Age of Conan (I hear its graphics are great and a workout for even an 8800 Ultra) and EVE Online (their new graphics engine works cards pretty hard too).

MMO graphics usually get pretty intensive, with some 200+ characters flying around shooting fireballs everywhere, missiles sailing through the air, and hundreds of monsters as far as the eye can see. It can get pretty demanding on a gaming computer, just as much as (if not more than) a hit new title.

For example, on my current rig I can get around 50FPS steady at 1440x900, but in EVE Online I get 35 at the most at peaceful times and 20 or even 15 in a large fight with FEW graphics options selected.

Not only do these cards rock, but I wouldn't be surprised if AMD has an ace up its sleeve with the 4870x2... with that crossfire interconnect directly connected to the data hub that you showed on the chart. That and the fact that they have been looking forward to this crossfire strategy of attacking the high end for quite some time so they might have some tricky driver stuff coming with it.

I have been disappointed with the heat and power consumption of these cards. But:
1) Someone said PowerPlay is getting a driver tweak, and I can always clock them lower in 2D than 500/1000 (which is insane for 2D)
2) That hardware site someone linked earlier showed a more than 50% reduction in temperatures with an aftermarket cooler! That's insane!!

And finally, if I can get 1 and 2 fixed... I want to know how well these babies overclock. If I can get a 4850 running like a 4870 or better... yum. And in that case, how high will a 4870 OC? And I want to know this with a non-stock cooler, because apparently the stock ones suck. If with a non-stock cooler the 4850 clocks up to 4870 level, but the 4870 clocks way up too... I'm gonna have to grab a 4870.

Impressive review, Thanks :)
A few glitches:
It says "Power Consumption, Heat and Noise", but the graphs only show power consumption.
On page 17 (The Witcher), in the second paragraph, it says 390X2 instead of 3870.

In the article it says the GT200 doesn't need to do ILP. It only has 10 threads, and each of those threads needs ILP for each of the SPs. The problem with AMD's approach is that each SP has 5 units and is aimed directly at processing x,y,z,w matrix-style operations. Doing purely scalar operations on AMD's SPs would use only 1 out of the 5 units. So, if you want to get the most out of AMD's shaders, you should be doing vector calculations.

a single thread doesn't run width wise across all execution units. instead different threads execute the exact same single scalar op on their own unique bit of data (there is only one program counter per SM for a context). this is all TLP (thread level parallelism) and not ILP.
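A toy model of the utilization point being argued above (my own simplification, not real shader code): RV770's SPs are 5-wide, so the fraction of peak throughput you get is roughly the number of independent ops packed per instruction divided by five.

```python
# Simplified model: a 5-wide VLIW unit can issue up to 5 ops per clock.
# Purely scalar code packs 1 op; an xyzw vector op packs 4; a vector
# op plus an extra scalar (e.g. lighting math) can fill all 5 slots.
VLIW_WIDTH = 5

def utilization(packed_ops):
    """Fraction of peak throughput for a given packing density."""
    return packed_ops / VLIW_WIDTH

print(utilization(1))  # scalar-only stream: 0.2 of peak
print(utilization(4))  # xyzw vector ops:    0.8 of peak
print(utilization(5))  # fully packed:       1.0 of peak
```

The NVIDIA side, by contrast, runs one scalar op per thread across many threads, so its utilization depends on thread count rather than per-instruction packing.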

The site above mounted an aftermarket cooler on it and got awesome results. Either the Thermalright HR-03 GT is just that great a GPU cooler, or the standard heatsink/fan on the 4870 is just that horrible. Going from 82C to 43C at load and 55C to 33C at idle, just from an aftermarket cooler, is crazy! I was hoping to see some overclocking scores after they mounted the Thermalright on it, but nope :(

Our 8800GT went from 81 deg. C to 38 deg. C at load, 52 to 32 at idle. That's also with the quietest fan on the market at low speed. And FWIW, I played through all of The Witcher (about 60 hours) with the 8800GT passively cooled in a case with only 1 120mm fan.

I would just like to point out that the 4870 falls behind the 3870 X2 in Oblivion while in every other game it runs circles around it. To me it appears to be a driver problem with Oblivion rather than an indication of the hardware not doing well there. Unless of course the answer lies in the ring bus of the R680?

Also, what about at least two GTX 280 cards and their numbers? I noticed that you did have them in SLI, because the power consumption comparisons included them, but you held back the performance numbers...

Let's see the top four cards from ATI and Nvidia compete in dual GPU (no pun intended) on an X48 with DDR3-1600 and an FSB of 400x10!

That would be really nice for the people who have performance systems but may still be rocking a pair of EVGA 8800 Ultras, because they're waiting for real numbers and performance to come out... and they're still paying off their systems lol... :]

Heat is a big issue with these 4800 cards and their reference coolers, so it would be good to see it covered in detail. My 7800 GTX used to artifact and cause crashes when it hit 79 degrees, before I replaced it with an aftermarket cooler. Apparently the 4870 hits well over 90 degrees at load, and the 4850 isn't much better. Decent aftermarket coolers (HR-03 GT, DuOrb) aren't cheap... and if that's what it takes to prevent heat problems on these cards, some people might consider buying a slower card (like a 9800 GTX+) just because it has better cooling.

Anand, you guys should do a meltdown test... pit the 9800 GTX+ against the 4850, and the 4870 against the GTX 260, all with reference coolers, in a standard air-cooled system at a typical ambient temp. Forget timedemos/benchmarks... play an intensive game like Crysis for an hour or two, and see if you encounter glitches and crashes. If the 4800 cards can somehow remain artifact/crash free at those high temps, then I'd more seriously consider buying one. Heat damage over time may also be a concern, but is hard to test for.

Sure, DAAMIT's partners will eventually put non-reference coolers on some cards, but history tells us that the majority of the market in the first few months will be stock-cooled cards, so this has got to be of concern to consumers... especially early adopters.

Did you know these chips can do up to 125C? 90C is so common for ATI cards; I haven't had one since 2005 that didn't blow my hair dry. Your NV card was just a bad chip, I suppose. Why do you think NV or ATI would spend a billion dollars on research, then let their product burn away due to some crappy cooling? They won't give you more cooling than you actually need. These same cards go to places like Abu Dhabi, where room temps easily hit 50C+.

You're just a dumb, pissed-off loser. There's a big difference between internal human temperature and its surroundings. In places like the Sahara, temperatures routinely hit 45C and max out at 55C. But does that mean people living there just die? No, they don't, because they drink a lot of water, which helps their bodies get rid of excess heat so as to keep their internals at normal temperature (37C). You didn't have this knowledge to share, so you decided to Google it instead and make a fool out of yourself. Here, let me break it down for you:

You said: "Keep in mind that was the internal temp of the guy"

Exactly, the guy was sick, and when you're sick your body temperature rises, in which case 46C is the limit of survival. I suggest you take biochemistry in college to learn more about the human body, which is another four years before you finish school.

I'm not talking about chips failing altogether... just stability issues, similar to what you experience from over-zealous overclocking. Lots of people have encountered artifacting/crashes with stock-cooled cards over the years. If these are just 'bad chips' that are experiencing stability issues at high temps, then there are a lot of them getting through quality control. Of course NV and ATI do enough to make most people happy... but many of us have good reason to be nervous about temperature. I think they can and should do better. Dual slot exhaust coolers should be mandatory for the enthusiast/performance cards, with full fan control capability. Often it's up to the partners to get that right, and often it doesn't happen for at least a couple of months.

I think it's more profitable for board partners to just roll out a stock card rather than go through the trouble of investing time/money into performance cooling. What I've seen thus far, and it's quite apparent, is that newer companies tend to go with exotic cooling to get themselves heard. Once they're in the game, it's back to stock cooling. For example, Palit and ECS came up with nice coolers for their 9600s. Remember Leadtek from past years? They don't even do custom coolers any more. ASUS, Powercolor, Gigabyte, Sapphire etc. just find it easier to throw in a 3rd-party cooler from ZM, TT, or TR and call it a day.

Nice cards and a nice article. But I would like to point out that there are some mistakes in the article, nothing fatal though: not mentioning the 4870 in the list of cards, writing 280 instead of 260, and click-to-enlarge not working for some of the figures.

AMD almost has a perfect card, but the fact that the 4870 idles at 46.1 watts more than the 260 means the card will heat up people's rooms. At load, the difference of 16.1 watts more for the 4870 is forgivable.

If it's possible to overclock a card using software (without going into the BIOS screen), then why isn't it possible to underclock a card using software when the card's full potential isn't being used? I'd really be interested in knowing the answer; or maybe someone just hasn't asked the question?

I hardly care about Crysis; it's more a matter of whether it will run StarCraft II with 600 units on the map without overheating. Why doesn't AnandTech also test how hot the 4870 runs? Although the 4850 numbers aren't pretty at all, the 4870 has a dual-slot cooler and might give better numbers, right? I only want to know because, like a lot of readers, I have doubts as to whether a card like the 4850 can run super hot and not die within 1+ years of hardcore gaming.

Yes, I noticed it used quite a bit at idle as well, but its load numbers were lower. And as the other guy said, they're probably still finalizing the drivers for the new cards. I'd expect both performance and idle power consumption to improve in the next month or two.

If a $200 card can play all your games at 30+ FPS, does a $600 card even make sense, knowing it'll do no better to your eyes? I see quite a few NV-biased elements in your review this time around, and what's all that about the biggest die TSMC's ever produced? GTX's die may be huge, but compared to AMD's, it's only half as efficient. Your review title, I think, was a bit harsh toward AMD. By limiting AMD's victory to a price point of $299, you're essentially telling consumers that NV's GTX 2xx series is actually worth the money, which is terribly biased consumer advice in my opinion. From a $600 GX2 to a $650 GTX 280, Nvidia has actually gone backwards. You know when we talk about AMD's financial struggle, and that the company might go bust in the next few years... part of the reason that may happen is because media fanatics try to keep things on an even keel, and in doing so they completely forget about what the consumers actually want. No offence to AT, but I've been in media myself, and I can tell when even professionals sound biased.

You're putting words into the reviewer(s) mouth(s) and you know it. I am pretty sure most readers know that bigger isn't better in the computing world; anandtech never said big was good, they are simply pointing out the difference, duh. YOU need to keep in mind that nVidia hasn't done a die shrink yet with the GTX 2XX...

I also did not read anything in the review that said it was worth it (or "good") to pay $600 on a GPU, did you? Nope. Thought so. Quit trying to fight the world and life might be different for you.

I'm grateful that both companies make solid cards that are GPGPU-capable and affordable, and that we have sites like anandtech to break down the numbers for us.

Are you speaking on behalf of the reviewers? You've obviously misunderstood the whole point I was trying to make. When you say in your other post that AT is a review site and not a product promoter, I feel terribly sorry for you, because review sites are THE best product promoters around, including AT, and Derek pointed out earlier that AT is too influential for companies to ignore. Well, if that is truly the case, why not type in block letters how NV is trying to rip us off, for consumers' sake? Maybe just for once do it; it'll definitely teach Nvidia a lesson.

I completely agree. Anand, the GTX 260/280 are a complete waste of money. You are not providing adequate conclusions. Your data speaks for itself. I know you have to be "friendly" in your conclusions so that you don't arouse the ire of nVidia but the launch of the 260/280 is on the order of the FX series.

I mean, you can barely test the cards in SLI mode due to the huge power constraints, and the price is ABSOLUTELY ridiculous. $1300 for SLI GTX 280. $1300!!!! You can get FOUR 4870 cards for less than this. FOUR OF THEM!!!! You should be screaming about how poor a value the GTX 280/260 cards are at these performance numbers and this price point.

The 4870 beats the GTX 260 in all but one benchmark at $100 less. Not to mention the 4870 consumes less power than the GTX 280. Hell, the 4870 even beats the GTX 280 in some benchmarks. For $350 more, there shouldn't even be ONE game that the 4870 is better at than the GTX 280. Not one, at more than double the price.
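For what it's worth, the value math being gestured at here can be written out explicitly. A minimal sketch, assuming the launch prices quoted in this thread; the fps figures below are placeholders, not numbers taken from the review:

```python
# Rough frames-per-dollar sketch using the prices quoted in this
# thread ($650 GTX 280 vs. $300 HD 4870).  The fps arguments are
# placeholders -- substitute real numbers from the review's charts.

def fps_per_dollar(avg_fps, price_usd):
    """Frames per second bought per dollar spent."""
    return avg_fps / price_usd

# Even granting the GTX 280 a hypothetical 20% lead in some game,
# the 4870 would still come out well ahead on value:
value_4870 = fps_per_dollar(60, 300)        # 0.20 fps/$
value_280  = fps_per_dollar(60 * 1.2, 650)  # ~0.11 fps/$
```

The point of the sketch: the GTX 280 would need to be more than twice as fast at the same settings just to break even on this metric.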

I'm not quite sure what you are trying to convey in this article, but at least the readers at AnandTech are smart enough to read the graphs for themselves. Given what's written on the conclusion page (3/4 of it GPGPU jargon that is totally unnecessary), could you please leave the page blank instead?

I mean come on. Seriously! $1300 compared to $600 with much more performance coming from the 4870 SLI. COME ON!! Now I'm too angry to go to bed. :( Reply

Oh and one other thing. I thought AnandTech was a review site for the consumer. How can you not warn consumers against spending $650, much less $1300, on a piece of hardware that isn't much faster (and in some cases not faster at all) than another piece of hardware priced at $300, or $600 in SLI? It's a borderline scam.

When you can't show SLI numbers because you can't even find a power supply that can provide the power, at least an ounce of criticism should be noted to try and stop someone from wasting all that money.

Don't you think consumers should be getting better advice than this? $1300 for less performance. I feel so sad now. Time to go to sleep. Reply

It reminds me of that NV scam from years back... I'm forgetting a good part of it, but apparently NV and "some company" recruited some forum/blog gurus to promote their BS, including a guy on the AT forums who was eventually gotten rid of due to his extremely biased posts. If AT can do biased reviews, I can pretty much assure you the rest of the reviewers out there are nothing more than misinformed, over-arrogant media puppets. To those who disagree w/ me or the poster above, let me ask you this... if you were sent $600 hardware every other week, or in AT's case every other day (GTX 280s from NV board partners), would you rather delightfully, and rightfully, piss NV off, or shut your big mouth to keep the hardware and cash flowing in? Reply

In our GT200 review we were very hard on NVIDIA for providing less performance than a cheaper high end part, and this time around we pointed out the fact that the 4870 actually leads the GTX 260 at 3/4 of the price.

We have no qualms about saying anything warranted about any part no matter who makes it. There's no need to pull punches, as what we really care about are the readers and the technology. NVIDIA really can't bring anything compelling to the table in terms of price / performance or value right now. I think we did a good job of pointing that out.

We have mixed feelings about CrossFire, as it doesn't always scale well and isn't as flexible as SLI -- hopefully this will change with R700 when it hits, but for now there are still limitations. When CrossFire does work, it does really well, and I hope AMD work this out.

NVIDIA absolutely need to readjust the pricing of most of their line up in order to compete. If they don't then AMD's hardware will continue to get our recommendation.

We are here because we love understanding hardware and we love talking about the hardware. Our interest is in reality and the truth of things. Sometimes we can get overly excited about some technology (just like any enthusiast can), but our recommendations always come down to value and what our readers can get from their hardware today.

I know I can speak for Anand when I say this (cause he actually did it before his site grew into what it is today) -- we would be doing this even if we weren't being paid for it. Understanding and teaching about hardware is our passion and we put our heart and soul into it.

there is no amount of money that could buy a review from us. no hardware vendor is off limits.

in the past companies have tried to stop sending us hardware because they didn't like what we said. we just go out and buy it ourselves. but that's not likely to be an issue at this point.

the size and reach of AnandTech today is such that no matter how much we piss off anyone (Intel, AMD, NVIDIA, or any of the OEMs), they can't afford to ignore us and they can't afford not to send us hardware. they are the ones who want and need us to review their products, whether we say great or horrible things about them.

beyond that, i'm 100% sure nvidia is pissed off with this review. it is glowingly in favor of the 4870 and ... like i said ... it really shocks me that anyone would think otherwise.

we don't favor level playing fields or being nice to companies for no reason. we'll recommend the parts that best fit a need at a price if it makes sense. Right now that's 4870 if you want to spend between $300 and $600 (for 2).

While it's really really not worth the money, GTX 280 SLI is the fastest thing out there and some people do want to light their money on fire. Whatever.

i'm sorry you guys feel the way you do. maybe after a good night's sleep you'll come back refreshed and see the article in a new light ... Reply

They do recommend hardware for different price points and such, so they do market in a way. Have you seen Anand's picks links? That is promoting products, and he does it through referral links as well, getting paid to do so. :)

Anyways, mentioning something as a better buy up to a certain price point would be helpful to someone who is not really in the know.

You've got excellent writing skills buddy, and I can't help thinking you're actually better at reviews than your m8 (no offence Anand), but what I truly meant by my post above is what you summed up rather well in your concluding lines, quote: "You can either look at it as AMD giving you a bargain or NVIDIA charging too much, either way it's healthy competition in the graphics industry once again (after far too long of a hiatus)"

Either way? Why should anyone look the other way? NV is clearly shitting all over the place, and you can tell that from the email they sent you (or Anand) a couple of days back. So they ripped us off for 6 months, and now suddenly decide the 9800GTX is worth $200?

GTX+ vs 4850?
Does that mean the GTX260 is now completely irrelevant? In fact, the 2xx series is utterly pointless no matter how you look at it.

To bash on AMD a bit, the 4870 is obviously priced high. For $100 extra, all you get is an OC'ed 4850 with GDDR5. I don't think anyone here cares about GDDR5; all that matters is performance, and the extra bucks are plainly not worth it. From a consumer's perspective, the 4850 is the best buy, the 4870 isn't. Reply

Yep... pointless unless you want the fastest card (280), then it has a point.

Pointless to YOU possibly because you're focusing on perf per dollar. Good for you. Nice of you to presume to force that view on the world.

Absolute performance? The GTX 280 sits near the top of every benchmark there, bud. Both as a single card and in SLI, where, last I checked, it gives up maybe TWO instances to 4870 CF (Bioshock and CoD), and in both cases framerates are north of 100 at 2560. The 4870 in CF, on the other hand, falls WELL short of playable at that res in most other benches.

High res + high perf = 200 series. Sorry if that's offensive to the egos of those who can't afford the cards.

There's a lot in life we can and can't afford. That should have ZERO impact on ABSOLUTE PERFORMANCE discussions. Reply

AMD/ATI has to make some money somewhere. And regardless, at $300, the 4870 is a hell of a deal compared to the competition. Yes the 4850 is probably the best value. But the 4870 is still right behind it if you want a decent amount of extra performance at a great price.

Nvidia may have the fastest thing out there. But only the richest, most brain dead idiots who have not a care in the world about how they spend their (or their parents) money will buy it with cards like the 4850 and 4870 available.

And it's pretty sad when your new $650 high-end card is routinely beaten by two of your last-generation cards (8800GT) that you can get for $150 each or less. It wouldn't be as big a deal if the new card were $300-350, but at $650 it should be stomping on them.

I think Nvidia is in for a reality check about what people want. If their new chips are only going to cater to the top 1% of the market, they're going to find themselves in trouble quickly, especially with all the issues their chipsets have for 6 months after release. And their shoddy drivers. This past Friday I decided to set up some profiles so that when I started Age of Conan it would apply an overclock to my GPU and unapply it after I exited; it ended up locking up my PC continuously. I had to restore my OS from a backup disc because not even completely uninstalling and reinstalling my nVidia chipset and video drivers fixed it. And in my anger, I didn't back up my "My Documents" folder, so I lost 5 years' worth of stuff, largely pictures. Reply

"Nvidia may have the fastest thing out there. But only the richest, most brain dead idiots who have not a care in the world about how they spend their (or their parents) money will buy it with cards like the 4850 and 4870 available."

You just summed it up in that first sentence there bud. NVidia has the fastest thing out there. The rest is just opinion, bitterness and noise.

I notice that the tone of the "enthusiast" community seems to be laser focused on cost now. This is like car discussions. People want to pretend to be "Xtreme" but what they really want to see is validation of whatever it is THEY can afford.

Have fun with the 4870 by all means; it's a great card. But the GTX 280 IS faster. Did NVidia price it too high? Don't know and don't care.

These are PERFORMANCE forums, for all the people that don't get that. Maybe even the editors need to be reminded.

If I want to see an obsession with "bang for the buck" I'll go to Consumer Reports.

I mean seriously. How much of a loser are you when you're taking a shot like "your PARENTS money"? LOL...

Personally, I treat the PC hobby as an expensive distraction. I've been a technology pro for 15 years now and this is my vice. As an adult earning my own money, I can decide how I spend it, and the difference between $500 and a grand isn't a big deal.

The rhetoric on forums is really funny. People throw the "kid/parents" insult around a lot, but I think it's more likely that the people who take prices beyond what they can afford as some kind of personal insult are the kids here. Reply

To be honest, while I was reading the article I felt as if the article seemed a little ATI biased, but I guess that goes to show you that two different people can get drastically different opinions from the same article.

The real reason I’m posting this is I want to thank you guys for writing some of the best articles that Anandtech has ever written. I read every page and enjoyed the whole thing. Keep up the great work guys and I look forward to reading more (especially about Nehalem and anything relating to AMD’s future architecture).

Also, is GDDR5 coming to the 4850 ever? If so, maybe it would be a drastically better buy.

You guys are reading into things WAY too much. Readers understand that just because something is a top performer (right now), doesn't mean that is the appropriate solution for them. Do you honestly think readers are retards and are going to plunk down $1300 for an SLI setup?! Let's leave the uber-rich out of this, get real.

So a reader reads the reviews, goes to a shopping site and puts two of these cards in his basket, realizes "woah, hey this is $1300, no way. OK what are my other choices?"

This review doesn't tell people what to do. It's factual. You (the AMD fanbois) are the ones being biased. Reply

I think you are right. nVidia had a little too long by themselves, setting prices as they saw fit. Now that AMD/ATI are harvesting the fruits of the merger, overcoming the TLB bug, financial matters (?), etc., it seems the HD 48xx series is right where they needed it.

This is bound to be a success for them, with so much (tamable) raw power for the price asked. Reply

Page 21 is labeled "Power Consumption, Heat and Noise" in the drop down page box, but it only lists power consumption figures. What about the heat and noise? Is it loud, quiet? What did the temperatures measure at idle and load? Reply

"NVIDIA's architecture prefers tons of simple threads (one thread per SP) while AMD's architecture wants instruction heavy threads (since it can work on five instructions from a single thread at once). "
Yeah, they both have 10 clusters, but nV's clusters have 24 SPs each while AMD's have 80. The performance will probably be similar though, because both thread arbiters run at about the same speed and nV's SPs run at about double the clock, effectively making 48 SPs per cluster (and in some special cases 96). Reply
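That back-of-envelope comparison can be written out as arithmetic. A minimal sketch, assuming the launch specs of the GTX 280 and HD 4870; this is theoretical peak throughput only, which says nothing about how well each architecture keeps its units fed:

```python
# Theoretical peak shader throughput from cluster counts and clocks.
# GT200: 10 clusters x 24 SPs at ~1296 MHz shader clock.
# RV770: 10 SIMD cores x 80 SPs at 750 MHz core clock.
# Peak numbers only -- real performance depends on utilization.

def peak_gflops(sp_count, shader_clock_mhz, flops_per_sp_per_clock):
    """Single-precision theoretical peak in GFLOPS."""
    return sp_count * shader_clock_mhz * flops_per_sp_per_clock / 1000.0

# GTX 280, counting the dual-issue MAD + MUL case (3 flops/clock):
gtx280 = peak_gflops(10 * 24, 1296, 3)   # ~933 GFLOPS

# HD 4870, one MAD per SP per clock (2 flops/clock):
hd4870 = peak_gflops(10 * 80, 750, 2)    # 1200 GFLOPS
```

Which is roughly why the raw SP-count deficit described above mostly washes out once clock speeds are taken into account.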

The power data was simply taken from the GTX 280 review, we just added to the list.

As for the GTX 280 SLI numbers, we didn't include them as it's mostly out of the price range of the Radeon HD 4870 ($1300 vs. $600 for two 4870s). We can always go back and redo the graphs to include them if you guys would like, but in the interim I would suggest looking at the GTX review to get comparison numbers.

I have a 9800 GX2 in my primary gaming rig, but I've been debating on what card to drop into my Photoshop/3DS Max art rig. I've been waffling over it for some time, and was going to settle on an 8800GT... but after seeing this, my mind's set on the 4850. It definitely appears to offer more than enough power to handle my art apps, and to allow me to use my second PC as a gaming rig if need be... all without breaking the bank.

Very much so. Actually, from where I sit, I think all AMD really needs to do is get a Socket AM2+ CPU out there that can compete with Intel at least similarly to how this card competes with nVidia, and they'd have one hell of a total platform solution right now. As for upgrading my vid card... I just finished upgrading to the Phenom X4 and Radeon 3870, so I'll be sticking with that for a while. Quite honestly, that platform can pretty much run anything out there as it is, so I'm feeling pretty confident my current setup will last a couple of years at least. Reply

Is there any reason the first pages of benchmarks have SLI setups included in the charts, but you wait until the end of the article to add the CF? I'd think it would make the most sense to either include both from the start or hold both until the end. Reply

I really like your idea of "keeping things simple early on" by only including configurations that us mere mortals can afford at first (say, all single-GPU configs plus "reasonable" multi-GPU configs less than ~$400 total), and then including numbers for ultra high-end multi-GPU configs at the end (mainly just for completeness and also for us to drool over--I doubt too many people can afford more than one $650 card!).

Anyway, great job on the review as always. I think you and Derek should get some well-deserved rest now!

Then include SLI for the 280... let the consumer decide what the value is or isn't; we all value different things. Provide the costs and the performance (SLI), please.

It makes this very incomplete to not have included SLI for the 280/260. I for one will more than likely get 2 x GTX 280s; not all of us worry about a few $$. But if the CF 4870s are that good, then I want to know, as I don't care about the brand and will go with the best performance.

Can you please include them soon so we can make our own judgements on what's good or not? Reply