Does the 5700 series have the same improved shader-based adaptive antialiasing as the 5800 series?
It would be nice to see an antialiasing graph with different resolutions and antialiasing types for each card in reviews.

I may be in the minority, but I've already ordered a 5750. For a SOHO box used for only occasional gaming, it was the most future-proofed option (DX11) that also has low enough idle draw that it will actually save me enough money over the life of the card to justify any price difference with a 48xx card. Would I have loved 10% more performance? Sure, but this isn't a bad blend of efficiency and longevity.

What up, AT?
I've been looking at your recent AMD rants, and it's getting tiresome. Are they paying you the big bucks these days? When you only compare AMD cards against AMD cards, you are doing your site a disservice. When you show CF but no SLI, you are showing me a new AT.

As we noted in the article, the CF configuration is mostly for academic reasons. I don't seriously expect anyone to pick it over a single card.

Anyhow, what would you like to see? I have the SLI data for the 275 and the 285, but since we've already established that the 260C216 is faster than the 5770, it won't really tell you anything useful.

"NVIDIA would need to shave the price down to justify its purchase once more (something they have not done on the GTX series in response to the 5870 and 5850)."

------------------

I'd like to comment on this for just a moment. Where I live we haven't seen much stock of the new DX11 cards yet. However, suddenly there's a slew of highly priced 295s and other top-end NVIDIA products that these stores were not stocking before.

My bet is that people walking in to make a purchase find out that they can't get that coveted new DX11 card, so they opt for one of those instead. So in a sense NVIDIA is riding on the coattails of ATI's popular new line that just doesn't have the availability. They haven't had to lower prices yet because they may be benefiting from the lack of stocked cards.

No, it doesn't make sense :)
Why would you spend $300-400 on something that you didn't want in the first place? Why not just keep your money until you can actually buy what you want? We are not talking about 10 bucks; it is a much bigger chunk...

I respect the MONEY :). Wasting money on something that doesn't quite fit my intention is not my way. But people are different; I guess not everybody would do the same... My point is, better to pay and get exactly what you want and need; otherwise you'll regret it later. Now, if you have that much money to waste, buy anything :) but that's a different story.

The 5770 with that anemic 128-bit bus is worth less than $100, in my opinion. Above $100 it is just wasting money for nothing :)

I need a new card and I want it to be DX11, but the performance isn't there. I want something about 10 percent faster than a 1GB 4870 for about 150 bucks, and something about 10 percent faster than a 4890 for less than 200 bucks, with DX11. Once I see a card like that, from AMD or NVIDIA, I'll buy it. My stupid X1650 Pro is REALLY limping along in modern games, and I'm starting to get sick of low resolutions and minimum settings.

The HD 5850 was "wow". The HD 5770 is a little "meh". It's great to have something that sits between the HD 4850 and HD 4870 in performance with such low power requirements and noise, but that price has to come down.

I view the 5770 as a natural successor to the 4770/4830/4850 (so I wouldn't expect a 5830 to appear, for example), as opposed to a replacement for the 4870. By now I'd expect 40nm yields to be much better than a few months back, when TSMC had issues producing the RV740 variants, so hopefully any defective dies are only minimally so and ATI can put them in the 5750 cards. Makes me wonder about the lower-range cards due next year, though.

The Eyefinity ports are an enigma; however, they could make for a very nice business-class card, assuming anyone can afford those dongles.

If you are building an Eyefinity (EF) setup, you prolly don't have three monitors. You prolly have one or two. This means you will be buying at least one monitor. My advice: buy a DP monitor that matches the physical size & resolution of your existing monitors. That way you don't need to get an adapter.

With any luck they'll become plentiful in a short space of time, offering early adopters the chance to set up a decent EF, umm, setup.

If you think the typical EF setup will be two or three monitors, do you expect the full six-monitor glory with an X2 part? I'm still wondering if even the 5870 can handle three monitors and still offer smooth gaming performance. That said, despite their power, they're not going to be strictly gaming cards.

The fact that these cards consume little power is irrelevant when you have that great efficiency on the 5800... Also, including the Eyefinity gimmick here is a mistake; it only diminishes the value of that feature on the 5800. It should have been one card. HD 5770:
no Eyefinity, 800 SP, 750MHz, 512MB = $99 USD

Eyefinity (EF) will be in all 5xxx products for a multiplicity of useful reasons, many of which aren't apparent yet. There will be frequent roll-outs of new EF goodness. There will be many, many customers who will find EF very useful. Hopefully you will realize what EF can do for you and buy one of our products. We'd like for you to be a happy customer of ours.

Why no mention of the 4770? I know it's older and slower, but it's also 40nm like the 5750 and is the same price. It would be nice to see the difference between the two, as they are specced quite closely (640 SP @ 750MHz vs. 720 SP @ 700MHz, both 128-bit GDDR5).
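For what it's worth, the two cards are even closer on paper than those numbers suggest. A quick sketch of the theoretical peaks, using commonly listed reference clocks (the memory data rates below are my assumptions, not figures from the review):

```python
# Rough theoretical peaks for the HD 4770 vs. HD 5750.

def gflops(stream_cores, core_mhz):
    # Peak single-precision GFLOPS: 2 FLOPs (multiply-add) per core per clock.
    return stream_cores * 2 * core_mhz / 1000

def bandwidth_gbs(bus_bits, data_rate_gbps):
    # Memory bandwidth in GB/s for a given bus width and per-pin data rate.
    return bus_bits / 8 * data_rate_gbps

hd4770_flops = gflops(640, 750)        # 960 GFLOPS
hd5750_flops = gflops(720, 700)        # 1008 GFLOPS
hd4770_bw = bandwidth_gbs(128, 3.2)    # 51.2 GB/s (3.2 Gbps GDDR5, assumed)
hd5750_bw = bandwidth_gbs(128, 4.6)    # 73.6 GB/s (4.6 Gbps GDDR5, assumed)
```

On shader throughput they are within about 5% of each other, so any measured gap would mostly come down to memory speed and drivers.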

Yet another good review from AT; thanks, Ryan. However, it becomes clear that cards like the HD 5870 and HD 5770 aren't very good performers for their price. The HD 5850 and the HD 5750 512MB represent a more solid bang for the buck. Again, it's very impressive that AMD has been able to bring us so many next-gen DX11 cards when Windows 7 hasn't even launched yet, while their competitor is being super slow, only recently releasing a non-high-end part derived from the G200. That being said, from the point of view of performance alone, Cypress and Juniper are somewhat disappointing performers for their price, as well as for their specifications.

Hmm. I was expecting the 5770 to perform at 4890 levels, or at the very least only slightly slower while running cooler and drawing less power. This is quite disappointing. I was all ready to get one to replace my 4850 if the price was right. I wonder how well they can tweak the drivers for this thing.

I bought a 3870 for about $210 a year and a half ago. This card has double the performance, a lot more features, and is starting at $160. I figure by Christmas it will be down around $120. I run a small frag-box, so the lower heat and lower power make sense for me over a 4870, and if I ever wanted more power, I could run two of these in Crossfire and have all that I need. I am putting this card on my wish-list. Thanks again for a great review.

Great article, except for the flaw of not having a single memory OC data point. If, as the data shows, the 5770 is performing poorly due to inadequate memory bandwidth (seemingly the ONLY issue hampering performance when comparing its specs to the 4870's), it makes sense that a simple OC could shed some light on the issue. Please update the article with some numbers, as this card is mainstream enough that I would imagine overclockers *could* see it as a gem in disguise.

Once things calm down, we're going to do a 5800 series overclocking article. It's something that takes a while to put together, and there are major product launches every week right up through the end of this month.

Can you give us a hint, then, as to whether this memory has the potential to overclock well enough to overcome the bottleneck? I doubt it can completely remedy the situation, but if the memory overclock were enough to make up for the significantly narrower bus in most cases, I think this card would have a better reception.
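A quick sketch of the size of that gap, assuming the reference memory clocks (1200MHz GDDR5 on the 5770, 900MHz GDDR5 on the 4870; these clocks are my assumptions, not the article's). GDDR5 transfers 4 bits per pin per memory clock:

```python
def gddr5_bandwidth_gbs(bus_bits, mem_clock_mhz):
    # bus width in bits / 8 -> bytes per transfer; x4 transfers per clock.
    return bus_bits / 8 * mem_clock_mhz * 4 / 1000

hd5770 = gddr5_bandwidth_gbs(128, 1200)   # 76.8 GB/s at stock
hd4870 = gddr5_bandwidth_gbs(256, 900)    # 115.2 GB/s

# Memory clock the 5770 would need just to match the 4870's bandwidth:
needed_mhz = 1200 * hd4870 / hd5770       # 1800MHz, far beyond a realistic OC
```

So even a strong memory overclock can only close part of the gap; matching the 4870 outright would take a 50% memory OC.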

I'm sure there is a perfectly good explanation for this that I'm missing. You say there is 1 SIMD disabled in the 5750 vs. the 5770. Looking at the chart on the first page, there is a difference of 80 stream processors and 4 texture units between the 5750 and 5770, so this would indicate 4 SIMDs are disabled.

Wait, never mind. I see each SIMD consists of 4 texture units and 16 stream processors (SPs), and each SP contains 5 stream cores. I guess in the chart, when you say Stream Processors you really mean Stream Cores, right?
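For anyone else puzzling over this, the chart numbers do line up with a single disabled SIMD once you count in stream cores; a tiny sanity check:

```python
# Each Juniper SIMD: 16 stream processors of 5 cores each, plus 4 texture units.
CORES_PER_SIMD = 16 * 5   # 80 stream cores, the unit the spec chart counts
TUS_PER_SIMD = 4

hd5770_simds, hd5750_simds = 10, 9   # one SIMD disabled on the 5750

print(hd5770_simds * CORES_PER_SIMD, hd5750_simds * CORES_PER_SIMD)  # 800 720
print((hd5770_simds - hd5750_simds) * TUS_PER_SIMD)                  # 4
```

One SIMD accounts for exactly the 80-stream-processor and 4-texture-unit gap in the chart.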

Ryan - THANK YOU for including the 8800GT in the graphs: That is the card I (and many other potential buyers) will be upgrading from, moving to a DX11 card.

It's too bad this new card (5770) can't quite catch the GTX 260 C216, as that is its main NVIDIA competitor in performance and price. It uses just as much load wattage as a 260, but seems to offer around 80-90% of the performance. Perhaps if they hadn't cut the bus down to 128 bits, it would have squashed the GTX 260. But ATI has a habit of under-bussing its cards, and it continues to negatively impact high-resolution performance, no matter what certain reviewers might claim about the bus width not hurting performance. Time and again, testing shows potential for improvement from a wider bus.

Blah. ATI, you always come so close to getting me to purchase, but there's always something holding me back. Perhaps if/when this card drops to $139 (without rebates). But by then NVIDIA might have their answer out, and the GTX 260 would also drop in price or be replaced with a DX11 part, and then the 5770 again loses its appeal.

What in the world is going on with this game? The 8800 GT beats the 4850? No, sorry, I don't buy that. Something is wrong here. The 5770 beats everything? If that is the case, then this game should immediately be removed from the bench suite; games in the bench suite should help us understand the general performance characteristics of the hardware, and a game that returns such erratic results actually distorts that understanding.

"The 3870 beats it by 14W at the cost of a significant degree of performance, while the 8800GT is neck-and-neck with the 4770, again with a decent-sized performance gap."

You certainly meant 5770 there. But this brings me to a question: why isn't the 4770 included here? As an owner of that card, I'm very much interested in the performance/power/noise difference; just ditch one of the relatively irrelevant SLI or CF combos. I don't think many people care about comparing high-end multi-GPU setups with performance parts such as the 5770 and 5750, even if it's 57xx in CF.

It makes no sense (besides bad drivers) for the 5770 to lose to the 4850. The 5770 has more memory bandwidth (76.8 GB/s) than the 4850 (63.55 GB/s), due to the 4850 sticking with GDDR3, even with the 128-bit bus. The 5770 is also clocked 36% faster than the 4850 (850MHz vs. 625MHz).

Maybe the 4x64-bit memory controllers on the perimeter of the chip keep the data flowing better than 2x64-bit controllers with higher bandwidth.
I think they could have made it at least 192-bit (3x64-bit).

Remember that there is more to the card than just the ROPs/TUs/ALUs. If the other logic is intact, it could give the dual 5770s a net larger amount of cache, more resources for scheduling, rasterization, etc.

I'm again seeing many comments of "DX11 gives me nothing". Well, you buying it gives developers one more reason to develop for it. If you stick with DX10, it will take longer to move to DX11. Really. Until the majority of the market moves to a new feature set (and hopefully Windows 7 will help move us off DX9), developers will only use higher-end features as "special features".

You're right, though not in the way you think. Xbox programming is more like DX11 than DX9 or DX10, and the Xbox also has a tessellation unit (though simpler than in the DX11 parts), so moving to DX11 would make developers' lives easier.

What users don't get is the difference between the API and hardware capabilities. Even if developers limit themselves to DX9-level capabilities for console compatibility, developing with only DX10 or DX11 will be much easier than using both DX9 and DX10, and will result in faster and less buggy code (optimising for two very different APIs is hard).

As MadMan007 says, there won't be a large adoption rate from developers for DX11 until the NEXT generation of consoles ships (around 2012) supporting DX11... Win7 won't matter because game developers are still going to make games spanning DX9-DX11... Probably the very few games that come out as DX11-only are going to be some kind of tech demos & suck 4ss!

That may be due to some architectural improvements in the 5770's shaders. The drop in performance in other games may be due to the decreased memory bandwidth, which may not matter with regard to Far Cry 2.

Just wanted to say I like the conclusion; it's dead-on with its suggestions and advice.

I'm very surprised almost no one is talking about or bringing up the subject of DirectX. DX11 has a better chance to succeed, yet gets less attention. It's amazing how badly DX10 managed to sour consumers on the whole idea.

The DX10 rendering paths of games that were also DX9 (meaning all of them at the time, and even now) were also *slower* and provided little to no image-quality improvement. So even if it hadn't been Vista-only (and only morons keep on with the Vista FUD after SP1), there was no real benefit. DX11 looks to be different in all respects.

Quite strange that, with a die size of 166mm2 against 260mm2 (RV770) and 128-bit memory, it costs this much. And the 5750 has one SIMD disabled, which even increases the number of usable chips (though maybe it's disabled just for differentiation, or else the two cards would be exactly the same except for clocks).
Is the tessellation hardware with its fixed units exactly the same as in the 5800 series, or tuned down?

Unless you absolutely need to take advantage of the lower power requirements of the 40nm process (e.g. you pay a ton for power)...

According to your tests, the 5770 consumes a whopping 48W less idle power than the 4870, and other reviews have comparable results. If your computer is out of standby a modest 10 hours a day, that works out to 175 kWh per year. That's easily $15/year even for people with cheap electricity.

The funny thing is that I usually see people overstating the savings from power efficiency...
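The arithmetic above checks out; here is the same estimate spelled out (the 48W figure is from the review, while the electricity rate is an assumed "cheap" price):

```python
idle_savings_w = 48    # 5770 vs. 4870 idle draw difference, per the review
hours_per_day = 10     # time the machine is out of standby
usd_per_kwh = 0.086    # assumed cheap-electricity rate

kwh_per_year = idle_savings_w * hours_per_day * 365 / 1000
usd_per_year = kwh_per_year * usd_per_kwh
print(round(kwh_per_year), round(usd_per_year))   # 175 15
```

Scale `hours_per_day` and the rate to taste; the savings grow linearly with both.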

We're not talking about most people, we're talking about people who bother to get a 5770 instead of living with IGPs. Many people leave their computer on 24/7 to download torrents or fold or act as a file server (it's nice to access it from work) or whatever. I think 10 hours is a reasonable average for the target audience.

Even if you reduce it to 5 hours a day, though, that's still $8/year. I like to keep video cards for a long time (usu. 2 years or more), and even when I upgrade, the old one is usually handed down.

My point is that it's not something to ignore when comparing to the 4870. It was much less relevant for $300 cards with a 20-30W difference (4870 vs. GTX 260 at launch), but now it's a 50W difference on $150 cards.

I guess I'm rather surprised at the 5770 results being consistently lower than the 4870's as well, and would be interested in a bit more hypothesizing as to why exactly this is the case, when the stats on the cards suggest they should be at minimum roughly equivalent. Is this the sort of situation that might see large changes with updated versions of Catalyst?

The difference is obviously the memory bandwidth. It seems to me that ATI should have gone with a 192-bit bus; this change alone would have made the HD 57x0 a worthy successor to the HD 48x0 range, without any performance caveats, while still being significantly cheaper to manufacture (40nm vs. 55nm, 192-bit vs. 256-bit).

At this price point it looks like the 5770 and 5750 are priced to pad AMD's pockets, not to provide increased performance (not that that's a bad thing when in a war with Intel). With the smaller process, smaller chip size, and similar performance, each new part sold nets AMD a substantially higher profit. This is why AMD will likely kill off the older generation instead of dropping its price point.

Yes, I think that's where my mild disappointment comes from. Not that they aren't great cards for the launch MSRP; they just aren't great in light of street prices. Unlike with the HD 4800, or even arguably the HD 5800, AMD doesn't seem interested in shifting the price/performance curve with these cards. At best matching the current price/performance curve leaves me a bit cold.

That's been the trend from ATI lately with their mid-grade cards. The 5700 series is meant to offer roughly the same performance as the 4800 series for a cheaper price. The 4600 series last time was meant to match the 3800 series (the 4770 was quite an oddball, though). It's not a bad system, really, as it allows ATI to migrate their lineup with some consistency.

It might be that some of the cost does indeed come from the RAM, though.
Once GDDR5 chip prices drop some more, it will be easy for AMD to drop the prices on these cards, but that might (might) be what's limiting pricing options.

Or AMD just wants to try to get maximum profit from these cards.
But even so, when GDDR5 prices drop it will be easier to extract a profit at lower prices, so GDDR5 pricing will still be at least partly responsible.

Reading the charts, it becomes obvious that it is upgrade time: let's get 4850s, 4870s, and even 4850X2s/4870X2s in the next few weeks before these cards phase out; they are faster and a LOT cheaper than the 57xx series. As for high-end consumers, just wait for the 5870X2; now that is a card to roll eyes at, when and IF it launches.

So where is the double precision implemented? I didn't bother to look it up, but I imagine it's buried deep in the shaders. If so, why take it out? Is it just disabled, or not present at all? If not present, I guess I could see removing it for the sake of fewer transistors, but otherwise it seems like artificial market segmentation. On the other hand, hardcore compute people, where time = $$, won't have a problem getting a 5850 or better, or seeing what NV does.

DPFP (Double Precision Floating Point) is physically not in the Juniper GPU - it is not artificial segmentation. We had to choose between giving you a GPU that would be great for consumer HPC and games at a price you could afford, or something that cost notably more.

Their newer Stream SDK 2.0 series (currently in Beta 4) mentions that they now support OpenCL on the GPU, and that the Radeon HD 5870, 5850, 5770, and 5750 are supported. No mention of which can actually do double precision, though...

Still, considering the 5770 looks similar in spec to the 4870/4850, it may support it. (The major difference seems to be the Memory Bus Width.)

Come to think of it, what are the requirements to support double precision on a Radeon HD-series GPU?

As always from AnandTech, great review. However, I almost crapped my pants when I saw the price of a DisplayPort-to-DVI dongle: $100?? Hope that's not the average, and not just an inflated Apple price. =)

Nope, this won't work; the card has only two TMDS links, for one DVI plus one more DVI or HDMI. You can't use two DVIs + HDMI...

If you want to connect a third monitor, you have to use DisplayPort, and adapters won't work, since DP on this card doesn't support DVI pass-through (that would need a separate TMDS chip).

There are some devices that support DVI/HDMI pass-through over DisplayPort; I'm talking about Apple's latest Macs, where they dropped DVI/HDMI and replaced it with DP... those support DVI/HDMI adapters because they have their own TMDS chip, which is required for DVI/HDMI signals.

It has to be powered if you wish to run dual-link DVI. The single-link MonoPrice adapter will work fine for resolutions up to 1920x1200. But most people looking to run Eyefinity will probably want to go whole-hog with 2560x1600, given the large price tag already associated with such a setup.
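The 1920x1200 cutoff comes from single-link DVI's 165MHz pixel clock limit. A quick sketch, using approximate CVT reduced-blanking totals (the blanking figures are my assumptions, not from the article):

```python
SINGLE_LINK_LIMIT_MHZ = 165   # single-link TMDS pixel clock ceiling

def pixel_clock_mhz(h_total, v_total, refresh_hz=60):
    # Pixel clock = total pixels per frame (including blanking) x refresh rate.
    return h_total * v_total * refresh_hz / 1e6

wuxga = pixel_clock_mhz(2080, 1235)   # 1920x1200 -> ~154MHz, single link OK
wqxga = pixel_clock_mhz(2720, 1646)   # 2560x1600 -> ~269MHz, needs dual link
print(wuxga < SINGLE_LINK_LIMIT_MHZ, wqxga > SINGLE_LINK_LIMIT_MHZ)
```

So 2560x1600 is well past what a single TMDS link can carry, hence the need for a powered dual-link (or native DisplayPort) solution.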

The 4870 will only drop in price to clear inventory, because it's not worth it to produce them with the intent of selling them at $120 or less. I expect them to sell out before the price drops much further.

Don't fret, though. The 5770 has a 128-bit bus and a fairly small die. It will drop in price soon enough, unless NVIDIA decides to stop bleeding money on its huge GT200 chips in $150 cards and Fermi-based mainstream cards can't get down in price.

Well, my favorite retailer (alternate.de) already has the 5750 and 5770 in stock, at 130 EUR and 160 EUR respectively.

They also have the 4870 1GB at 115 EUR, which is MUCH cheaper.

In any case, right now, with my usage pattern (on 24/7, but mostly GPU-idle, with maybe just one hour a day of GPU stress), the difference in power consumption between the 4870 and the 5770 is at least 50W, which means ((50 x 24 x 365) / 1000) x 0.15 EUR/kWh = 65.7 EUR/year.

So it pays for the price difference in about eight months, at the expense of slightly lower performance, with the advantage of less noise.
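The savings figure checks out, though with the quoted price gap the payback is a bit longer than half a year; a quick check, using the alternate.de prices from this thread:

```python
power_diff_w = 50     # 4870 vs. 5770 consumption difference, 24/7 operation
eur_per_kwh = 0.15

annual_savings = power_diff_w * 24 * 365 / 1000 * eur_per_kwh   # 65.7 EUR
payback_months = (160 - 115) / annual_savings * 12              # 5770 vs. 4870 1GB
print(round(annual_savings, 1), round(payback_months, 1))       # 65.7 8.2
```

The payback shrinks further if electricity is pricier, or if the 5770's street price drops toward the 4870's.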

Speaking of which, I like my GPUs silent, passive if possible, thankyouverymuch, so I'll wait for vendor-specific designs or aftermarket coolers; by the time these are out, maybe the 4870 will not even be available anymore.

Search for and download GPUTool. It's still in beta and has some quirks, but for a massive idle power drop it can't be beat (at least on my system, a 4870). I simply lowered the 2D core/memory clocks (there are low/medium/high settings, and ALL need to be the same setting or you get flickering) down to around 250MHz, and this dropped idle power consumption by a crazy amount (40-80W, I can't remember exactly). Once the creator of the program releases a newer version, I'm hoping some of the fan speed and voltage mod bugs get worked out. Even so, the two-second click to lower idle speeds is incredibly handy.

I don't know if you've tried using ATI Tray Tools already, but after scouring the web trying to figure out a way to keep my XFX 4870 1GB from drawing more power than needed (e.g. when just surfing or playing video), I was able to drop the GPU clock to 400MHz and the memory to 225MHz. The memory draws much more power than the GPU does, so leaving the GPU at 400 doesn't really make that much of a difference compared to 250.

Keep in mind that running said program in Vista is somewhat of a headache, since the driver is not signed by MS, so you need a workaround to get it running as a startup program so that the clock drop can be initiated automatically.