Post Your Comment

111 Comments

Maybe the site should have a "report an error" link at the end of the page for readers to quietly submit problems. I'm all for accuracy, but these "correction posts" don't really make for conversation. The first 3-4 posts are all correction notices! Hardly the dialog I was looking for. ;)

There should be a way to report 1) errors in technical info (which is important) and 2) grammatical errors (which is not so important) separately, or exclusively. I can just picture the editors trying to sort through hordes of emails detailing grammatical minutiae.

I can only recommend that you get the card that performs best in the games you play, and whilst the 560 Ti will obviously have the edge over the 6950 1GB in tessellation, it's going to use more power overall. Either is great.

The 6870 BE isn't bad either, really. Good time to be in the market... though I'm still waiting for the 6990 to rear its ugly head. ;)

It looks to me like the 560 Ti only has the edge over the 6950 1GB in tessellation at high factors. Even the 6870 bests the 560 Ti in the DirectX 11 Tessellation Sample test at the medium setting. See anandtech's 560 Ti launch article.

What I find even more interesting is that when you consider only the higher resolutions, the 6950 seems to be superior to the 560 Ti. I realize most people still use lower resolutions, but it doesn't make sense to judge between two cards at resolutions where both already produce more than playable frame rates at the settings in question. That creates a misleading conclusion in situations where the winner reverses at higher resolutions. HAWX, for instance, shows that the 560 Ti has clearly superior frame rates at lower resolutions, whereas the 6950 scales much better and edges it out at 2560x1600. Neither dips below 80 fps, so you can't really say the gameplay differs; however, it appears the 6950 is the one that has the muscle when it counts. Battlefield BC2 shows a similar reversal (see anandtech's 560 Ti launch article). Of course, there are situations where nVidia turns the tables at higher resolutions as well, they just aren't present in anandtech's launch article (unless I missed it).

I believe that Ryan replied in the comments for the 560 Ti card to a commenter who inquired about the repeatability of FPS results with the 6950 1GB while playing Crysis at high resolutions, and it may pertain to your argument. He said that the results are "highly variable." If you go by the average frame rate, and only at high res., the 6950 looks better than the 560 Ti, but... perhaps the 560 Ti produces more consistent results than the 6950?

Ryan does a really good job with articles, so I don't want to come off as bashing him. However, if that was a major concern, I really wish he would have mentioned it in the article. Taking it a step further, he could post charts with min, max, and average. Alternately, if he felt particularly generous, he could post a graph of the frame rates over the course of the benchmark for the cases where one company's cards are less consistent than the other's. Of course, that would be a lot of work to do for every benchmark and would incur unnecessary delays in getting the articles out. I would only include such charts/graphs to back myself up when I felt it changed the outcome. That said, even if these never show up, I'll still enjoy reading Ryan's articles.

On a personal note, the idea that the GTX560 Ti may be more consistent than the HD6950 makes me feel better about my decisions to purchase a GTX460 and GTX470, given the similarities in architecture. That said, I haven't noticed abnormal inconsistencies in frame rate with the HD6870 I bought as a Home Theater/Gaming card for the living room. I hope any inconsistencies in the frame rate of the HD6950 are driver related and not architectural, or we may lose some of the wonderful competition that has characterized the graphics market as of late.

I see the 6950 pricing as sort of strange. Currently you pay $10 after MIRs to move up to 2GB. A small price for a sometimes useful boost. But if you take unlocking into consideration (the ability to unlock the 2GB version, but not the 1GB), I'd say that's quite a massive difference.

Of course, I don't have a good feel for the success rate of the 6950-to-6970 unlock, or what percentage of cards it's possible with, but the pricing seems quite strange in that light.

My local hardware dealer has several GTX 560s in stock today, including 900MHz factory-overclocked models. The Gigabyte Super OC 1GB is listed and promised soon... But the 1GB AMD 6950? No sign whatever. I see references elsewhere to the fact that this card is likely to be a short-run special by AMD as a GTX 560 launch spoiler, and that certainly seems to be the case. I look forward to the Anandtech review of factory-overclocked GTX 560s at some point.

At this point the only place you're going to find them is at Newegg and other e-tailers. With the launch pulled in by this much this soon, they won't be on B&M store shelves yet. This isn't all that rare; in fact, I would say it's much more rare to find newly launched cards available in B&M stores.

Hello Ryan, after such a nice review of the GTX560 Ti, I am quite disappointed that you included the overclocked HD6870 in this test. First, after you reviewed the GTX460 and included an OCed model, you got bashed by AMD fans crying foul. So you asked readers whether it's OK to include an OCed model, and from the post count you drew the conclusion not to include OCed models. I was surprised then, because measuring such a thing by mere post count is quite inadequate, considering that unhappy people usually shout the loudest while the happy ones don't need to. So of course you'd have more posts against it, no surprise there. But then I kinda let it go. However, seeing now that you did include an OCed model again, but this time something that is not so common, unlike the OCed GTX460, I was very upset. Why didn't you review the OCed Gigabyte or Asus GTX560 cards? And considering reviews from other sites and the great results the OCed cards have, will you prepare a new review article dedicated to the OCed GTX560 to fix this bias?

Here is where I remember how I thought saying things like "I am not going to visit this site anymore" was quite silly after the OCed GTX460 case. But seeing how you've done a 180 for reasons unknown to me, I must say that very same thing.

I really don't get the persistent whining from some of you over this topic. You're so "hurt" over Ryan spending *his* time on benchmarking a card that was *originally billed as the 560 Ti's competitor*... it's completely inane.

If you don't think it should be considered, then simply ignore the card in the charts, and you'll get what you consider to be a pure "OC free" comparison.

As for my stance on it, if something is purchased off the shelf **with the configuration that was tested**, then it's fine to put it on there in a normal (i.e. not overclocking specific) article and/or section.

The market speaks louder than needlessly outraged readers. Like it or not, overclocked cards will continue to be produced. To be responsible journalists, reviewers have to include them in order to evaluate their value to the consumer.

He also made clear that AMD was bumping up the launch on short notice. I think you are making much ado about nothing and will see plenty of factory-OC'd cards in the near future.

It's not about disliking benchmarks of overclocked cards. As I stated, I didn't agree with the whining about the GTX460 OC either. I think it's legitimate to include OCed models. But if you do it, then do it for both sides, especially after such drama and a strict decision by the writer not to do it. That is the point.

In the original 68xx review, the site got flak for including a highly overclocked GTX 460 at NVIDIA's request.

This time, they review the GTX 560 Ti against stock clocked rivals. In a separate article they present ATI's competitive reaction to the GTX 560 launch. I think Anandtech and Ryan handled this correctly. They analyze and present the GTX 560 as a reflection of what NVIDIA has done, and produce a separate article where they focus on the GPU ecosystem as a whole.

In this way I think it looks a lot less like they kowtowed to a vendor's requests, and in fact show how targeted and thought out AMD/ATI's launch is. In a market this closely matched for performance and price, and with vendors offering customized versions of AMD/NVIDIA products, it's hugely complicated.

Well done Anandtech for today's articles, they definitely made my lunch hour more enjoyable.

"They analyze and present the GTX 560 as a reflection of what NVIDIA has done, and produce a separate article where they focus on the GPU ecosystem as a whole."

Well if they did that, why didn't they include the OCed GTX560 Ti as well? Consider the fact that there are likely going to be a lot of overclocked GTX560s, as with the GTX460. That isn't part of the GPU ecosystem?

The card just launched, it's very possible they don't have one, or didn't have the time to put that through the test suite with all the other things coming off NDA today. As a news source it's more sound for them to be able to have timely coverage, even if they have to revisit something they didn't have time for in the original article.

It sounds like most tech blogs were up very late compiling, testing, and writing for these launch articles. Most people are content with waiting a week for the entire picture to become clear, and if not, well, that is the price of early adoption.

You may be right that they didn't have any OCed GTX560s. However, given how many other review sites did receive them, I kinda doubt that a site with as big a name as AnandTech wouldn't get any.

Just tell him to quit his whining... jk. But for the love of god, it's not a big deal. I'm just glad we get the objective tests that we do, as opposed to taking a shot in the dark when buying cards.

Completely different scenario. This is a review of 2 AMD cards. This is not the review of the GTX 560 with the inclusion of a highly overclocked card that was put in at nVidia's request/insistence, as was the case with the GTX460 FTW. Add to that there was also input from nVidia on which of their cards NOT to include for comparison in the 6870 review, and even which benchmarks they wanted AMD cards tested with (HAWX 2). Again, not even close to the same scenario.

There is no bias at Anandtech, only well documented arguments and conclusions that you're free to disagree with. If you want to abandon one of the best tech review sites on the planet in favor of one that panders to your personal delusions about the fuzziness of a multinational corporation, knock yourself out.

Find me a factory-overclocked GTX 560 that is currently available in the market. Then we can have that discussion. Anandtech is testing what is currently available - something I'm not sure you understand. All of these reviews are snapshots of a moment in time.

My searching shows one listed as "OC" on Newegg, but with no details about the core clock. That isn't a sign that the site is biased; it's called reality.

They're working on a review for the overclocked card. I don't think they've ever released benchmarks for factory overclocks the same day that the card comes out, at least not in recent history, so it's not unexpected for them not to have included overclocked GTX 560 data yet. Wait for the 560 overclocked article in a few days.

The 850MHz GTX460 was rarely ever in stock at online retailers during its lifespan; at times, even the 810MHz GTX460 was hard to find. The overclocked GTX460s with even lower clocks were generally available. With these previous supply constraints in mind, why should any review site review another highly overclocked card like the 900MHz GTX560 Ti, when its predecessor was a low-volume card created specifically to deceptively improve the perceived benchmarks and perceived value of the rest of the cards in its series?

I have no issue with anandtech or other websites reviewing or including in reviews a 900MHz GTX560 if the variant is still readily available in four to six weeks. It would mean the card existed in reasonable quantities and was not just another Geforce 8800 Ultra card that showed up for the reviews and then was never actually in retail.

On the other hand, the 6870 Black is a modest overclock of the 6870 from AMD, which has had no supply problems with the 6870 cards.

My personal experience is that EVGA's ~850MHz 460s have never been impossible to find, although it has occasionally been difficult to find ones that weren't the less desirable external-exhaust model. I've never had any difficulty finding ~810MHz cards.

I didn't do much shopping around Christmas/New Years, though, so my experiences might not be representative of the average.

GTX 560 cards are going to be a major player? Really? You know this how? Because your talking points memo from Nvidia marketing told you so?

If you honestly care about 560 OC results, here's what you can say - "Hey Ryan, will you be getting a chance to test any overclocked 560's soon? How do you think they will perform?"

Instead, here's what you went with - "OMG!! They didn't include every freaking card on the planet! BIAS! Sweet baby Jesus, I weep for the Anandtech that was!"

The only thing really missing from this article was the inclusion of an overclocked 460, which from previous benchmarks should be very competitive for $50 less. Unfortunately the ridiculous shitstorm from the last time it was included means we can't have nice things anymore.

The Gigabyte and Asus OCed cards were available even before stock-clocked cards. How is that in any way "temporary" or "uninteresting"? You are trying really hard to downplay it, but for an apples-to-apples comparison, the overclocked competition should be there as well.

Quotation marks - they do not mean what you think they mean. Nowhere on this page has anyone used either the words "temporary" or "uninteresting", nor any synonyms that I can see.

No one is downplaying anything other than your ridiculous claim that AT and Ryan are biased because in two weeks they didn't manage to benchmark and write up every single card in the universe that might be relevant to your interests.

No. They already got roasted for doing it last time. Also, the 6870 Black Edition is an official AMD product that hasn't been shoved down Ryan's throat. So, whereas before all things may not have been exactly equal, they are now.

The presence of the 6950 1GB in the 560 Ti review is quite natural as the 6950 2GB was already there, and besides which, until you overload that memory, the 6950 1GB performs pretty much the same as its 2GB brother, albeit a tiny bit faster in places. It's not cheating to include it, as it's not an overclocked card. There's no other way you could handle it except to put the two AMD cards in separate articles from each other and not mention the 6950 1GB in the 560 Ti review (hardly sensible, since we already knew it'd be almost identical to the 2GB variant), or not review the 6870 Black Edition at all. Also, think of the time it must've taken Ryan to handle these reviews; it certainly doesn't take just a day or so.

With overclocked cards, the situation is that the standard product is reviewed and, usually, the 3rd party offerings are reviewed together in a separate article in short order. I fully expect this to happen as it's normal for a site like Anandtech to do so.

If your beef is with the 6870 Black Edition, please remember that, as stated in the review, AMD fully intended it to be the 560 Ti's true competition, and that the 6950 1GB was due out in February. When it became apparent that the 6870 wasn't the answer, they released the 6950 1GB early. There's no sense in scrapping all those 6870 Black Editions, of which there must be thousands, so AMD have not only brought out two cards at the same time, but offered two viable alternatives to nVidia's one. The only thing AMD will suffer is a lack of availability of those 6950s for the time being, which is only natural for an accelerated launch, plus nVidia will undoubtedly lose some sales, so well done on that.

QUOTE: "There is no bias at Anandtech, only well documented arguments and conclusions that you're free to disagree with. If you want to abandon one of the best tech review sites on the planet in favor of one that panders to your personal delusions about the fuzziness of a multinational corporation, knock yourself out."

Dude, this isn't the GTX560 Ti launch article. This is a picture of the market as you or I can go out and buy cards.

I agree that the whole OC'd GTX460 "issue" was total bovine excrement from fanboys complaining that their poor nVidia was being compared to existing, non-reference cards that were widely available at the time of the 460's launch.

That being said, the launch article for the GTX560 Ti is one article down and contains nothing but reference cards in an effort to keep the whiners quiet.

Dudes, whatever. (You guys started it.) The 460 article wasn't even about the 460. It was brought into the fray during an AMD release article. The only bovine excrement came from the drool of Nvidia fanboys who had the ridiculous notion that a cherry-picked overclocked card delivered by Nvidia was allowed into a reference-card release article for AMD. Which clearly raised red flags for those readers with common sense.

And not only that, but the writer couldn't even finish the friggin article the way he wanted to because he was spending his time doodling around with the Nvidia card. That was complete BS.

We tried to give some pointers on how it should have been handled:
1) Reference vs. reference in product release articles.
2) Follow-up articles with overclocked cards vs. overclocked cards.

Exactly. Nobody said that Anandtech shouldn't review OC'ed cards. The point was that OC'ed cards hand-selected by AMD or Nvidia shouldn't be included in the launch article for their competitor's new architecture. Had this card been included in the GTX 560 article, there would have been the same uproar as before.

I haven't read this article yet (just finished the GTX 560 Ti one) but wanted to say thank you for putting this article up. As many of us had asked, you properly kept the launch article about the card being launched and comparisons to stock cards, while in this article you compare other offerings, including OC'd cards.

Given that it is a lot easier to find a 1920x1080 monitor now than it is to find a 1920x1200 monitor, would that resolution make more sense to list in these kinds of comparisons? I realise it wouldn't make much of a difference, but it is kind of strange not to see what, at least in my area, is the most common native resolution.

Wouldn't mind seeing a 27" resolution included at the high end (2560x1440), as up there pushes the cards much harder and could make all the difference between playable and unplayable. I realize this is more work though :)

As 16:9 monitors have 90% of the resolution of 16:10 monitors, the performance is very similar. We may very well have to switch from 19x12 to 19x10 because 19x12 monitors are becoming so rare, but there's not a lot of benefit in running both resolutions.

The same goes for 25x14 vs. 25x16. Though in that case, 25x16 monitors aren't going anywhere.
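For what it's worth, the 90% figure quoted above is just pixel arithmetic, and it holds for both panel sizes mentioned; a quick sketch:

```python
# Pixel-count ratio of a 16:9 panel vs. the 16:10 panel of the same width.
def pixel_ratio(width: int, height_16_10: int) -> float:
    height_16_9 = width * 9 // 16          # 1920 -> 1080, 2560 -> 1440
    return (width * height_16_9) / (width * height_16_10)

print(pixel_ratio(1920, 1200))  # 19x10 vs. 19x12 -> 0.9
print(pixel_ratio(2560, 1600))  # 25x14 vs. 25x16 -> 0.9
```

Both ratios come out to exactly 0.9, which is why the site expects near-identical performance between the two aspect ratios.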

The 6870 has 56 texture units and the 6950 has 88, or 57% more. Yet if you add up all the scores of each, you find that the 6950 is only 8% faster on average. This implies a wasted 45% increase in SPs and/or texture units (which one?), as well as about 800 million wasted transistors. Clearly AMD needed to add more ROPs to the 6950. Also, since the memory clock is faster on the 6950, this implies even more wasted transistors. If both cards had exactly the same memory bandwidth, they might very well be only 4% apart in performance! AMD's GPU clearly responds much more favorably to an increase in memory bandwidth than it does to increased texture units. It really looks like they're going off the rails and into the weeds. What they need is to increase memory bandwidth to 216GB/s, and increase their ROP-to-SIMD ratio to around 2:1.

Yes, I know about VLIW4... but where is the performance? Improvements should be visible by now, like what Nvidia did with Civ 5. I'm not seeing anything like that from AMD, and we should have been seeing it by now, in spades.
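The arithmetic behind the comment above is easy to check (using the commenter's own figures, not anything measured here):

```python
# Texture-unit counts and the average performance gap quoted in the comment.
tex_6870, tex_6950 = 56, 88
extra_units = (tex_6950 - tex_6870) / tex_6870
print(f"{extra_units:.0%} more texture units")  # 57% more

# ...versus the ~8% average performance advantage the commenter computed,
# which is the gap the comment attributes to a ROP/bandwidth bottleneck.
avg_perf_gain = 0.08
print(f"but only ~{avg_perf_gain:.0%} faster on average")
```

Note the "57% more" figure ignores the clock-speed difference that a later reply points out, so it overstates the theoretical texturing advantage somewhat.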

I like how you've completely missed the fact that the 6870 is clocked 100MHz higher on the core, and the 6870 Black is 140MHz higher. You list all these other factors, and memory speeds, but don't even mention or realise that the 6870/Black have considerably higher core clocks than the 6950.

It is probably clocked higher because it has almost a billion fewer transistors. Which raises the question: what the hell are all those extra transistors there for if they don't improve performance?

This is my first post. I've been reading Anand for at least a year, and this concerned me enough to actually create a user and post.

"For NVIDIA cards all tests were done with default driver settings unless otherwise noted. As for AMD cards, we are disabling their new AMD Optimized tessellation setting in favor of using application settings (note that this doesn’t actually have a performance impact at this time), everything else is default unless otherwise noted."

While I read your concerns about where to draw the line on driver optimisation, Ryan, I disagree with your choice to disable select features in one set of drivers and not the other. How many PC users play around with these settings, apart from the enthusiasts among us striving for extra performance or quality?

Surely it would be far fairer to leave drivers at default settings when benchmarking hardware and/or new sets of drivers? Driver profiles have essentially been tweaking performance for a while now, from both AMD and Nvidia, so where do you draw the line on altering the testing methodology to suit "tweaked drivers"?

I'll admit, regardless of whether disabling a feature makes a difference to the results or not, it actually made me stop reading the rest of the review, as from my own standpoint the results have been skewed. No two sets of drivers from AMD or Nvidia will ever be equal (I hope); however, deliberately disabling features meant for the benefit of end users just seems completely the wrong direction to take.

As you are concerned about where AMD is taking their driver features in this instance, equally I find myself concerned about where you are taking your testing methodology.

I hope you can understand my concerns on this and leave drivers as intended in the future to allow a more neutral review.

Here's the point: there is no measurable difference with it on or off from a framerate perspective, so in this case it doesn't matter. That should tell you that the only possible difference in this instance would be a possible WORSENING of picture quality, since the GPU wars are #1 about framerate and #2 about everything else. I'm sure a later article will delve into what this setting is for, but right now it clearly has no benefit in the test suite that was chosen.

I agree with you though that I would have liked a slightly more detailed description of what it is supposed to do...

For instance, is there any power consumption (and thus noise) difference with it on vs. off?

For the time being it's necessary that we use Use Application Settings so that newer results are consistent with our existing body of work. As this feature did not exist prior to the 11.1a drivers, using it would impact our results by changing the test parameters: previously it wasn't possible to cap tessellation factors like this, so we didn't run our tests with such a limitation.

As we rebuild our benchmark suite every 6 months, everything is up for reevaluation at that time. We may or may not continue to disable this feature, but for the time being it's necessary for consistent testing.

Thanks for the reply Ryan, that's a very valid point on keeping the testing parameters consistent with current benchmark results.

Would it be possible to actually leave the drivers at default settings for both Nvidia and AMD in the next benchmark suite? I know there will be some inconsistent variations between the two sets of drivers, but it would allow for a more accurate picture on both the hardware and driver level (as intended by Nvidia/AMD when setting defaults).

I use both Nvidia and AMD cards, and do find differences in picture quality/performance from both sides of the fence. However, I also tend to leave drivers at default settings to give both Nvidia and AMD the benefit of knowing what works best with their hardware at the driver level; I think it would allow for a more "real world" set of benchmark results.

@B3an, perhaps you should have used the phrase "lacking in cognitive function", it's much more polite. You'll have to forgive the oversight of not thinking about the current set of benchmarks overall, as Ryan has politely pointed out.

Ryan is completely right in disabling this feature, even though it has no effect on the results (yet) in the current drivers. And it should always be disabled in the future.

The WHOLE point of articles like this is to get the results as fair as possible. If you're testing a game and it looks different and uses different settings on one card than on another, how is that remotely fair? What is wrong with you?? Bizarre logic. It would be the exact same thing as if AMD were to disable AA by default in all games even when the game settings were set to use AA, so the nVidia card used AA in the game tests while the AMD card did not. The results would be absolutely useless; no one would know which card is actually faster.

Exactly. We should compare apples to apples. And let's not forget about the FP16 demotion "optimization" in the AMD drivers that reduces the render target format from R16G16B16A16 to R11G11B10, effectively cutting the per-pixel footprint from 64 bits to 32 bits at the expense of quality. All this when Catalyst AI is turned on. AMD claims it doesn't have any effect on quality, but multiple sources have already confirmed that it is easily visible without much effort in some titles, while in others it isn't. However, it affects performance by up to 17%. Just google "fp16 demotion" and you will find plenty of articles about it.
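To put that format change in concrete terms, here is a rough sketch of the per-frame render-target footprint it implies. The 1920x1200 resolution and the single full-screen HDR target are assumptions for illustration only, not measured data:

```python
# Bytes per frame for one full-screen HDR render target at 1920x1200.
width, height = 1920, 1200
bits_fp16    = 16 * 4        # R16G16B16A16: four 16-bit channels = 64 bits
bits_demoted = 11 + 11 + 10  # R11G11B10: packed into 32 bits, no alpha

def target_mib(bits_per_pixel: int) -> float:
    return width * height * bits_per_pixel / 8 / 2**20

print(f"FP16 target:    {target_mib(bits_fp16):.1f} MiB")     # ~17.6 MiB
print(f"Demoted target: {target_mib(bits_demoted):.1f} MiB")  # ~8.8 MiB
```

Halving the bytes read and written per render-target access is where the claimed performance gain would come from, which is also why the quality trade-off matters.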

Proposal for another review: compare ALL current factory-stock graphics card models at their highest "reasonable" overclock against each other. What value does the customer get when taking OC into (buying) consideration?

Apparently the model number is very important to you. What if every card above 1MHz was called OC? Then you wouldn't want to consider them. But the 6970@880MHz and 6950@800MHz are fine! Maybe you should focus on price, performance, and power, instead of the model name or color of the plastic.

I'm going to start my own comments complaint campaign: don't review cards that contain any blue in the plastic! Apples to apples, people.

If you look at the numbers, the 6870BE is more of a competitor than the article text would make you believe - in the games where the nvidia cards do not completely trounce the competition.

Look at the 1920x1200 charts of the following games and tell me the 6870BE is outclassed:
* Crysis Warhead
* Metro
* Battlefield (except Waterfall? what is the point of that benchmark, btw)
* STALKER
* Mass Effect 2
* Wolfenstein

If you now look at the remaining games where the NVIDIA card owns:
* HAWX (rather inconsequential at these framerates)
* Civ 5
* BattleForge
* DiRT 2
You'll notice in those games that the 6950 is just as outclassed. So you're better off with an nvidia card either way.

It all depends on the games that you pick, but a blanket statement that the 6870BE does not compete is not correct either.

Why does every single ATI card get the EXACT same FPS in Civilization 5? Did the company that made it get paid off by nVidia to put a frame cap on ATI cards or what? It makes zero sense that 2 year old ATI cards would get the same FPS as just released ATI cards.

Under normal circumstances it's CPU limited; apparently at the driver level. Just recently NVIDIA managed to resolve that issue in their drivers, which is why their Civ 5 scores have shot up while AMD's have not.

Please forgive me, Ryan, as I know this sounds abrasive and a little too off-topic from your response here. But speaking of 'score', what's with the absolutely mind-boggling delay in including the 4890? Quite frankly, if even one performance test over the relevant life of the 285 had not included the 285, nvidia would burn this site to the ground right after your nvidia-supporting readers did some leveling of their own. So honestly, for every 2xx series card that finds its way into a benchmark, where in the world is ATI's top pick of that generation? Site regs have posted about anandtech's nvidia-leaning ways, say, a few times, and this particularly clear evidence rather deserves an explanation, in my opinion. Or maybe my spotty attendance contributed to missing at least one fascinating story.

I agree with 7Enigma: the difference between the 4870 and 4890 is no longer significant enough to warrant inclusion in the comparison. I seem to recall that the performance of the 4890 was between the 4870 (shown) and the nvidia 285 (also shown). Couple that with the relative trouncing (a 30%+ increase in performance) that the newer cards deliver to the GTX285, plus the fact that the frame rates of the GTX285 aren't that high (+30% of 20 fps is 26 fps, which is still "too slow to make it relevant"), and I'm not sure it's relevant anymore.

Talk about not being consistent. :\ Here we have a review that includes an OC'd 6870, yet there was the huge row before about the 460 FTW. Make your minds up guys, either include OC'd cards or don't. Personally I would definitely like to see the FTW included, since I'd love to know how it compares to the new 560 Ti, given the FTW often beats a stock 470. Please add FTW results if you can.

Re those who've commented on certain tests reaching a plateau in some cases: may I ask, why are you running the i7 at such a low 3.33GHz speed? I keep seeing this more and more these days, review articles on all sorts of sites using i7 CPU speeds well below 4, whereas just about everyone posting in forums on a wide variety of sites is using speeds of at least 4. So what gives? Please don't use less than 4, otherwise it's far too easy for some tests to become CPU-bound. You're reviewing gfx cards after all, so surely one would want any CPU bottleneck to be as low as possible?

Any 920 should be able to reach 4+ with ease. And besides, who on earth would buy a costly Rampage II Extreme and then only run the CPU at 3.33? Heck, my mbd cost 70% less than a R/II/Extreme, yet it would easily outperform your review setup for these tests (I use an i7 870 @ 4270). For a large collection of benchmark results, Google "SGI Ian", click the first result, follow the "SGI Tech Advice" link and then select "PC Benchmarks, Advice and Information" (pity one can't post URLs here now, but I understand the rationale re spam).

Lastly, it's sad to admit but I agree with the poster who commented on the use of the 1920x1200 res. The 1080 height is horrible for general use, but the industry has blatantly moved away from producing displays with 1200 height. I wanted to buy a 1920x1200 display but it was damn hard to find companies selling any models at this res at all, never mind models which were actually worth buying (I bought an HP LP2475W H-IPS 24" in the end). So I'm curious, what model of display are you using for the tests? (The hw summary doesn't say; indeed your reviews never say what display is being used. Please do!) Kinda seems like you're still able to find 1200-height screens, so if you've any info on recommended models I'm sure readers would be interested to know.

Our 920 is a C0/C1 model, not D0. D0s can indeed hit near 4GHz quite regularly, but for our 920 that is not the case. As for our motherboard, it was chosen first for benchmark purposes - the R2E has some features like OC Profiles that are extremely useful for this line of work.

Indeed CPU bottlenecking is a concern, and we always try to remove it as much as possible. Replacing the CPU means throwing out our entire body of work, so as important as it is to avoid being CPU bottlenecked, we can't do it frequently.

The issue for us right now is that SNB-E isn't due until late this year, and that's the obvious upgrade path for our GPU testbed since SNB has a limited amount of PCIe bandwidth.

RYAN: Hi Ryan, while I usually find AnandTech articles quite entertaining and informative, I always wonder why the f*ck professional editors won't get it into their heads to test 2GB cards in the areas where they belong. Meaning: a 2GB vs. 1GB card test should be about graphically intensive games and game mods, like the Half-Life 2 Fake Factory mod or the STALKER Complete mod (Oblivion has such mods too). There are a number of other mods that put massive numbers of huge textures into graphics RAM, and I think those are the ones you need to test the cards with. After all, you can't expect games that were written with 1GB of VRAM in mind to utilize the full power of double the VRAM.

So please, please run some tests with the above-mentioned mods. Thanks in advance.

"SmallLuxGPU is the other test in our suite where NVIDIA's drivers significantly revised our numbers. Where this test previously favored raw theoretical performance, giving the vector-based Radeons an advantage, NVIDIA has now shot well ahead. Given the rough state of both AMD and NVIDIA's OpenCL drivers, we're attributing this to bug fixes or possibly enhancements in NVIDIA's OpenCL driver, with the former seeming particularly likely. However NVIDIA is not alone when it comes to driver fixes, and AMD has seen a similar uptick with the newly released 6900 series. It's not nearly the leap NVIDIA saw, but it's good for around 25%-30% more rays/second under SLG. This appears to be attributable to further refinement of AMD's VLIW4 shader compiler, which as we have previously mentioned stands to gain a good deal of performance as AMD works on optimizing it."

None of the sites I frequent have said anything about a reduction in texture filtering quality with the new Catalyst versions. Could someone post some links to articles about the issue? And why didn't AnandTech mention it?

Apparently there are 2 Black Edition cards. The one we looked at is the newer of them (687A-ZDBC), whereas the old one used the reference cooler. I'm not sure the newer Black Edition has as widespread availability as the older one, but it's been available at Newegg for as long as I've had the card in my hands.

That would be very interesting to note in the article - it could help prevent some annoying mis-purchases: Newegg doesn't list the newer one (-ZDBC) as a "Black Edition", and searching for "black edition" will only find the reference-cooled card, while this article doesn't mention the full model number.

I would almost have bought the loud reference edition one. Thank god I re-read the comments and re-did the search on Newegg one final time.

So, when am I gonna start seeing 1080p in these charts? That's really all I care about. I was hoping 2011 would be the year of 16:9 only; to my great dismay this is wrong. Please update soon - 16:9 has been the standard for like two years at this point, longer depending on how you look at it.

The problem with this is that it's not guaranteed. While you can always flash back if problems arise, making buying decisions strictly based on what the card might be able to do (granted, there aren't a lot of cards in the general review cycle that haven't shown they can be unlocked) sounds an awful lot like "two in the bush".

Also, can you post an edit in the article itself with a link to the actual XFX card, or a picture or something? The link you have currently seems to go to one with a stock cooler, and I don't want to just guess which one it is that you tested. Thank you!

Woopsies! I didn't see the *multiple* pictures you posted of the card. But it would still be awesome to get a corrected link to the exact card; I'm still not 100% clear which one you tested. I apologize if I (again) missed this bit of information.

I haven't seen this question asked yet, so I'm hoping one of you may have the answer. I have noticed that the XFX 6950 is now available with the same dual-fan, 3-pipe vapor-chamber heatsink setup as is found on their 6870 Black Edition. They literally look identical. I'm wondering if anyone can confirm whether they also share the same acoustic properties. In other words, the XFX 6870 BE (as reported by AnandTech) has an idle and load rating of 41.4dB; I wonder if the 6950's acoustics are identical (or close). Because if the 6950 is as quiet as the 6870 BE, I think I'll go with the 6950. Thanks.