What? Really? What? Given that the 6870 is at best the replacement for the 5850, please do compare the 5850's performance in Unigine Heaven to the 6870's (link below). Unigine is a good guide to tessellation power - which is why the 580 kicks ass on it. But the 6870 clearly does tessellation far better than a 5850 (about 30% better at 1920).

Sometimes your posts are a wee bitty PRO Nvidia and you talk down AMD far too much and without logic (this tessellation talk being an example). And I'm not an AMD fanboi - I'm trying very hard not to buy an Nvidia card (waiting for the competition so I can compare), but I really do like the new 580.

So it turns out that the reference GTX580 comes with a higher voltage, although it probably doesn't need it. It's probably a measure to improve yields, but just like with the 480, newer cards will probably get better as the chip matures in future batches, and partners will start cherry-picking them and offering GTX580 equivalents of the GTX480 cards you are mentioning.

Great info there man, cheers for posting it.

I for one think it's a feat all unto itself how well Nvidia refined essentially the same GPU.

They've gotten 15-20% more performance through better yields, reworked circuitry in many different areas of the GPU, and of course more aggressive clock speeds.

Not only that, but while soundly whooping a GTX480 in every game or test, it consumes less power and makes less noise.

It would be commendable IMO just to keep the same performance while reducing power usage and noise; remember they did that and added 15%+ more performance on top. Not freakin' bad at all for a refresh.

T'aint next gen, but it restores some of my faith in Nvidia. Jen-Hsun must have been pissed when GF100 went tits up, but this goes to show he's committed to solutions.

Just look at AMD. They misfired pretty badly with the HD 2900 three years ago. That chip was supposed to take out the 8800 GTX, but ended up being hot and embarrassingly slow in the benchmarks at stock clocks. Then look at how AMD came back with the 4000 series. Sounds similar to Nvidia's misfire this year, doesn't it?

As I mentioned though, I don't know how much of these improvements comes from the GPU rework itself and how much comes from the improved cooler design, since W1z showed that a GTX480 is capable of similar power numbers when the GPU runs cooler.

I guess it wouldn't be that hard to force the fan to run slower, heat the GPU up to the mid-90s and see how power fares at that temperature.

However, from all of the good reviews I've read, Nvidia really did tackle the GF100 GPU to the ground, pull its pants down and tear it a new one. Major work has been done to the GPU if the reviewers are to be believed. It really is a pity it took so long, but this is what the GTX480 should have been.

And still, for it to get so close to a 5970 is a feat all unto itself; lightly overclocked cards will likely match or exceed it. No matter which way you look at it, it's a buttload of power for one GPU to have.

I'd love to see 5970 CF vs GTX580 SLI, quad GPU scaling ftl.

EDIT: some info on what changed between GF100 and GF110:

Little did we know at the time, but back in February of this year, before the first GF100 chips even shipped in commercial products, the decision had been made in the halls of Nvidia to produce a new spin of the silicon known as GF110. The goal: to reduce power consumption while improving performance. To get there, Nvidia engineers scoured each block of the chip, employing lower-leakage transistors in less timing-sensitive logic and higher-speed transistors in critical paths, better adapting the design to TSMC's 40-nm fabrication process.

At the same time, they made a few targeted tweaks to the chip's 3D graphics hardware to further boost performance. The first enhancement was also included in the GF104, a fact we didn't initially catch. The texturing units can filter 16-bit floating-point textures at full speed, whereas most of today's GPUs filter this larger format at half their peak speed. The additional filtering oomph should improve frame rates in games where FP16 texture formats are used, most prominently with high-dynamic-range (HDR) lighting algorithms. HDR lighting is fairly widely used these days, so the change is consequential. The caveat is that the GPU must have the bandwidth needed to take advantage of the additional filtering capacity. Of course, the GF110 has gobs of bandwidth compared to most.
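
To put rough numbers on that bandwidth caveat, here's a back-of-the-envelope sketch (plain host-side C, compiles under nvcc too). The specs used (64 texture units, 772 MHz core, 192.4 GB/s) are the published GTX580 figures, but the 4-texels-per-bilinear-fetch model and the one-filtered-result-per-TMU-per-clock assumption are simplifications, so treat the output as illustrative only:

```c
/* Back-of-the-envelope: raw texel traffic needed for full-rate FP16
   filtering on a GTX580 vs. what the memory bus actually delivers.
   The gap is exactly why the texture caches have to cover the reuse. */
#include <stdio.h>

int main(void)
{
    const double tmus       = 64;      /* texture units on GF110     */
    const double core_hz    = 772e6;   /* GTX580 core clock          */
    const double fp16_texel = 8.0;     /* RGBA16F = 4 ch x 2 bytes   */
    const double mem_bw     = 192.4e9; /* GTX580 memory bandwidth    */

    /* Each bilinear fetch reads 4 texels; assume one filtered result
       per TMU per clock at full speed. */
    double fetch_bw = tmus * core_hz * 4.0 * fp16_texel;

    printf("raw fetch demand : %7.1f GB/s\n", fetch_bw / 1e9);
    printf("memory bandwidth : %7.1f GB/s\n", mem_bw / 1e9);
    printf("cache must cover : %4.1fx the bus\n", fetch_bw / mem_bw);
    return 0;
}
```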

The second enhancement is unique to GF110: an improvement in Z-culling efficiency. Z culling is the process of ruling out pixels based on their depth; if a pixel won't be visible in the final, rendered scene because another pixel is in front of it, the GPU can safely neglect lighting and shading the occluded pixel. More efficient Z culling can boost performance generally, although the Z-cull capabilities of current GPUs are robust enough that the impact of this tweak is likely to be modest.
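
For anyone who hasn't bumped into Z culling before, here's a minimal software sketch of the idea. Real hardware rejects earlier in the pipeline and on whole tiles at once, and `shade_fragment` is just a made-up stand-in for the expensive work that gets skipped:

```c
/* Minimal sketch of what Z culling buys. Depth convention: smaller z is
   closer, and zbuf[] starts out filled with a far value like 1.0f. */
typedef struct { float r, g, b; } Color;

/* Hypothetical stand-in for the expensive lighting/shading math. */
static Color shade_fragment(int x, int y)
{
    Color c = { x * 0.001f, y * 0.001f, 1.0f };
    return c;
}

static void rasterize_fragment(float *zbuf, Color *cbuf, int width,
                               int x, int y, float frag_z)
{
    int i = y * width + x;
    if (frag_z >= zbuf[i])  /* occluded by what's already there:   */
        return;             /* Z-cull skips the shading entirely   */
    zbuf[i] = frag_z;       /* visible (so far): update depth...   */
    cbuf[i] = shade_fragment(x, y);  /* ...and pay for shading     */
}
```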

The third change is pretty subtle. In the Fermi architecture, the shader multiprocessors (SMs) have 64KB of local data storage that can be partitioned either as 16KB of L1 cache and 48KB of shared memory or vice-versa. When the GF100 is in a graphics context, the SM storage is partitioned in a 16KB L1 cache/48KB shared memory configuration. The 48KB/16KB config is only available for GPU computing contexts. The GF110 is capable of running with a 48KB L1 cache/16KB shared memory split for graphics, which Nvidia says "helps certain types of shaders."
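
That 16KB/48KB split is the same switch CUDA already exposes per kernel on Fermi; the GF110 change is about letting graphics shaders use the big-L1 configuration too. A minimal sketch of the compute-side knob (the kernel here is a made-up example):

```cuda
// Compute-side view of the L1/shared split the article describes:
// on Fermi, CUDA lets you request the 48KB-L1 configuration per kernel.
#include <cuda_runtime.h>

__global__ void my_kernel(float *data)   // hypothetical example kernel
{
    data[threadIdx.x] *= 2.0f;
}

void configure(void)
{
    // Prefer 48KB L1 / 16KB shared memory for this kernel...
    cudaFuncSetCacheConfig(my_kernel, cudaFuncCachePreferL1);
    // ...or flip it back if the kernel leans on shared memory:
    // cudaFuncSetCacheConfig(my_kernel, cudaFuncCachePreferShared);
}
```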

... I am not red nor green; I just speak my mind; and what I think is that AMD has been surprised by the unexpected launch and performance of the GTX 580, as everyone else has ... we were only expecting a dual-GF104 chip or a fully-enabled GF104 GPU from nVidia, and no one was expecting a "proper" GF100 (i.e. GF110) chip GPU, but nVidia managed to keep a good secret long enough to surprise the competition. Now the ball is in AMD's court; AMD has had a year now since they launched the 5870 & 5970 and has no excuse in my opinion not to build single-chip GPUs that can outperform the GTX 580, but if they can't, that'll definitely mean that 2011 is going to be nVidia's year, especially since the GTX 560 & GTX 570 should be on the way ...
http://images.bit-tech.net/blog/201...the-geforce-500-series/gtx-500-prediction.jpg

Regarding the 2900 XT, I nearly bought one the other day just to experience it, but why did it do so badly? Fermi-type troubles? Because its specs are damn good (for the time)! Did decent cooling yield good results?

Look at the entire Anand review. Temps are exactly the same for both GTX580 models, but power consumption on the Asus one is much lower.

Here are some results from the GTX480 Lightning:

As you can see, the temperatures are lower on this one thanks to a much better cooler, and because of that (and the PCB, etc.) the card consumes the same as the reference GTX480 despite running at 750 MHz with a stock voltage of 1.06 V. So yeah, temps do help power consumption, but look at this small comparison between the W1zzard-reviewed GTX480 Lightning/reference cards and the reference GTX580:

Some sites show awfully low power draws, others marginally lower (~20 W), others the same, and Guru3D had a shit engineering-sample card with much higher draw.

I think focusing on power consumption, though, is a moot point. What I see as valid is its other performance figures. This review (below) shows the problems faced by CrossFire setups. This is why I will be going single-card come December (Red or Green, unsure).

The 6870 CrossFire and the 5970 beat the GTX 580 in BFBC2 and AvP, but get poorer MINIMUM fps in Crysis; COD Black Ops (a crock of shit, and NV-optimised for the 580 - I have friends that can't play it with a GTX 260 - console port ahoy!) and F1 2010 run poorly on dual GPU, and Fallout New Vegas is slower on dual GPU...

Basically, though the CrossFire setups are technically quicker, unless the game is optimised they run worse.

I'm just a little concerned the GTX 580 is so much better simply because they put on a superior cooler of the kind that's been around on AIC/AIB cards for ages. I'm waiting for other cards and reviews before I decide.

I think I'll wait for the HD 6970. The 580 is fast and much better than the 480, but IMO it's only good for spurring competition and dropping graphics card prices. I feel the 580 should have been much, much faster for the high price they ask.

Those 500 series estimates look well possible from their current lineup, at least specs- and speed-wise.

GTX570 will be GF110 with near-GTX480 performance, and should have considerably lower power consumption too.
GTX560 is the fully enabled and clocked-up GF104 we've been waiting for.
GTS550 is also a fully enabled GF106, and clocked up, giving it considerably more performance IMO, given that at the moment it lacks 50% of its ROPs and memory bandwidth.

But once again it depends on which cards those reviews are comparing. Hexus and Bjorn3D are comparing a reference GTX580 against Asus and Galaxy GTX480s, which are much more refined than reference designs. Compare the Asus GTX480 to the Asus GTX480 in Anandtech and you are going to be closer to the truth. My guess is that the reference card is specced so that every candidate chip can attain the required specs, even the really "bad" ones; it's playing a lot on the safe side, because every card that can be released at launch counts, due to arguably high demand and low supply. In the meantime, important partners like Asus (just as an example) get better candidates, and they also do some binning themselves; by getting rid of the worst ones* they can improve a lot over the reference design, using far lower voltages to do the same thing. A lot of this has already happened with GTX480s and GTX470s in the wild, which have much, much better thermals and power consumption than the reference ones reviewed at launch.

*You'd be surprised how much things improve if you get rid of only the worst 2% (normal or Gaussian distribution, you know), and because they are only discarding 2% or let's say 5%, they just need to sell the cards at a 2-5% higher price, or simply rely on the higher sales volume that comes from being the best supplier for that card.
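
A toy Monte Carlo makes that footnote concrete. The distribution here (each chip's required voltage ~ Gaussian, mean 1.00 V, sigma 30 mV) is entirely made up for illustration, but it shows how expensive the tail is: covering literally every chip in a batch means shipping a far higher voltage than covering all but the worst 2%:

```c
/* Toy Monte Carlo for the binning footnote. All figures are invented:
   a reference design that must work on every chip ships near the worst
   sample's voltage; a partner who bins away the worst 2% can ship
   noticeably lower. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 100000

static int cmp(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

int main(void)
{
    static double vid[N];
    const double PI = 3.14159265358979;
    srand(42);

    for (int i = 0; i < N; i += 2) {          /* Box-Muller transform */
        double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
        double r  = sqrt(-2.0 * log(u1));
        vid[i]     = 1.00 + 0.03 * r * cos(2.0 * PI * u2);
        vid[i + 1] = 1.00 + 0.03 * r * sin(2.0 * PI * u2);
    }
    qsort(vid, N, sizeof vid[0], cmp);        /* worst chips last */

    printf("voltage to cover every chip   : %.3f V\n", vid[N - 1]);
    printf("voltage after binning worst 2%%: %.3f V\n",
           vid[(int)(N * 0.98) - 1]);
    return 0;
}
```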

The HD2900 fell really hard on its face because it had such an abysmal default clock (yes, yield/power consumption problems).
That, and the fact that the SPs on the R600 were severely under-utilized.
So the first thing that changed in the RV770 (4870/4850) is that they added a ton of SPs to the thing.

Methinks this launch, though real, was very much a 6970 pre-empt. There simply isn't much stock around - there wasn't to start with, probably less than a hundred units across those 5 stores - and prices vary from £394 (pre-order) up past £450.

But it worked regardless. I like the GTX 580. But I'm thinking, hmm, how many are there? I hope production is ramped up and not used to feed the 570/560 lines.

I also think the HD6970 will be the same: few at launch, if they do get the 13th Dec date nailed.

Would you guys recommend the Intel or the AMD? I'm tempted to go AMD again, but I'm just worried that it won't be up to the task of supporting my 580.

I would say go with AMD if you intend to overclock; if you intend to leave your CPU at stock speeds, I would expect an i7 to be a better buy than an i5 or anything from AMD.

My Phenom 965 at 4GHz does pretty well at keeping up with my pair of 6870s, but I admit an overclocked i7 would feed them better - though only because I have 2 cards. I hope to go back to a single card soon so I can keep my Phenom a little longer without it holding me back.