Dual-GPU, not SLI? Are you trolling?... I believe SLI is the term coined for linking two physically separate cards together... I've never owned a dual-GPU card; maybe someone could elaborate on the issues they've seen with dual-GPU... i.e. do you need driver updates or profiles? And what about micro-stuttering?...

From my understanding, dual-GPU is (compared to SLI) more stable, has fewer hassles, requires less power, and is upgradable to SLI in the future...

By the way, here are the price comparisons:

Actually, dual-GPU is the exact same thing as SLI; it's just done on one card. You'll have all the benefits and all the drawbacks of an SLI system with the 590.

Oh, my mistake then... I thought software would see it as a single-card solution and I wouldn't have to set "SLI" options in games or wait for driver updates... Hmm, so the only real advantage is the ability to upgrade later then, correct?

Quote:
IMPORTANT: The supplied GeForce 267.52 drivers for the GeForce GTX 590 will not stop the card from overheating when overclocking. Please use newer versions from the Nvidia website and stay away from 267.52. Otherwise this may happen ...

From a video showing the card burning up.

The TechPowerUp review was using the 267.71 driver as recommended by Nvidia, so it shouldn't be a driver issue, at least not the same one as with the 267.52 driver in your post.

I wonder a little since when Nvidia drivers have had any problems. After reading many of red777star's posts, that's very hard for me to believe. I could mention a few others, but why waste time and ink; they know who they are.

Anyway, the driver included with the released card should not blow it up.

So it's probably just some faulty part, hopefully on only a few cards rushed out without enough testing. That's not only my speculation; see this post by Micutzu at another forum.

Quote:

It's not about the cooling SKYMTL, the PWM is too weak, just as it is on the GTX 570. Two GTX 590 died the same way here, one in our test lab and one in a demo system of a reseller, both not overclocked or overvolted; Nvidia says some faulty components not found on retail parts caused it, but both us and TPU had retail package GTX 590's.

I would go with the GTX 590 because of its quietness, its 3D features, and PhysX. I'm sure it would overclock to 700 without the fan being too loud. The HD6990 is a fast card, and you would have to say it is the fastest single-card solution, but the fan is just too loud; it looks like they could have come up with a better solution for the heatsink-fan assembly. Personally, I would go with two GTX 570s. But I imagine I will be keeping what I have for a long time.

I saw most reviews, and the results really go back and forth between the cards depending on the game or settings used. Even then, relative wins or losses for either one are usually less than 10%, so it isn't really performance that dictates a winner here. The good side of this is that competition between the two cards is extremely tight, and users can't really go wrong either way.

Now the bad points... Both cards use way too much power, run too hot, and are too loud, especially if they're overclocked, with the standout here being the HD6990. Both also aren't PCI-e certified, as the power limit within the specification clearly states a 300-watt limit for a single video card, regardless of whether it's a single- or dual-GPU card... Basically, both companies didn't care about PCI-e certification when it came to power consumption and can greatly exceed it... Your power supply is going to hate this...

Then add the fact that a pair of GTX 570s in SLI, or HD 6950s/6970s in CrossFire, will perform the same or better and cost the same or less, while being quieter and having more overclocking headroom, simply because each card has its own heatsink and fan assembly... Nothing is shared, and those cards are well under the 300-watt PCI-e power limit too...

If it were me, I'd use the fact that the PCI-e specification actually allows triple-slot cooling assemblies, rather than the dual-slot solutions used on both cards. That would likely lower temperatures and noise levels, since the heatsinks would be larger and able to dissipate more heat. It's also likely that the fans wouldn't need to run as fast, so the cards would be quieter too...

As for power consumption, I'd use the same idea Asus did with their Ares line of HD 5970 cards using a custom PCB: they used a triple power-connector setup (an 8 + 8 + 6 pin arrangement) to feed the cards with up to 450 watts, giving more maneuvering room for more aggressive stock clocks right from the factory... Giving either card just 375 watts with the current setup, knowing full well both are capable of pulling 450~500 watts, especially when overclocked, is playing with fire...
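To put rough numbers on those budgets, here's a minimal sketch of the connector arithmetic, assuming the per-source limits from the PCI-e spec (75 W from the slot, 75 W per 6-pin, 150 W per 8-pin):

```python
# Rough PCI-e power-budget arithmetic (illustrative only).
# Per the PCI-e spec: slot supplies up to 75 W, a 6-pin
# connector up to 75 W, and an 8-pin connector up to 150 W.
SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def board_power_budget(connectors):
    """Maximum in-spec board power for a given connector loadout."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

# Stock GTX 590 / HD 6990 layout: two 8-pin connectors.
stock = board_power_budget(["8pin", "8pin"])         # 75 + 150 + 150 = 375 W

# Asus Ares-style layout: two 8-pin plus one 6-pin.
ares = board_power_budget(["8pin", "8pin", "6pin"])  # 375 + 75 = 450 W

print(stock, ares)  # 375 450
```

That's where the 375 W and 450 W figures above come from: the extra 6-pin buys 75 W of in-spec headroom before anything has to be drawn out of spec.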

I'd like to see someone throw a clamp meter around the cables, though... The HD6990 was supposed to be pushing the limits of the PCI-e spec, and the GTX 590 draws another 45 W beyond that. Just make sure you have some pretty hefty rails going to that GPU and you will probably be okay.
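For sizing those rails, a quick sketch of the conversion: nearly all GPU power comes off the 12 V rail(s), so current is just watts over volts (the wattage figures below are taken from the posts above; the 500 W case is an overclocking estimate, not a measurement):

```python
# Rough 12 V rail-current estimate (illustrative only).
def rail_current_amps(board_watts, rail_volts=12.0):
    """Current a GPU of the given board power pulls from the 12 V rail."""
    return board_watts / rail_volts

# 365 W = GTX 590 TDP, 450 W = triple-connector limit,
# 500 W = rough overclocked estimate from the thread.
for watts in (365, 450, 500):
    print(f"{watts} W -> {rail_current_amps(watts):.1f} A on 12 V")
# 365 W -> 30.4 A, 450 W -> 37.5 A, 500 W -> 41.7 A
```

So even at stock you're asking for roughly 30 A of 12 V capacity for the card alone, which is why a clamp-meter reading (and hefty rails) would be interesting.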

It is quite amazing what Nvidia was able to do with the cooler. Given the temps are in the same range, the card is quieter while drawing more power. What I think happened is that Nvidia was making a cooler designed for a higher TDP, and they decided to lower the clocks and bring the power level down some. This obviously hurt performance, but it helped the noise levels.

EDIT: Hmm, I'm seeing numbers all over the place. Some claim lower, some higher. It would matter which profile people were using for the AMD card, though. Nvidia lists the card at a 365 W TDP, so that keeps it in spec. There might be some headroom to overclock the card if you turn the fan to manual.

I'll quote myself...

To answer my own question, AnandTech has a nice little blurb that most likely has something to do with this.

Quote:

At this point our biggest complaint is that OCP’s operation is still not transparent to the end user. If you trigger it you have no way of knowing unless you know how the game/application should already be performing. NVIDIA tells us that at some point this will be exposed through NVIDIA’s driver API, but today is not that day. Along those lines, at least in the case of Furmark and OCCT OCP still throttles to an excessive degree—whereas AMD gets this right and caps anything and everything at the PowerTune limit, we still see OCP heavily clamp these programs to the point that our GTX 590 draws 100W more under games than it does under Furmark. Clamping down on a program to bring power consumption down to safe levels is a good idea, but clamping down beyond that just hurts the user and we hope to see NVIDIA change this.

Anyone who used Furmark for their power tests will not have valid results, as the OCP will go out of its way to throttle the program. That's also going to skew temperatures considerably, as the card is drawing a lot less power.

It is incredible that Nvidia managed to make this card at all. The same pretty much goes for ATI and their 6990.

I also can't believe that Nvidia managed to beat ATI in the quieter category. I thought the 590 would be louder than the 6990; I honestly thought there would be no way around that. So that is massive kudos to Nvidia.

The clocks on the 590 are a bit of a disappointment, though, one has to say. I was hoping the core clock would be in the high 600s. I guess that just isn't possible, or the wattage needed would be too great. It would probably have a big impact on temps as well. So the 6990 is the winner in the performance stakes, which isn't really surprising considering the clocks.

It is great to have competition in the very high-end products, though, one has to say. So well done to both ATI and Nvidia for making some beastly graphics cards. Now all we really need are some die shrinks, so come on, TSMC and Global Foundries, get your act together. Just imagine how much cooler (literally) and how much faster these cards would have been had they not been limited to the 40 nm process.

I don't see why Nvidia didn't add a third 8-pin PCI-e plug so the card gets all the power it needs. Looks like that would be better than overdrawing the slot, because they know people are going to be overclocking it. Most people who have the money for that card have a 1200-watt or 1500-watt power supply anyway.