GPU Runs at a High Performance Level (full clock speeds) in Multi-display Modes

This is a hardware limitation and not a software bug. Even when no 3D programs are running, the driver will operate the GPU at a high performance level in order to efficiently drive multiple displays. In the case of SLI or multi-GPU PCs, the second GPU will always operate at full clock speeds, again in order to efficiently drive multiple displays. Today, all hardware from all GPU vendors has this limitation.
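
For anyone who wants to see what the driver is actually doing, you can read the clocks and performance state straight from the card. A minimal sketch, assuming the pynvml Python bindings for NVML are installed (NVML is only exposed by newer cards and drivers, so treat this as illustrative):

[code]
# Minimal sketch: read the current clocks and performance state of GPU 0.
# Assumes the pynvml bindings (pip install nvidia-ml-py) and an NVIDIA
# driver recent enough to expose NVML.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetClockInfo, nvmlDeviceGetPerformanceState,
    NVML_CLOCK_GRAPHICS, NVML_CLOCK_MEM,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)
    core = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)  # MHz
    mem = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_MEM)        # MHz
    pstate = nvmlDeviceGetPerformanceState(gpu)  # P0 = full 3D, higher = power saving
    print("core %d MHz, mem %d MHz, P%d" % (core, mem, pstate))
finally:
    nvmlShutdown()
[/code]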

So, what's the deal? This seems like a real problem for the likes of you and me (I can't be the only one here who runs dual screen). It could be an adequate reason for me to go Radeon, if Radeons are better in that respect. Thoughts?

Both. I find it unconscionable to be burning so much excess electricity on a fancy GPU when I'm basically just editing plain text documents in an IDE... and it's in exactly those situations that I want both quiet and two screens.

An answer to both issues would be a third-party BIOS or a control program like RivaTuner.

Or you could get a Radeon; they tend to run cooler and consume less power in the comparisons I've seen. If you aren't playing 3D games and have decent airflow, a PowerColor Go Green HD 5xxx card should do: they have large fanless coolers, and some are designed to consume less power than the reference cards.

ATI cards do not idle the memory clock when you run multiple monitors either. This is a long-standing "feature". I'm bemused that I could buy a Mac Mini, and that additional computer hooked to my 2nd monitor would draw less power than my current 5770-plus-two-monitors setup.

If it were only the memory clock, no one would complain. But in NVIDIA's case, it is all clocks and full 3D voltage.

Thanks for posting about this. I was pretty certain I was going to get an NVIDIA card, and maybe I still will, but all that pointlessly created noise (even with an AC Accelero Xtreme Pro) and wasted energy would be irritating.

I would appreciate the power of a GTX 580 when I need it, but not when I'm using the computer gently and it's doing little more than being on.

They seem to have made such good progress on idle power usage... and then this. Bad. I'd like to know, in simple terms, how the different manufacturers and models compare... guess I'll have to do a bit more reading.

I'm also going to measure the difference in energy my laptop consumes when the external display is disconnected. Negligible difference, I presume.

My solution is to get a low-end NVIDIA card for the second display, in my case an 8400GS from XFX. It eats under 20W while idling, and idling is all it will do, as my secondary display is for things like IRC, a web browser, console output, etc. I don't game in dual- or triple-monitor setups, so for me this solution is good enough. But having this in one card would be so much better.
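
With a two-card setup like this, it's easy to sanity-check that the secondary card really does stay idle. A rough sketch, again assuming the pynvml NVML bindings are available and that both cards are visible to NVML (which not all older ones are):

[code]
# Sketch: list every NVIDIA GPU in the box with its current clocks, to
# confirm the secondary card idles while the primary one doesn't.
# Device names and indices will of course vary per system.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetName, nvmlDeviceGetClockInfo,
    NVML_CLOCK_GRAPHICS, NVML_CLOCK_MEM,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        gpu = nvmlDeviceGetHandleByIndex(i)
        name = nvmlDeviceGetName(gpu)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        core = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)
        mem = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_MEM)
        print("GPU %d (%s): core %d MHz, mem %d MHz" % (i, name, core, mem))
finally:
    nvmlShutdown()
[/code]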

As it happens, there is a spare passive 8600GT kicking about my house. Not only is an extra 20W (or whatever it would be) unpalatable on its own, it would also be annoying to have two separate physical devices from the point of view of sharing GPU resources between them. I don't know how it works, but I presume there could be problems running a 3D application with windows on each display and shared resources.

Anyway, I suppose we have to look for whatever compromises are available. Does anyone know if it would be possible to use the Sandy Bridge IGP on an H67 motherboard?

This is normal, and it's this way to avoid tearing issues between the two monitors (according to EVGA support). My impression was that ATI cards capable of lowering their clock speed do the same thing(?)

On my card, running a dual-monitor config uses an additional ~60W. One workaround would be to use two identical monitors running at the same resolution.

I find this hard to accept when I've been running laptops with external displays for years, and never noticed higher draw / increased noise etc as a result of the two displays, generally at different resolutions. In fact, I've just been measuring power on my laptop (with 8600M GT) now, pretty much idling, in different screen configurations... the lowest readings I've got have been while running dual screen with the laptop at 1680x1050 and the external display at 1920x1080. Not the most controlled testing ever, but still...

I'm having a very hard time appreciating what could possibly be so different about what the laptop has to do to support this as opposed to a desktop.

Um, sorry, but what? No, it is not normal. Normal would be to run slightly higher clocks if needed, not full 3D clocks and voltage all the time. Ever heard of the term "overkill"? That is what NVIDIA does. I don't believe the card must run at full 3D clocks just to drive two displays instead of one.

Depends somewhat on the card. Substituting one for the other in exactly the same environment, my fanless 9600GT stays at 650 MHz at all times, but my GTX460 downclocks from 720 MHz to 405 MHz. That's still quite a way from the 50 MHz the GTX460 drops to with just one monitor connected. It doesn't matter whether the TV is on or off; simply being connected is enough to incur the middle clock speed at all times. The 9600GT appears to be locked at 650 MHz; at least when I temporarily disconnected my TV just now, it never downclocked.
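
If anyone wants to reproduce this, here's roughly how I'd watch the transitions. A sketch assuming the pynvml NVML bindings; note that older pre-Fermi cards like the 9600GT may not be visible to NVML at all:

[code]
# Sketch: log the core/memory clocks once a second so you can watch the
# card move between its idle, intermediate and full 3D states while you
# connect or disconnect the second display. Ctrl-C to stop.
import time
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetClockInfo, NVML_CLOCK_GRAPHICS, NVML_CLOCK_MEM,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)
    while True:
        core = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS)
        mem = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_MEM)
        print("%s  core %4d MHz  mem %4d MHz"
              % (time.strftime("%H:%M:%S"), core, mem))
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    nvmlShutdown()
[/code]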

Is this really a big deal? Both cards run at low temps while I'm at the desktop or doing anything at all inside Windows Media Center, and the GTX460 remained at its lowest fan speed when I had it in my HTPC. Only when I run a 3D app (i.e. game) do they actually get much hotter, and that's the only time the GTX460 increases its clock speed and its fan ramps up. I know the system power usage increases by 100W or more when the GTX460 is under load, but I've never bothered to compare the power usage at the desktop for the GTX460 with one vs. two monitors connected. I have a very hard time believing the difference would be anywhere near the 60W previously mentioned. Has anyone else tested this?

In my case it is (total system consumption at the socket):

127W with the GTX570 driving one display (downclocked to 51/101/135), 8400GS running but disconnected.
139W with the GTX570 driving one display (downclocked to 51/101/135) and the 8400GS driving the second.
178W with the GTX570 driving both displays (running at 750/1500/1950 the whole time).

That is a 39W difference between using two cards for two displays, and a 51W difference between one display and two on a single card. And those 51W mean more heat to dump, which means noisier cooling.
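
For what it's worth, the driver can sometimes report board power directly, which makes a handy cross-check against a wall meter. A sketch assuming pynvml; note the power readout is only supported on some boards and raises an error on the rest (many GeForce cards of this era don't report it):

[code]
# Sketch: read the board power draw reported by the driver, as a rough
# cross-check against wall-socket measurements. nvmlDeviceGetPowerUsage
# returns milliwatts and is unsupported on many consumer boards.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerUsage, NVMLError,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)
    try:
        print("board power: %.1f W" % (nvmlDeviceGetPowerUsage(gpu) / 1000.0))
    except NVMLError:
        print("power readout not supported on this board")
finally:
    nvmlShutdown()
[/code]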

With my old 4670 (or was it my 3870, I forget), the memory clock is changed for a reason. I didn't like the way it changed, so I tweaked the states, making the frequency the same no matter what. That resulted in artifacts and instability when the second display was enabled. Since I only used the secondary display for watching things on my LCD TV, I instead disabled the second output when I didn't need it (via an AutoHotkey script on WinXP and the Win+P shortcut on Win7).

Now I use a dedicated box for movies, so for me it's not an issue anymore, but I understand the frustration of the multi-monitor people. Lots of power wasted...

Hopefully this doesn't confuse the issue more than it already is, but on my dual-head Linux workstation, I certainly do not experience the clocks running at full speed all the time. I'm using TwinView on an Asus EN210, and while it tends to run at full speed if you do much of anything, it does clock down and up based on perceived load. Screenshot attached.

Attachment: clocks.jpg

You might try the MSI Afterburner software (http://event.msi.com/vga/afterburner/); it gives you graphs of GPU usage, so it might show whether running two displays actually taxes the GPU or not. It should also let you drop the clock speeds and fan speeds if you need to. I've been using it lately; it's pretty easy to use, informative, and it works with non-MSI cards.
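
If you'd rather check from a script than a GUI, the same question — is the GPU actually busy with two displays? — can also be answered from the driver's utilization counters. A minimal sketch, again assuming the pynvml NVML bindings:

[code]
# Sketch: check whether driving two displays actually loads the GPU.
# nvmlDeviceGetUtilizationRates returns percent-busy figures for the
# core and the memory controller over the last sampling interval.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetUtilizationRates,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)
    util = nvmlDeviceGetUtilizationRates(gpu)
    print("core %d%% busy, memory %d%% busy" % (util.gpu, util.memory))
finally:
    nvmlShutdown()
[/code]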

MSI Afterburner WILL NOT underclock your card with two displays. Actually, MSI Afterburner's 2D support is next to nonexistent. To get the stock 2D clocks, you have to create a profile with the default 3D clocks and set it as the 2D profile. MSI Afterburner will then switch to those clocks, which in the end lets the card do its standard power management. And the standard power management is the issue here: there is none if you have two displays.

So MSI Afterburner is useless for this particular issue. I don't say the program is useless; where do you think I noticed that my card didn't downclock? Your guess is right: in MSI Afterburner.

If you have two monitors running at the same resolution, it does matter whether they are the same model from the same manufacturer. If they are (for example, two HP LP2475w units), the card downclocks. If the resolution or the display model differs (for example, one HP LP2475w and one BenQ T2210HD, as in my case), the card runs at 3D clocks.

Any progress on this? I just set up a 2nd monitor, and the airflow coming out of my case is considerably hotter...

There is no progress on this; it is a "feature". If you want two displays, you either get two identical displays, or you don't connect the secondary display to your primary card but to something else, the IGP in my case.

Buy ATI? lol

My 4670 (750/1000) downclocks to 165/250 with a single screen and 200/1000 with a second monitor hooked up. Staying at 165/250 with both screens causes tearing and other artifacts on the second screen, so the extra power usage does serve a purpose. NVIDIA seems to take it one step further, though.

It's really weird that dual displays should put a bigger load on the graphics card than a single high-resolution display. A Dell U3011 is 2560x1600, while two 19" monitors total 2560x1024. I guess if you're doing productive work with dual displays, you should just use the onboard Sandy Bridge IGP instead of a discrete graphics card. If you're not gaming, it doesn't make any difference anyway.
