choosing between GPUs, even nvidia ones

Hi,

I discovered a way to trick the NVIDIA driver into using any NVIDIA GPU of your choice from those available on the machine.
It works, but unfortunately it has a brief side effect that is visible to the user.

Choosing between any other combination of GPUs can be done perfectly without any drawbacks.
First, force opengl32 to load the appropriate ICD - this is very easy:
HDC dc = CreateDCA("\\\\.\\DISPLAY3", NULL, NULL, NULL); // substitute "\\\\.\\DISPLAY3" with the device name you need
SetPixelFormat(dc, 1, NULL); // a NULL descriptor is enough to make opengl32 load the ICD for this device
DeleteDC(dc);
At this point the ICD is loaded.
If there is only one GPU controlled by this ICD (read: vendor), then you can just go ahead
with the normal OpenGL initialization - the GPU that will be used is the one corresponding to the display you specified (e.g. \\\\.\\DISPLAY3).
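Putting the ICD-forcing step together with ordinary context creation, a sketch might look like this (the function name and pixel-format choices are mine, not from the post; it assumes you already have a window to render into):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Force opengl32.dll to load the ICD for the given display device,
// then create a GL context on an existing window the usual way.
HGLRC create_context_on_display(const char* displayName, HWND wnd)
{
    // Step 1: trick opengl32 into loading the right ICD.
    HDC icdDc = CreateDCA(displayName, NULL, NULL, NULL);
    SetPixelFormat(icdDc, 1, NULL); // NULL descriptor is enough to trigger ICD loading
    DeleteDC(icdDc);

    // Step 2: normal context creation on our own window.
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1 };
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    HDC dc  = GetDC(wnd);
    int fmt = ChoosePixelFormat(dc, &pfd);
    SetPixelFormat(dc, fmt, &pfd);
    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);
    return rc;
}
```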

However, if this ICD controls more than one GPU, you need to do some further work to choose between them.
On ATI you must create a window that lies within one of the monitors attached to the GPU you are interested in,
and then set up the OpenGL context using it. This window can stay invisible, so there are no annoying side effects here.
The ATI driver deduces which GPU to use from the window's location.
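A minimal sketch of the ATI case (my helper name; the HMONITOR would come from EnumDisplayMonitors):

```cpp
#include <windows.h>

// Create an invisible window positioned inside the given monitor's
// rectangle, so the ATI driver associates a GL context created on it
// with that monitor's GPU.
HWND create_window_on_monitor(HMONITOR mon)
{
    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfo(mon, &mi);

    // No WS_VISIBLE flag: the window exists but is never shown.
    return CreateWindowA("STATIC", "gl_dummy", WS_POPUP,
                         mi.rcMonitor.left, mi.rcMonitor.top,
                         64, 64, // small size, fully inside the monitor
                         NULL, NULL, GetModuleHandle(NULL), NULL);
}
```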

Now the hardest part - if you have more than one NVIDIA GPU.
Their driver does not provide any "normal" means for the programmer to choose which GPU to use.
But it always seems to use Windows' primary display device.
So here is the trick - you can temporarily change the primary device with ChangeDisplaySettingsEx,
then create the context, and then change the primary device back to what it was. The GL context, once created, will stay on the same GPU.
Changing the primary display with ChangeDisplaySettingsEx is a bit awkward, but it works - you must rearrange all displays by setting
their positions in the virtual desktop with the flags CDS_UPDATEREGISTRY|CDS_NORESET; the new primary display must also have the flag CDS_SET_PRIMARY
AND must be at position (0,0); finally, call ChangeDisplaySettingsEx(0,0,0,0,0). You must make sure no two displays overlap, otherwise it won't work.
Use EnumDisplayMonitors to find all display devices that are part of the virtual screen.
Before you do all this, you may want to create a topmost window that covers the entire virtual screen
in order to hide the temporary shuffling from the user and present him with, e.g., black screens instead.
Also, there is apparently a Windows bug that makes maximized windows look strange after the mode change - they look
non-maximized but are actually maximized. I fix this by collecting all maximized windows in a list (EnumWindows) and, after the above operation is done,
calling ShowWindow(w, SW_RESTORE) AND ShowWindow(w, SW_MAXIMIZE) for each of them - this fixes them. Finally, I destroy the window that covers the entire virtual screen.
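The display-shuffling step described above could be sketched like this (my function name; it walks the adapters with EnumDisplayDevices/EnumDisplaySettings and shifts all positions so the chosen display lands at (0,0); the black cover window and maximized-window fixup are left out):

```cpp
#include <windows.h>
#include <cstring>

// Temporarily make the given device (e.g. "\\\\.\\DISPLAY3") the primary
// display by re-registering every display's position shifted so that the
// chosen one sits at (0,0), then applying all changes at once.
void make_primary(const char* newPrimary)
{
    DISPLAY_DEVICEA dd = { sizeof(dd) };
    POINT shift = { 0, 0 };

    // Pass 1: find the current position of the chosen display.
    for (DWORD i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); ++i) {
        if (!(dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)) continue;
        DEVMODEA dm = {}; dm.dmSize = sizeof(dm);
        if (!EnumDisplaySettingsA(dd.DeviceName, ENUM_CURRENT_SETTINGS, &dm)) continue;
        if (strcmp(dd.DeviceName, newPrimary) == 0) {
            shift.x = dm.dmPosition.x;
            shift.y = dm.dmPosition.y;
        }
    }

    // Pass 2: register every display shifted so the chosen one is at (0,0).
    for (DWORD i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); ++i) {
        if (!(dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)) continue;
        DEVMODEA dm = {}; dm.dmSize = sizeof(dm);
        if (!EnumDisplaySettingsA(dd.DeviceName, ENUM_CURRENT_SETTINGS, &dm)) continue;
        dm.dmPosition.x -= shift.x;
        dm.dmPosition.y -= shift.y;
        dm.dmFields = DM_POSITION;
        DWORD flags = CDS_UPDATEREGISTRY | CDS_NORESET;
        if (strcmp(dd.DeviceName, newPrimary) == 0)
            flags |= CDS_SET_PRIMARY; // new primary must end up at (0,0)
        ChangeDisplaySettingsExA(dd.DeviceName, &dm, NULL, flags, NULL);
    }

    // Apply all registered changes in one go.
    ChangeDisplaySettingsExA(NULL, NULL, NULL, 0, NULL);
}
```

Restoring the original layout afterwards is the same procedure run with the saved positions.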

How can you tell which vendor a given device belongs to? From EnumDisplayDevices you can get the vendor ID of the device (NVIDIA = 0x10DE, ATI = 0x1002, Intel = 0x8086).
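Concretely, EnumDisplayDevices fills DISPLAY_DEVICE::DeviceID with a string like "PCI\VEN_10DE&DEV_1180&SUBSYS_...", and the vendor ID can be parsed out of it. A small helper (hypothetical name, plain string parsing so it is not Windows-specific):

```cpp
#include <cstring>
#include <cstdlib>

// Extract the PCI vendor ID from a DeviceID string as reported by
// EnumDisplayDevices, e.g. "PCI\\VEN_10DE&DEV_1180&SUBSYS_...".
// Returns 0 if no "VEN_" field is found.
unsigned vendor_from_device_id(const char* deviceId)
{
    const char* p = strstr(deviceId, "VEN_");
    if (!p) return 0;
    // strtoul stops at the '&' separator after the hex digits.
    return (unsigned)strtoul(p + 4, NULL, 16);
}
```

Comparing the result against 0x10DE / 0x1002 / 0x8086 then tells you whether the device is NVIDIA, ATI, or Intel.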

Edit:
It appears the "side effect" from ChangeDisplaySettingsEx is worse than I thought. After the display devices are rearranged, the currently visible windows get moved around between the displays in a totally unpredictable way. Which display each particular window ends up on is random, and differently random on every try. Microsoft seems to have made quite a mess of this. The same happens when you change the primary display from the Windows control panel dialog, so this is a more general misbehavior of the OS, not a problem with our concrete usage. The situation appears to be worse when composition is enabled.
Still, if you don't mind this mess, the GPU choosing works fine.

Edit2:
I found a better way to trick the NVIDIA driver, without any side effects:
it detects the primary display by checking which monitor GetMonitorInfo reports at position (0,0).
So all we need to do is hook GetMonitorInfo to return position (0,0) for our chosen device, and it works.
The driver does this detection only once, and after that you are stuck with the same device for the lifetime of the process.
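The replacement function for such a hook could look roughly like this (my sketch: g_chosenMonitor is an assumed global, and the actual hook installation - IAT patching, Microsoft Detours, MinHook, etc. - is left out):

```cpp
#include <windows.h>

// Monitor we want the NVIDIA driver to believe is primary.
static HMONITOR g_chosenMonitor;
// Pointer to the real GetMonitorInfoA, saved when the hook is installed.
static BOOL (WINAPI *real_GetMonitorInfoA)(HMONITOR, LPMONITORINFO) = GetMonitorInfoA;

// Hooked GetMonitorInfoA: for the chosen monitor, shift the reported
// rectangle so its top-left corner is (0,0) and mark it primary.
BOOL WINAPI hooked_GetMonitorInfoA(HMONITOR mon, LPMONITORINFO mi)
{
    BOOL ok = real_GetMonitorInfoA(mon, mi);
    if (ok && mon == g_chosenMonitor) {
        LONG w = mi->rcMonitor.right - mi->rcMonitor.left;
        LONG h = mi->rcMonitor.bottom - mi->rcMonitor.top;
        mi->rcMonitor.left = 0;  mi->rcMonitor.top = 0;
        mi->rcMonitor.right = w; mi->rcMonitor.bottom = h;
        mi->dwFlags |= MONITORINFOF_PRIMARY;
    }
    return ok;
}
```

Since the driver checks only once, the hook can be removed again right after the first context is created.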

I guess it is a matter of policy for them and they won't do it. Are they really trying to promote their expensive Quadro line with this? That seems silly to me, but what do I know.

I find it silly to do that at the expense of OpenGL, given that D3D allows programmers to choose the device.
It's not like OpenGL is extremely popular on the PC anyway, that we should cut basic features from it to promote something else.

You should nag NVIDIA into exposing WGL_NV_gpu_affinity on consumer cards.

AMD exposes a similar WGL_AMD_gpu_association on consumer cards.

The GPU association is not needed for on-screen windows on AMD cards. The driver automatically selects the GPU that "has the most pixels" at window creation time. It's only useful for off-screen FBOs.
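For reference, using WGL_AMD_gpu_association for such an off-screen context looks roughly like this (a sketch under the assumption that a throwaway GL context is already current, so wglGetProcAddress can resolve the extension's entry points):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Entry points from the WGL_AMD_gpu_association extension.
typedef UINT  (WINAPI *PFNWGLGETGPUIDSAMDPROC)(UINT maxCount, UINT *ids);
typedef HGLRC (WINAPI *PFNWGLCREATEASSOCIATEDCONTEXTAMDPROC)(UINT id);
typedef BOOL  (WINAPI *PFNWGLMAKEASSOCIATEDCONTEXTCURRENTAMDPROC)(HGLRC hglrc);

// Create an off-screen context associated with the first reported AMD GPU.
HGLRC create_on_first_amd_gpu(void)
{
    PFNWGLGETGPUIDSAMDPROC wglGetGPUIDsAMD =
        (PFNWGLGETGPUIDSAMDPROC)wglGetProcAddress("wglGetGPUIDsAMD");
    PFNWGLCREATEASSOCIATEDCONTEXTAMDPROC wglCreateAssociatedContextAMD =
        (PFNWGLCREATEASSOCIATEDCONTEXTAMDPROC)wglGetProcAddress("wglCreateAssociatedContextAMD");
    PFNWGLMAKEASSOCIATEDCONTEXTCURRENTAMDPROC wglMakeAssociatedContextCurrentAMD =
        (PFNWGLMAKEASSOCIATEDCONTEXTCURRENTAMDPROC)wglGetProcAddress("wglMakeAssociatedContextCurrentAMD");
    if (!wglGetGPUIDsAMD || !wglCreateAssociatedContextAMD || !wglMakeAssociatedContextCurrentAMD)
        return NULL; // extension not available

    UINT ids[8] = { 0 };
    UINT count = wglGetGPUIDsAMD(8, ids);
    if (count == 0)
        return NULL;

    HGLRC rc = wglCreateAssociatedContextAMD(ids[0]);
    if (rc)
        wglMakeAssociatedContextCurrentAMD(rc);
    return rc;
}
```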