
There used to be. But don't expect anything from it. About a year ago I thoroughly tested fglrx for OpenGL implementation bugs and found quite a number of them. I wrote test cases and demonstration programs reliably triggering the bugs (including X.Org crashes and HW DoS, i.e. you'd have to reboot the machine) and submitted it all. I never got even a status update.

They don't update statuses on that bugtracker. That bugtracker is only used for submitting issues to the Catalyst Linux team.
So what is the status of the issues you submitted against the current driver? I mean, are they still reproducible in Catalyst 12.10-12.11?

Originally Posted by datenwolf

Regarding bugs in NVidia drivers: I usually report them directly to my contacts at NVidia, but last time I found a bug first in NVidia drivers was 2006.

Thanks for the writeup Datenwolf, appreciate it a lot.
So do you think I would have more success with a multi-GPU setup of the same family? I've also been reading up on using the Eyefinity DisplayPorts on my 7850. I was looking at trying to use the 7850 for everything: HDMI for the TV, two mini DisplayPort to DVI adapters, and the last DVI port.

The only way to use multi-GPU multi-display is with zaphod mode, optionally combined with Xinerama if you want all the displays to act as one big desktop. The X server still supports zaphod mode and Xinerama. Basically, no one has written the code yet to support this in X the way it works in Windows. The PRIME/hotplug stuff Dave recently landed in the latest X server lays the groundwork, but it has not yet been extended to support multi-GPU multi-display. Finishing that would be a good project for someone looking to get a deep understanding of X server internals.
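For anyone wanting to try the zaphod + Xinerama route described above, a minimal xorg.conf sketch looks something like this. The identifiers and BusIDs here are hypothetical placeholders (check yours with lspci), and the driver name will depend on your cards:

```
# Two GPUs, one Screen each, joined into one desktop via Xinerama.
# BusID values below are examples only -- get the real ones from lspci.

Section "Device"
    Identifier "Card0"
    Driver     "radeon"
    BusID      "PCI:1:0:0"
EndSection

Section "Device"
    Identifier "Card1"
    Driver     "radeon"
    BusID      "PCI:2:0:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device     "Card1"
EndSection

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0"
    Screen 1 "Screen1" RightOf "Screen0"
    Option "Xinerama" "on"
EndSection
```

Note that enabling Xinerama disables RandR on current servers, and with the displays merged into one logical screen you lose per-screen GPU acceleration tricks like independent compositing.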

IMHO GPUs should be treated as co-processors that can be used by any program (given the right permissions) without requiring some on-screen framebuffer to be available. What a GPU renders should not go out to a display device directly, but to some portion of memory (the bandwidth of PCI-E suffices for this). The output connectors (by which I mean the image transmitters) should not depend on the GPU's RAM, but on a separate portion of memory, and should work independently of the GPU's drawing operations. Programs like X.Org would only connect to the display transmitters, which would act like a 1990s-style VGA framebuffer-to-display adapter with no HW drawing acceleration at all. And it should be possible to map the render output of the GPU on card A to the display transmitter memory on card B.

I agree 100%. GPUs should be treated as co-processors, and it should be possible to pass information between them if so desired.

As it turns out, today's hardware is perfectly capable of doing this. It's just that the current Linux graphics driver model doesn't support it. And unfortunately the kernel functions required to support it (DMA-BUF) have been exported GPL-only, which means NVidia's proprietary driver will probably never be able to use them.