I noticed that AIGLX consumes a lot of CPU (about 30% when I move a window, sometimes up to 100% when I rotate the cube very fast). AIGLX is said to be designed not to consume much CPU power. What causes so much CPU consumption? Do you have any plans to optimize AIGLX support? I have a Xelo FX5200, Fedora Core 6, and the 1.0.9629 drivers (Livna's package; the same result with the nVidia generic tarball package). I tried AIGLX with an integrated Intel graphics card and it works perfectly, consuming almost no CPU power.

With nvidia, you don't need AIGLX. nvidia has its own implementation of texture_from_pixmap through plain GLX. Thus on an nvidia card, using AIGLX makes texture_from_pixmap run in software, whereas with Intel cards, the driver being open source, it may be accelerated even under AIGLX. Using GLX with nvidia makes texture_from_pixmap accelerated again, currently with a few caveats (see the black-windows topics).
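A quick way to see which GLX stack your X server is actually providing (a sketch, assuming `glxinfo` from your distro's mesa-utils/glx-utils package is installed and you're inside a running X session):

```shell
# Which GLX implementation is the X server providing?
# An nvidia driver install replaces Xorg's libglx, so the vendor
# string should read "NVIDIA Corporation" rather than "SGI" (stock AIGLX).
glxinfo | grep "server glx vendor"

# Is the extension compiz needs actually advertised?
glxinfo | grep -i texture_from_pixmap
```

If the second grep prints nothing, compiz has no texture_from_pixmap to work with at all, regardless of which server you run.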

To the nvidia forum mods/devs: this should be made clear once and for all. You should really make a sticky with a clear title explaining that AIGLX != nvidia texture_from_pixmap. That would clear up a lot of the confusion in the feedback, as more and more people assume that ( compiz & !XGL ) => AIGLX, even with nvidia.

OK, how can I get all these fancy effects without AIGLX? I've read somewhere that nVidia will support AIGLX rather than XGL. I don't really care which technology I use; I just want it to do the job.

Being a sceptic, and having experience running computers with flaky graphics cards, I haven't tried out 3D-accelerated desktops yet, but you should probably start by reading "HOWTO: Compiz with NVIDIA Graphics Drivers". There's probably also a section on the same subject in the README that came with your nvidia-drivers (installed in /usr/share/doc/nvidia-glx/README.txt.gz for me, but probably in a different place in other distros).
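For reference, the HOWTO-era setup mostly boils down to a few xorg.conf entries like the following (a sketch only; the option names come from the nvidia README of that driver generation, and your config may need more or fewer of them):

```
Section "Screen"
    # ... your existing Device/Monitor/Modes entries ...
    # ARGB visuals so transparent windows render properly under compiz
    Option "AddARGBGLXVisuals" "True"
EndSection

Section "Extensions"
    # Compositing must be enabled for compiz to run at all
    Option "Composite" "Enable"
EndSection
```

After editing, restart the X server and verify with `glxinfo` that the extensions you expect are listed.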

Thanks in advance for the answer.

(Correct me if I'm wrong.) By installing the nvidia driver (>= 9625) you are using nvidia's implementation of the GLX_EXT_texture_from_pixmap extension (provided by their libglx.so). You do need Xorg 7.1, which happens to be AIGLX-enabled (that's the stock libglx.so, which the nvidia driver replaces), and that, in my opinion, is what causes the confusion.
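You can check which libglx the running server loaded yourself (a sketch; the log and module paths below are typical for Xorg 7.1-era distros but may differ on yours):

```shell
# See what the X server logged about GLX when it started.
grep -i "glx" /var/log/Xorg.0.log | head

# The nvidia installer replaces this file with its own libglx;
# on a stock AIGLX server it is the one shipped by Xorg itself.
ls -l /usr/lib/xorg/modules/extensions/libglx.so*
```

If the log mentions the NVIDIA GLX module, you are on nvidia's GLX stack, not Xorg's AIGLX one.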

You may be right. But AIGLX works, and it didn't work with the previous version of the nVidia proprietary driver. Hm...

I'm not sure what you mean by "AIGLX works": how can you test that while using nvidia's closed driver package? I haven't tried restoring Xorg's libglx to see whether it works alongside the nvidia driver, though; that might be what you are describing.

I found the following helpful in understanding the subject: "Communication between Xgl and Xorg". It mainly talks about Xgl, but also shows what things look like with AIGLX. The whole confusion comes from the fact that the nvidia devs provide their own accelerated indirect GLX implementation via their libglx.so.

To the nvidia devs: what's the reason behind a custom implementation of libglx.so? It looks very redundant to me, but you must have your reasons.

Please, please, please, people: keep these really different things straight:
- GLX_EXT_texture_from_pixmap: the extension needed for an X-rendered window to become a texture that OpenGL can handle
- XGL: one implementation approach to the above, where two X servers run, one relaying things to the other
- AIGLX: another implementation approach, where things are rendered indirectly offscreen and then put on screen, which induces overhead
- GLX: the usual OpenGL+X path, where OpenGL renders directly to the X screen; this is the nvidia way of implementing texture_from_pixmap
