The 177.67 NVIDIA BETA Linux graphics driver release addresses several known 2D performance issues. However, some of these performance improvements rely on new and/or experimental features of the NVIDIA X driver, some of which are not yet enabled by default. In order to achieve optimal performance with the 177.67 X driver, please read the discussion of newly added and pre-existing performance tuning options.

We plan to enable these options by default in a future NVIDIA Linux graphics driver release. Please see the more detailed notes below for a thorough description of what each of the options does.

Please note: if you still experience performance issues, please create a new thread with a detailed description of your problem.

Before creating a new thread, PLEASE:
- make sure you are using version 177.67 of the NVIDIA X driver
- attach an nvidia-bug-report.log to your message as per http://www.nvnews.net/vbulletin/showthread.php?t=46678
- make sure you have enabled all of the performance tweaks described below

To get the best performance out of the current BETA driver, we recommend that you do the following:
after the X server has started, use the following command:
# nvidia-settings -a InitialPixmapPlacement=2
This command causes X pixmaps to be placed in your GPU's video memory instead of in traditional system memory, allowing the NVIDIA X driver to optimally accelerate rendering operations involving those pixmaps. As this BETA release includes fixes for most of the remaining issues previously caused by this setting, it is strongly recommended that you add the above command to your login manager or xinit startup file (e.g. ~/.xinitrc, ~/.kderc, ~/.gnomerc).

Please note: it has been reported that using a compositing manager in conjunction with this option causes newly-created windows to briefly display random contents instead of black before they are first drawn. We are aware of this non-critical issue and plan to fix it in a future NVIDIA Linux graphics driver release.
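As a concrete sketch (the path and the duplicate-entry guard are illustrative, not an official recommendation), the command can be appended to ~/.xinitrc once like this:

```shell
# Sketch: add the InitialPixmapPlacement tweak to ~/.xinitrc so it is
# applied at every login. The grep guard avoids duplicate entries if
# the snippet is run more than once.
XINITRC="$HOME/.xinitrc"
TWEAK='nvidia-settings -a InitialPixmapPlacement=2'
touch "$XINITRC"
grep -qxF "$TWEAK" "$XINITRC" || printf '%s\n' "$TWEAK" >> "$XINITRC"
```

KDE and GNOME users would add the same line to ~/.kderc or ~/.gnomerc instead.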
add the following line in the Screen section of your X configuration file:
Option "PixmapCacheSize" "1000000"
This option reserves a chunk of your GPU's video memory for fast pixmap allocation and greatly improves the performance of any X application that relies heavily on pixmap allocation. The PixmapCache size is measured in pixels; each cached pixel uses a little more than 5 bytes of video memory, so a PixmapCacheSize of 1,000,000 reserves a little under 5MB of video memory for pixmap allocation. Feel free to increase this size for better performance, taking into account the amount of memory on your graphics card and the fact that this memory is currently unavailable for OpenGL texture allocation (we plan to lift that limitation in a future driver release). Operations such as resizing windows with Compiz or the KDE4 compositing manager are known to benefit from setting this option to a high value.

Please note: this option relies on InitialPixmapPlacement being set to 2. If you are having performance problems, please make sure InitialPixmapPlacement is set to 2 by querying its value using the following command:
# nvidia-settings -q InitialPixmapPlacement
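To illustrate the sizing arithmetic above (the 5.24 bytes-per-pixel figure is an assumption inferred from the "a little under 5MB" statement, not an exact driver constant):

```shell
# Sketch: estimate the video memory reserved by a given PixmapCacheSize.
# Assumes ~5.24 bytes per cached pixel, scaled by 100 to stay in
# integer shell arithmetic.
pixels=1000000
bytes=$(( pixels * 524 / 100 ))
echo "PixmapCacheSize $pixels reserves about $bytes bytes of video memory"
```

Scale the `pixels` value up or down to see how much video memory a larger cache would take away from OpenGL texture allocation.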
add the following line in the Screen section of your X configuration file:
Option "AllowSHMPixmaps" "0"
This option prevents applications from allocating Shared Memory pixmaps. While such pixmaps generally yield better performance for non-accelerated operations, they cannot be permanently stored in video memory by the NVIDIA X driver. As this leaves the NVIDIA X driver unable to optimally accelerate rendering operations involving such pixmaps, it is highly recommended that you set this option to 0 for best performance.
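For reference, a minimal Screen section combining both X configuration options above might look like the following; the Identifier and Device names are placeholders from a typical generated xorg.conf, so keep whatever your existing file uses:

```
Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Option         "PixmapCacheSize" "1000000"
    Option         "AllowSHMPixmaps" "0"
EndSection
```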

If you own a GeForce 8, 9 or GTX series GPU, it is also strongly encouraged that you do the following:
after the X server has started, use the following command:
# nvidia-settings -a GlyphCache=1
This command will allocate a RENDER GlyphSet caching surface to store Xft fonts in video memory. This allows the NVIDIA X driver to optimally accelerate text rendering, effectively making anti-aliased and subpixel-hinted anti-aliased text rendering as fast as regular text rendering. It is strongly recommended that you add the above command to your login manager or xinit startup file (e.g. ~/.xinitrc, ~/.kderc, ~/.gnomerc).
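As a sketch, both per-session nvidia-settings tweaks could be collected in one small helper script (the path below is arbitrary) and invoked from your login manager or xinit startup file:

```shell
# Sketch: write a helper script (arbitrary location) that applies both
# per-session nvidia-settings tweaks, then mark it executable so it can
# be called from ~/.xinitrc or a login manager.
cat > "$HOME/nvidia-2d-tweaks.sh" <<'EOF'
#!/bin/sh
nvidia-settings -a InitialPixmapPlacement=2
nvidia-settings -a GlyphCache=1
EOF
chmod +x "$HOME/nvidia-2d-tweaks.sh"
```

Keeping both commands in one place makes it easy to remove them once a future driver release enables these options by default.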

We apologize for the burden of requiring users to set these options. We will refine our support for these experimental acceleration features during the course of the 177.x driver release cycle and plan to enable them by default, so that future NVIDIA Linux graphics driver releases deliver optimal performance out of the box. Please report any issues caused by setting these options; your feedback is greatly appreciated and will enable us to further improve the performance of the NVIDIA X driver.

Please note: Some users have reported poor performance measurements on certain RENDER operations (such as those reported by the xrenderbenchmark program) as a bug. The NVIDIA X driver does not currently accelerate Disjoint and Conjoint operations, causing a software fall-back and very low performance results. As these operations are rarely used outside of benchmarks, we have no immediate plans to accelerate them. They are not used by KDE4, and we do not believe they cause performance issues for end users.