This article covers installing and configuring [http://www.nvidia.com NVIDIA]'s ''proprietary'' graphics card driver. For information about the open-source drivers, see [[Nouveau]].

== Installing ==

These instructions are for those using the stock {{Pkg|linux}} package. For custom kernel setup, skip to the [[#Alternate install: custom kernel|next]] subsection.

:For the very latest GPU models, it may be required to install {{AUR|nvidia-beta}} from the [[Arch User Repository]], since the stable drivers may not support the newly introduced features.

:{{Note|
* The {{Pkg|nvidia-libgl}} or ''nvidia-{304xx,173xx,96xx}-utils'' package is a dependency and will be pulled in automatically. It may conflict with the {{Pkg|libgl}} package; this is normal. If pacman asks to remove {{Pkg|libgl}} and fails due to unsatisfied dependencies, remove it with {{ic|pacman -Rdd libgl}}. Likewise, if pacman asks to remove {{Pkg|mesa-libgl}} and fails due to unsatisfied dependencies, remove it with {{ic|pacman -Rdd mesa-libgl}}.
* The {{AUR|nvidia-96xx-utils}} package requires a legacy X.Org server release ({{AUR|xorg-server1.12}}). It conflicts with {{Pkg|xorg-server}} from the official repositories.
}}

:If you are on 64-bit and also need 32-bit OpenGL support, you must also install the equivalent ''lib32'' package from the [[multilib]] repository (e.g. {{Pkg|lib32-nvidia-libgl}} or ''lib32-nvidia-{304xx,173xx}-utils'').

:{{Tip|The legacy nvidia-96xx and nvidia-173xx drivers can also be installed from the unofficial [http://pkgbuild.com/~bgyorgy/city.html <nowiki>[city] repository</nowiki>].}}

3. '''Reboot'''. The '''nvidia''' package contains a file which blacklists the ''nouveau'' module, so rebooting is necessary.

Once the driver has been installed, continue to [[#Configuring|configure]].

=== Alternate install: custom kernel ===

First of all, it is good to know how the ABS works by reading some of the other articles about it:

The {{ic|-c}} option tells makepkg to clean leftover files after building the package, whereas {{ic|-i}} specifies that makepkg should automatically run pacman to install the resulting package.

=== Automatic re-compilation of the NVIDIA module with every update of any kernel ===

This is possible thanks to {{AUR|nvidia-hook}} from the [[AUR]]. You will need to install the module sources: either {{AUR|nvidia-source}} for the stable drivers or {{AUR|nvidia-source-beta}} for the beta drivers. With ''nvidia-hook'', the automatic re-compilation is performed by an ''nvidia'' hook on [[mkinitcpio]] after forcing an update of the '''linux-headers''' package. For this to work, add 'nvidia' to the HOOKS array in {{ic|/etc/mkinitcpio.conf}}, and add 'linux-headers' as well as your custom kernel(s) headers to the SyncFirst array in {{ic|/etc/pacman.conf}}.
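As an illustration, the relevant lines might look like the following; the HOOKS contents and the {{ic|linux-custom-headers}} entry are placeholders, so adapt them to your own system and kernel:

{{hc|/etc/mkinitcpio.conf|2=HOOKS="base udev autodetect modconf block filesystems keyboard fsck nvidia"}}

{{hc|/etc/pacman.conf|2=SyncFirst = pacman linux-headers linux-custom-headers}}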

The hook will call the '''dkms''' command to update the NVIDIA module for the version of your new kernel.

{{Note|
* If you are using this functionality, it is '''important''' to look at the installation process of the linux (or any other kernel) package. The nvidia hook will tell you if anything goes wrong.
* If you would like to do this manually, see [[Dynamic Kernel Module Support#Usage|this section]] of the DKMS article.
}}

== Configuring ==

After installing the driver, it may not be necessary to create an Xorg server configuration file. You can run [[Xorg#Running|a test]] to see if the Xorg server will function correctly without a configuration file. However, it may be required to create a {{ic|/etc/X11/xorg.conf}} configuration file in order to adjust various settings. This configuration can be generated by the NVIDIA Xorg configuration tool, or it can be created manually. If created manually, it can be a minimal configuration (in the sense that it will only pass the basic options to the [[Xorg]] server), or it can include a number of settings that can bypass Xorg's auto-discovered or pre-configured options.
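For instance, a minimal configuration could consist of nothing more than a {{ic|Device}} section like the following sketch (the identifier string is arbitrary):

{{hc|/etc/X11/xorg.conf|<nowiki>
Section "Device"
        Identifier "Nvidia Card"
        Driver "nvidia"
EndSection
</nowiki>}}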

{{Tip|If upgrading from nouveau make sure to remove "{{ic|nouveau}}" from {{ic|/etc/mkinitcpio.conf}}. See [[NVIDIA#Switching between NVIDIA and nouveau drivers|Switching between NVIDIA and nouveau drivers]], if switching between the open and proprietary drivers often.}}

=== Multiple monitors ===

:''See [[Multihead]] for more general information.''

}}

==== TwinView ====

Use this if you want only one big screen instead of two. Set the {{ic|TwinView}} argument to {{ic|1}}. This option should be used instead of Xinerama (see above) if you desire compositing.

Option "TwinView" "1"

TwinView only works on a per-card basis: if you have multiple cards (and no SLI), you will have to use Xinerama or Zaphod mode (multiple X screens). You can combine TwinView with Zaphod mode, ending up, for example, with two X screens covering two monitors each. Most window managers fail miserably in Zaphod mode; Awesome is the shining exception, and KDE almost works.

The NVIDIA package provides TwinView. This tool will automatically configure all the monitors connected to your video card. This only works for multiple monitors on a single card.

To configure the Xorg server with TwinView, run:

# nvidia-xconfig --twinview

===== Manual CLI configuration with xrandr =====

If the previous solutions do not work for you, you can use the ''autostart'' feature of your window manager to run an {{ic|xrandr}} command like this one:

xrandr --output DVI-I-0 --auto --primary --left-of DVI-I-1

or:

xrandr --output DVI-I-1 --pos 1440x0 --mode 1440x900 --rate 75.0

You must adapt the {{ic|xrandr}} options with the help of the output of {{ic|xrandr}} run alone in a terminal.

==== Using NVIDIA Settings ====

You can also use the {{ic|nvidia-settings}} tool provided by {{Pkg|nvidia-utils}}. With this method, you will use the proprietary software NVIDIA provides with their drivers. Simply run {{ic|nvidia-settings}} as root, then configure as you wish, and then save the configuration to {{ic|/etc/X11/xorg.conf.d/10-monitor.conf}}.
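For reference, a saved {{ic|10-monitor.conf}} might look roughly like the following sketch; the identifiers, output names and modes are purely illustrative and will differ on your system:

{{hc|/etc/X11/xorg.conf.d/10-monitor.conf|<nowiki>
Section "Monitor"
    Identifier "Monitor0"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    Option "metamodes" "DVI-I-0: 1920x1080 +0+0, DVI-I-1: 1920x1080 +1920+0"
EndSection
</nowiki>}}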

==== ConnectedMonitor ====

If the driver does not properly detect a second monitor, you can force it to do so with ConnectedMonitor.

The duplicated device with {{ic|Screen}} is how you get X to use two monitors on one card without {{ic|TwinView}}. Note that {{ic|nvidia-settings}} will strip out any {{ic|ConnectedMonitor}} options you have added.

==== Mosaic mode ====

Mosaic mode is the only way to use more than two monitors across multiple graphics cards with compositing. Your window manager may or may not recognize the distinction between each monitor.

===== Base mosaic =====

Base mosaic mode works on any set of GeForce 8000 series or higher GPUs. It cannot be enabled from within the nvidia-settings GUI; you must either use the nvidia-xconfig command line program or edit xorg.conf by hand. Metamodes must be specified. The following is an example for four DFPs in a 2x2 configuration, each running at 1920x1024, with two DFPs connected to two cards:

If you have an SLI configuration and each GPU is a Quadro FX 5800, Quadro Fermi or newer, then you can use SLI Mosaic mode. It can be enabled from within the nvidia-settings GUI or from the command line with:

The NVIDIA package includes the {{ic|nvidia-settings}} program that allows adjustment of several additional settings.

This is documented in the [http://cgit.freedesktop.org/~aplattner/nvidia-settings/tree/src/libXNVCtrl/NVCtrl.h?id=b27db3d10d58b821e87fbe3f46166e02dc589855#n2797 nvidia-settings source code]. For this setting to persist, the command needs to be run on every startup. You can add it to your {{ic|~/.xinitrc}} file for automatic startup with X.

{{Tip|On rare occasions the {{ic|~/.nvidia-settings-rc}} may become corrupt. If this happens, the Xorg server may crash and the file will have to be deleted to fix the issue.}}

=== Enabling MSI (Message Signaled Interrupts) ===

By default, the graphics card uses a shared interrupt system. To give a small performance boost, edit {{ic|/etc/modprobe.d/modprobe.conf}} and add:

options nvidia NVreg_EnableMSI=1

To confirm, run:

{{hc|# grep nvidia /proc/interrupts|
43: 0 49 4199 86318 PCI-MSI-edge nvidia
}}

=== Advanced: 20-nvidia.conf ===

Edit {{ic|/etc/X11/xorg.conf.d/20-nvidia.conf}}, and add the option to the correct section. The Xorg server will need to be restarted before any changes are applied.

==== Enabling desktop composition ====

As of NVIDIA driver version 180.44, support for GLX with the Damage and Composite X extensions is enabled by default. Refer to [[Xorg#Composite]] for detailed instructions.

==== Disabling the logo on startup ====

Add the {{ic|"NoLogo"}} option under section {{ic|Device}}:

Option "NoLogo" "1"

==== Enabling hardware acceleration ====

{{Note|RenderAccel is enabled by default since driver version 97.46.xx.}}

Add the {{ic|"RenderAccel"}} option under section {{ic|Device}}:

Option "RenderAccel" "1"

==== Overriding monitor detection ====

The {{ic|"ConnectedMonitor"}} option under section {{ic|Device}} allows you to override monitor detection when the X server starts, which may save a significant amount of time at startup. The available options are: {{ic|"CRT"}} for analog connections, {{ic|"DFP"}} for digital monitors and {{ic|"TV"}} for televisions.

{{Note|Use "CRT" for all analog 15 pin VGA connections, even if the display is a flat panel. "DFP" is intended for DVI digital connections only.}}

==== Enabling triple buffering ====

Enable the use of triple buffering by adding the {{ic|"TripleBuffer"}} option under section {{ic|Device}}:

Option "TripleBuffer" "1"

{{Note|This option may introduce full-screen tearing and reduce performance. As of the R300 drivers, vblank is enabled by default.}}

==== Using OS-level events ====

Taken from the NVIDIA driver's [http://http.download.nvidia.com/XFree86/Linux-x86/304.51/README/xconfigoptions.html README] file: ''"[...] Use OS-level events to efficiently notify X when a client has performed direct rendering to a window that needs to be composited."'' It may help improve performance, but it is currently incompatible with SLI and Multi-GPU modes.

{{Note|This option is enabled by default in newer driver versions.}}

==== Enabling power saving ====

Add under section {{ic|Monitor}}:

Option "DPMS" "1"

==== Enabling brightness control ====

Add under section {{ic|Device}}:

Option "RegistryDwords" "EnableBrightnessControl=1"

and your brightness control does not work, try to comment it out.}}

==== Enabling SLI ====

{{Warning|As of May 7, 2011, you may experience sluggish video performance in GNOME 3 after enabling SLI.}}

Add the BusID (3 in the previous example) under section {{ic|Device}}:

To verify that SLI mode is enabled from a shell:

{{hc|<nowiki>$ nvidia-settings -q all | grep SLIMode</nowiki>|
Attribute 'SLIMode' (arch:0.0): AA
'SLIMode' is a string attribute.
'SLIMode' is a read-only attribute.
'SLIMode' can use the following target types: X Screen.
}}

==== Forcing Powermizer performance level (for laptops) ====

Add under section {{ic|Device}}:

# Force Powermizer to a certain level at all times

# level 0x2=med
# level 0x3=lowest

# AC settings:
Option "RegistryDwords" "PowerMizerLevelAC=0x3"

Option "RegistryDwords" "PowerMizerLevel=0x3"

===== Letting the GPU set its own performance level based on temperature =====

Add under section {{ic|Device}}:

Option "RegistryDwords" "PerfLevelSrc=0x3333"

==== Disable vblank interrupts (for laptops) ====

When running the interrupt detection utility [[powertop]], it can be observed that the NVIDIA driver will generate an interrupt for every vblank. To disable, place in the {{ic|Device}} section:

Option "OnDemandVBlankInterrupts" "1"

This will reduce interrupts to about one or two per second.

==== Enabling overclocking ====

{{Warning|Overclocking may damage hardware. The authors of this page take no responsibility for any damage to equipment resulting from operating products outside the specifications set by the manufacturer.}}

To enable GPU and memory overclocking, place the following line in the {{ic|Device}} section:

===== Setting static 2D/3D clocks =====

Set the following string in the {{ic|Device}} section to enable PowerMizer at its maximum performance level:

Option "RegistryDwords" "PerfLevelSrc=0x2222"

Option "Coolbits" "5"

== Tips and tricks ==

=== Fixing terminal resolution ===

Transitioning from nouveau may cause your startup terminal to display at a lower resolution. A possible solution (if you are using GRUB) is to edit the {{ic|GRUB_GFXMODE}} line of {{ic|/etc/default/grub}} with desired display resolutions. Multiple resolutions can be specified, including the default {{ic|auto}}, so it is recommended that you edit the line to resemble {{ic|GRUB_GFXMODE&#61;<desired resolution>,<fallback such as 1024x768>,auto}}. See http://www.gnu.org/software/grub/manual/html_node/gfxmode.html#gfxmode for more information.
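For example, with a 1920x1080 display (the resolutions here are illustrative), the edited line could read:

{{hc|/etc/default/grub|2=GRUB_GFXMODE=1920x1080,1024x768,auto}}

Afterwards, regenerate your GRUB configuration (e.g. with {{ic|grub-mkconfig -o /boot/grub/grub.cfg}}) for the change to take effect.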

=== Enabling Pure Video HD (VDPAU/VAAPI) ===

'''Hardware Required:'''

At least a video card with second generation PureVideo HD [http://en.wikipedia.org/wiki/Nvidia_PureVideo#Table_of_PureVideo_.28HD.29_GPUs].

'''Software Required:'''

NVIDIA video cards with the proprietary driver installed will provide video decoding capabilities with the VDPAU interface at different levels according to PureVideo generation.

You can also add support for the VA-API interface with {{Pkg|libva-vdpau-driver}}.


'''Playing HD movies on cards with low memory:'''

Additionally, increasing MPlayer's cache size in {{ic|~/.mplayer/config}} can help when the hard drive spins down while watching HD movies.
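For example, to use an 8 MiB cache (the value is only a starting point; tune it to your system), add to {{ic|~/.mplayer/config}}:

{{hc|~/.mplayer/config|2=cache=8192}}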

=== Hardware accelerated video decoding with XvMC ===

Accelerated decoding of MPEG-1 and MPEG-2 videos via [[XvMC]] is supported on GeForce4, GeForce 5 FX, GeForce 6 and GeForce 7 series cards. To use it, create a new file {{ic|/etc/X11/XvMCConfig}} with the following content:

libXvMCNVIDIA_dynamic.so.1

See how to configure [[XvMC#Supported software|supported software]].

=== Using TV-out ===

A good article on the subject can be found [http://en.wikibooks.org/wiki/NVidia/TV-OUT here].

=== X with a TV (DFP) as the only display ===

The X server falls back to CRT-0 if no monitor is automatically detected. This can be a problem when using a DVI connected TV as the main display, and X is started while the TV is turned off or otherwise disconnected.

To force NVIDIA to use DFP, store a copy of the EDID somewhere in the filesystem so that X can parse the file instead of reading EDID from the TV/DFP.

To acquire the EDID, start nvidia-settings. It will show some information in tree format; ignore the rest of the settings for now and select the GPU (the corresponding entry should be titled "GPU-0" or similar), click the {{ic|DFP}} section (again, {{ic|DFP-0}} or similar), click the {{ic|Acquire Edid}} button and store the file somewhere, for example, {{ic|/etc/X11/dfp0.edid}}.

This way, one can automatically start a display manager at boot time and still have a working and properly configured X screen by the time the TV gets powered on.

=== Check the power source ===

The NVIDIA X.org driver can also be used to detect the GPU's current source of power. To see the current power source, check the 'GPUPowerSource' read-only parameter (0 - AC, 1 - battery):

{{hc|$ nvidia-settings -q GPUPowerSource -t|1}}

If you are seeing an error message similar to the one below, you either need to install [[acpid]] or start the systemd service via {{ic|systemctl start acpid.service}}:

Line 566:

Line 608:

(If you are not seeing this error, it is not necessary to install/run acpid soley for this purpose. My current power source is correctly reported without acpid even installed.)

(If you are not seeing this error, it is not necessary to install/run acpid soley for this purpose. My current power source is correctly reported without acpid even installed.)
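Since the query only returns a bare {{ic|0}} or {{ic|1}}, a small wrapper can translate it into a readable label for status bars or scripts. This is only a sketch: the {{ic|power_source_label}} helper name is made up here, and the commented-out {{ic|nvidia-settings}} call assumes a running X session with the NVIDIA driver.

```shell
# Map the numeric GPUPowerSource value to a human-readable label.
# power_source_label is a hypothetical helper, not part of nvidia-settings.
power_source_label() {
    case "$1" in
        0) echo "AC" ;;
        1) echo "battery" ;;
        *) echo "unknown" ;;
    esac
}

# Real usage (requires X and the NVIDIA driver; not run here):
#   power_source_label "$(nvidia-settings -q GPUPowerSource -t)"
```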

=== Displaying GPU temperature in the shell ===

==== Method 1 - nvidia-settings ====

{{Note|This method requires that you are using X. Use Method 2 or Method 3 if you are not. Also note that Method 3 currently does not work with newer NVIDIA cards such as the G210/220, as well as embedded GPUs such as the Zotac IONITX's 8800GS.}}

To display the GPU temperature in the shell, use {{ic|nvidia-settings}} as follows:


In order to get just the temperature for use in utilities such as {{ic|rrdtool}} or {{ic|conky}}, among others:

{{hc|$ nvidia-settings -q gpucoretemp -t|41}}

==== Method 2 - nvidia-smi ====

Use nvidia-smi, which can read temperatures directly from the GPU without the need to use X at all. This is important for the small group of users who do not have X running on their machines, perhaps because the machine is headless, running server applications.

To display the GPU temperature in the shell, use nvidia-smi as follows:

==== Method 3 - nvclock ====

Use {{Pkg|nvclock}}, which is available from the official repositories.

{{Note|{{ic|nvclock}} cannot access thermal sensors on newer NVIDIA cards such as the G210/220.}}

There can be significant differences between the temperatures reported by nvclock and nvidia-settings/nv-control. According to [http://sourceforge.net/projects/nvclock/forums/forum/67426/topic/1906899 this post] by the author (thunderbird) of nvclock, the nvclock values should be more accurate.
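For scripting, nvidia-smi's verbose report can be reduced to just the number, similar to the nvidia-settings one-liner above. The sketch below runs the parsing on a mock of the report, since real output requires the driver; the exact field label ("Gpu" here) varies between driver versions, so adjust the pattern to your output.

```shell
# Extract just the numeric temperature from nvidia-smi's verbose output.
# sample_output mocks a 'nvidia-smi -q -d TEMPERATURE' style report.
sample_output='
    Temperature
        Gpu                     : 41 C
'

gpu_temp() {
    # Print the first number found on a line containing "Gpu"
    echo "$1" | awk -F': ' '/Gpu/ { print $2 + 0; exit }'
}

gpu_temp "$sample_output"
# Real usage (requires the driver; not run here):
#   nvidia-smi -q -d TEMPERATURE | awk -F': ' '/Gpu/ { print $2 + 0; exit }'
```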

=== Set fan speed at login ===

You can adjust the fan speed on your graphics card with the console interface of {{ic|nvidia-settings}}. First ensure that your Xorg configuration sets the Coolbits option to {{ic|4}} or {{ic|5}} in your {{ic|Device}} section to enable fan control.

Option "Coolbits" "4"


{{Note|GTX 4xx/5xx series cards cannot currently set fan speeds at login using this method. This method only allows for the setting of fan speeds within the current X session by way of nvidia-settings.}}

Place the following line in your [[xinitrc|{{ic|~/.xinitrc}}]] file to adjust the fan when you launch Xorg. Replace {{ic|''n''}} with the fan speed percentage you want to set.

You can also configure a second GPU by incrementing the GPU and fan number.


nvidia-settings -a "[gpu:0]/GPUFanControlState=1" \
-a "[gpu:1]/GPUFanControlState=1" \
-a "[fan:0]/GPUCurrentFanSpeed=''n''" \
-a "[fan:1]/GPUCurrentFanSpeed=''n''" &

If you use a login manager such as GDM or KDM, you can create a desktop entry file to process this setting. Create {{ic|~/.config/autostart/nvidia-fan-speed.desktop}} and place this text inside it. Again, change {{ic|''n''}} to the speed percentage you want.
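A minimal sketch of such a desktop entry is shown below. The {{ic|Exec}} line mirrors the single-GPU nvidia-settings command used above; the {{ic|Name}} value is arbitrary, and ''n'' must be replaced with your percentage.

```ini
[Desktop Entry]
Type=Application
Name=nvidia-fan-speed
Exec=nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=n"
```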

{{Out of date|This bug is most likely resolved. See [https://bugs.freedesktop.org/show_bug.cgi?id&#61;49534 this bug report].}}

On some machines, recent NVIDIA drivers introduce a bug(?) that causes X11 to redraw pixmaps very slowly. Switching tabs in Chrome/Chromium (with more than two tabs open) takes 1-2 seconds instead of a few milliseconds.

It seems that setting the variable '''InitialPixmapPlacement''' to '''0''' solves that problem, although (as described some paragraphs above) '''InitialPixmapPlacement=2''' should actually be the faster method.


The variable can be (temporarily) set with the command:

$ nvidia-settings -a InitialPixmapPlacement=0

To make this permanent, this call can be placed in a startup script.

=== Gaming using Twinview ===

In case you want to play fullscreen games when using Twinview, you will notice that games recognize the two screens as being one big screen. While this is technically correct (the virtual X screen really is the size of your screens combined), you probably do not want to play on both screens at the same time.


Another method that may either work alone or in conjunction with those mentioned above is [[Gaming#Starting_games_in_a_separate_X_server|starting games in a separate X server]].

=== Vertical sync using TwinView ===

If you are using TwinView and vertical sync (the "Sync to VBlank" option in '''nvidia-settings'''), you will notice that only one screen is being properly synced, unless you have two identical monitors. Although '''nvidia-settings''' does offer an option to change which screen is synced (the "Sync to this display device" option), this does not always work. A solution is to set the following environment variables at startup, for example by appending them to {{ic|/etc/profile}}:

export __GL_SYNC_TO_VBLANK=1
export __GL_SYNC_DISPLAY_DEVICE=DFP-0
export __VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE=DFP-0

You can replace {{ic|DFP-0}} with your preferred screen ({{ic|DFP-0}} is the DVI port and {{ic|CRT-0}} is the VGA port).

=== Old Xorg settings ===

If upgrading from an old installation, remove old {{ic|/usr/X11R6/}} paths, as they can cause trouble during installation.

=== Corrupted screen: "Six screens" issue ===

For some users of GeForce GT 100M cards, the screen becomes corrupted after X starts: it is divided into 6 sections with the resolution limited to 640x480.

The same problem has recently been reported with Quadro 2000 cards and high-resolution displays.


...
EndSection

=== '/dev/nvidia0' input/output error ===

{{Accuracy|Verify that the BIOS related suggestions work and are not coincidentally set while troubleshooting.|section='/dev/nvidia0' Input/Output error... suggested fixes}}

This error can occur for several different reasons, and the most common solution given for it is to check group/file permissions, which in almost every case is ''not'' the issue. The NVIDIA documentation does not go into detail about how to correct this problem, but a few things have worked for some people. The problem can be an IRQ conflict with another device or bad routing by either the kernel or your BIOS.


vmalloc=256M

If running a 64-bit kernel, a driver defect can cause the NVIDIA module to fail to initialize when IOMMU is on. Turning it off in the BIOS has been confirmed to work for some users. [http://www.nvnews.net/vbulletin/showthread.php?s=68bb2fabadcb53b10b286aa42d13c5bc&t=159335][[User:Clickthem#nvidia module]]

Another thing to try is to change your BIOS IRQ routing from {{ic|Operating system controlled}} to {{ic|BIOS controlled}} or the other way around. The first one can be passed as a kernel parameter:


{{Note|The kernel parameters can be passed either through the kernel command line or the bootloader configuration file. See your bootloader Wiki page for more information.}}

=== '/dev/nvidiactl' errors ===

Trying to start an OpenGL application might result in errors such as:

Error: Could not open /dev/nvidiactl because the permissions are too


# gpasswd -a username video
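You can verify that the membership took effect (after logging out and back in) by checking the user's group list. The sketch below checks a sample list; the {{ic|in_video_group}} helper is made up for illustration, and in real use you would feed it the output of {{ic|id -nG}}.

```shell
# Check whether "video" appears in a space-separated group list.
# in_video_group is a hypothetical helper; sample_groups mocks `id -nG username`.
sample_groups="wheel audio video users"

in_video_group() {
    case " $1 " in
        *" video "*) echo "yes" ;;
        *) echo "no" ;;
    esac
}

in_video_group "$sample_groups"
# Real usage: in_video_group "$(id -nG username)"
```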

=== 32-bit applications do not start ===

Under 64-bit systems, installing {{Pkg|lib32-nvidia-libgl}} matching the version installed for the 64-bit driver fixes the issue.

=== Errors after updating the kernel ===

If a custom build of NVIDIA's module is used instead of the package from [extra], a recompile is required every time the kernel is updated. Rebooting is generally recommended after updating kernel and graphic drivers.

=== Crashing in general ===

* Try disabling {{ic|RenderAccel}} in xorg.conf.
* If Xorg outputs an error about "conflicting memory type" or "failed to allocate primary buffer: out of memory", add {{ic|nopat}} at the end of the {{ic|kernel}} line in {{ic|/boot/grub/menu.lst}}.


More information about troubleshooting the driver can be found in the [http://www.nvnews.net/vbulletin/forumdisplay.php?s=&forumid=14 NVIDIA forums.]

=== Bad performance after installing a new driver version ===

If FPS have dropped in comparison with older drivers, first check if direct rendering is turned on:

$ glxinfo | grep direct


A possible solution is to revert to the previously installed driver version and reboot afterwards.

=== CPU spikes with 400 series cards ===

If you are experiencing intermittent CPU spikes with a 400 series card, it may be caused by PowerMizer constantly changing the GPU's clock frequency. To switch PowerMizer's setting from Adaptive to Performance, add the following to the {{ic|Device}} section of your Xorg configuration:

=== Laptops: X hangs on login/out, worked around with Ctrl+Alt+Backspace ===

If while using the legacy NVIDIA drivers Xorg hangs on login and logout (particularly with an odd screen split into two black and white/gray pieces), but logging in is still possible via Ctrl+Alt+Backspace (or whatever the new "kill X" keybind is), try adding this in {{ic|/etc/modprobe.d/modprobe.conf}}:

options nvidia NVreg_Mobile=1


See [http://http.download.nvidia.com/XFree86/Linux-x86/1.0-7182/README/readme.txt NVIDIA Driver's Readme:Appendix K] for more information.

The XRandR X extension is not presently aware of multiple display devices on a single X screen; it only sees the {{ic|MetaMode}} bounding box, which may contain one or more actual modes. This means that if multiple MetaModes have the same bounding box, XRandR will not be able to distinguish between them.

In order to support {{ic|DynamicTwinView}}, the NVIDIA driver must make each MetaMode appear to be unique to XRandR. Presently, the NVIDIA driver accomplishes this by using the refresh rate as a unique identifier.

Use {{ic|nvidia-settings -q RefreshRate}} to query the actual refresh rate on each display device.

The XRandR extension is currently being redesigned by the X.Org community, so the refresh rate workaround may be removed at some point in the future.


This workaround can also be disabled by setting the {{ic|DynamicTwinView}} X configuration option to {{ic|false}}, which will disable NV-CONTROL support for manipulating MetaModes, but will cause the XRandR and XF86VidMode visible refresh rate to be accurate.

=== No screens found on a laptop/NVIDIA Optimus ===

On a laptop, if the NVIDIA driver cannot find any screens, you may have an NVIDIA Optimus setup: an Intel chipset connected to the screen and the video outputs, and an NVIDIA card that does all the hard work and writes to the chipset's video memory.

NVIDIA has [http://www.phoronix.com/scan.php?page=news_item&px=MTE3MzY announced plans] to support Optimus in their Linux drivers at some point in the future.

You need to install the [[Intel Graphics|Intel]] driver to handle the screens, then if you want 3D software you should run it through [[Bumblebee]] to tell it to use the NVIDIA card.

==== Possible workaround ====

Enter the BIOS and change the default graphics setting from 'Optimus' to 'Discrete'; the installed NVIDIA drivers (295.20-1 at the time of writing) should then recognize the screens.

By default, DPMS should turn off the backlight with the timeouts set or by running xset. However, probably due to a bug in the proprietary NVIDIA drivers, the result is a blank screen with no power saving whatsoever. To work around it until the bug has been fixed, you can use {{ic|vbetool}} as root.

Install the {{Pkg|vbetool}} package.

Turn off your screen on demand; pressing a random key turns the backlight on again:


xrandr --output DP-1 --off; read -n1; xrandr --output DP-1 --auto

=== Blue tint on videos with Flash ===

An issue with {{Pkg|flashplugin}} versions 11.2.202.228-1 and 11.2.202.233-1 causes it to send the U/V panes in the incorrect order, resulting in a blue tint on certain videos. There are a few potential fixes for this bug:

# Right click on a video, select "Settings..." and uncheck "Enable hardware acceleration". Reload the page for it to take effect. Note that this disables GPU acceleration.
# [[Downgrading Packages|Downgrade]] the {{Pkg|flashplugin}} package to version 11.1.102.63-1 at most.
# Use {{AUR|google-chrome}} with the new Pepper API ({{AUR|chromium-pepper-flash}}).
# Try one of the few Flash alternatives.

The merits of each are discussed in [https://bbs.archlinux.org/viewtopic.php?id=137877 this thread]. To summarize: if you want all flash sites (YouTube, Vimeo, etc) to work properly in non-Chrome browsers, without feature regressions (such as losing hardware acceleration), without crashes/instability (enabling hardware decoding), without security concerns (multiple CVEs against older flash versions) and without breaking the vdpau tracing library from its intended purpose, the LEAST objectionable is to install {{AUR|libvdpau-git-flashpatch}}.

=== Bleeding overlay with Flash ===

This bug is due to the incorrect colour key being used by {{Pkg|flashplugin}} version 11.2.202.228-1 and causes the Flash content to "leak" into other pages or solid black backgrounds. To avoid this issue, simply install the latest {{Pkg|libvdpau}}, or export {{ic|1=VDPAU_NVIDIA_NO_OVERLAY=1}} within either your shell profile (e.g. {{ic|~/.bash_profile}} or {{ic|~/.zprofile}}) or {{ic|~/.xinitrc}}.

=== Full system freeze using Flash ===

If you experience occasional full system freezes (only the mouse is moving) when using flashplugin, a possible workaround is to switch off hardware acceleration in Flash by setting:

{{hc|/etc/adobe/mms.cfg|2=
EnableLinuxHWVideoDecode=0
}}

=== XOrg fails to load or Red Screen of Death ===

If you get a red screen and use GRUB, disable the GRUB framebuffer by editing {{ic|/etc/default/grub}} and uncommenting {{ic|GRUB_TERMINAL_OUTPUT}}. For more information see [[GRUB#Disable_framebuffer|GRUB]].
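After uncommenting, the relevant line looks like the sketch below. {{ic|console}} is GRUB's plain text output target; check the GRUB documentation for the values valid on your setup.

```shell
# Excerpt of /etc/default/grub with the framebuffer disabled:
# GRUB's output goes to the plain console instead of a graphical terminal.
GRUB_TERMINAL_OUTPUT=console
```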

=== Black screen on systems with Intel integrated GPU ===

If you have an Intel CPU with an integrated GPU (e.g. Intel HD 4000) and get a black screen on boot after installing the {{Pkg|nvidia}} package, this may be caused by a conflict between the graphics modules. This is solved by blacklisting the Intel GPU modules. Create the file {{ic|/etc/modprobe.d/blacklist.conf}} and prevent the ''i915'' and ''intel_agp'' modules from loading on boot:

{{hc|/etc/modprobe.d/blacklist.conf|
install i915 /bin/false
install intel_agp /bin/false
}}

=== X fails with "no screens found" with Intel iGPU ===

Like above, if you have an Intel CPU with an integrated GPU and X fails to start with


=== Alternate install: custom kernel ===

As a standard user, make a temporary directory for creating the new package:

$ mkdir -p ~/abs

Make a copy of the {{ic|nvidia}} package directory:

$ cp -r /var/abs/extra/nvidia/ ~/abs/

Go into the temporary {{ic|nvidia}} build directory:

$ cd ~/abs/nvidia

It is required to edit the files {{ic|nvidia.install}} and {{ic|PKGBUILD}} so that they contain the right kernel version variables.

While running the custom kernel, get the appropriate kernel and local version names:

$ uname -r

In {{ic|nvidia.install}}, replace the {{ic|1=EXTRAMODULES='extramodules-3.4-ARCH'}} variable with the custom kernel version, such as {{ic|1=EXTRAMODULES='extramodules-3.4.4'}} or {{ic|1=EXTRAMODULES='extramodules-3.4.4-custom'}}, depending on the kernel's version and local version text/numbers. Do this for all instances of the version number within this file.

In {{ic|PKGBUILD}}, change the {{ic|1=_extramodules=extramodules-3.4-ARCH}} variable to match the appropriate version, as above.

If there is more than one kernel installed in parallel (such as a custom kernel alongside the default -ARCH kernel), change the {{ic|1=pkgname=nvidia}} variable in the PKGBUILD to a unique identifier, such as {{ic|nvidia-344}} or {{ic|nvidia-custom}}. This will allow both kernels to use the nvidia module, since the custom nvidia module has a different package name and will not overwrite the original. You will also need to comment out the line in {{ic|package()}} that blacklists the nvidia module in {{ic|/usr/lib/modprobe.d/nvidia.conf}} (no need to do it again).

Then do:

$ makepkg -ci

The {{ic|-c}} option tells makepkg to clean left-over files after building the package, whereas {{ic|-i}} specifies that makepkg should automatically run pacman to install the resulting package.
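The version edits in {{ic|nvidia.install}} and {{ic|PKGBUILD}} can also be scripted with sed. The sketch below only demonstrates the substitution on a sample line; the {{ic|3.4.4-custom}} value is an example and must match your own {{ic|uname -r}} output.

```shell
# Replace the stock extramodules version with a custom one.
# "3.4.4-custom" is an example value; substitute your own kernel version.
custom="3.4.4-custom"

line="EXTRAMODULES='extramodules-3.4-ARCH'"
echo "$line" | sed "s/extramodules-[^']*/extramodules-$custom/"
# In-place edits against the real files would look like (not run here):
#   sed -i "s/extramodules-[^']*/extramodules-$custom/" nvidia.install
#   sed -i "s/extramodules-[^=]*$/extramodules-$custom/" PKGBUILD
```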

=== Automatic re-compilation of the NVIDIA module with every update of any kernel ===

This is possible thanks to {{AUR|nvidia-hook}} from the AUR. You will need to install the module sources: either {{AUR|nvidia-source}} for the stable drivers or {{AUR|nvidia-source-beta}} for the beta drivers. In nvidia-hook, the 'automatic re-compilation' functionality is done by a nvidia hook on mkinitcpio after forcing an update of the {{Pkg|linux-headers}} package. You will need to add 'nvidia' to the HOOKS array in {{ic|/etc/mkinitcpio.conf}}, as well as 'linux-headers' and your custom kernel(s) headers to the SyncFirst array in {{ic|/etc/pacman.conf}}, for this to work.

The hook will call the dkms command to update the NVIDIA module for the version of your new kernel.

{{Note|If you are using this functionality, it is important to watch the installation process of the {{Pkg|linux}} (or any other kernel) package. The nvidia hook will tell you if anything goes wrong.}}

== Configuring ==

It is possible that after installing the driver you may not need to create an Xorg server configuration file. You can run a test to see if the Xorg server will function correctly without a configuration file. However, it may be required to create a {{ic|/etc/X11/xorg.conf}} configuration file in order to adjust various settings. This configuration can be generated by the NVIDIA Xorg configuration tool, or it can be created manually. If created manually, it can be a minimal configuration (in the sense that it will only pass the basic options to the Xorg server), or it can include a number of settings that can bypass Xorg's auto-discovered or pre-configured options.

=== Multiple monitors ===

To activate dual screen support, you just need to edit the {{ic|/etc/X11/xorg.conf.d/10-monitor.conf}} file which you made before.

For each physical monitor, add one Monitor, Device, and Screen Section entry, and then a ServerLayout section to manage it. Be advised that when Xinerama is enabled, the NVIDIA proprietary driver automatically disables compositing. If you desire compositing, you should comment out the Xinerama line in "ServerLayout" and use TwinView (see below) instead.

TwinView

If you want only one big screen instead of two, set the TwinView argument to 1. This option should be used instead of Xinerama (see above) if you desire compositing.

Option "TwinView" "1"

TwinView only works on a per-card basis: if you have multiple cards (and no SLI?), you will have to use Xinerama or Zaphod mode (multiple X screens). You can combine TwinView with Zaphod mode, ending up, for example, with two X screens covering two monitors each. Most window managers fail miserably in Zaphod mode; Awesome is the shining exception, and KDE almost works.

Automatic configuration

The NVIDIA package provides the nvidia-xconfig tool, which can automatically configure all the monitors connected to your video card using TwinView. This only works for multiple monitors on a single card.
To configure the Xorg server with TwinView, run:

# nvidia-xconfig --twinview

Manual CLI configuration with xrandr

If the previous solutions do not work for you, you can use your window manager's autostart facility to run an xrandr command like this one:

xrandr --output DVI-I-0 --auto --primary --left-of DVI-I-1

or:

xrandr --output DVI-I-1 --pos 1440x0 --mode 1440x900 --rate 75.0

where:

--output is used to indicate the "monitor" the options apply to.

DVI-I-1 is the name of the second monitor.

--pos is the position of the second monitor relative to the first.

--mode is the resolution of the second monitor.

--rate is the refresh rate (in Hz).

Adapt the xrandr options with the help of the output of the xrandr command run alone in a terminal.
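To find out which output names (such as DVI-I-0) are available on your card, you can filter the output of xrandr. A minimal sketch follows; the sample data is mocked for illustration, and on a live system you would simply run `xrandr | awk '/ connected/ {print $1}'`:

```shell
# List connected outputs from xrandr output.
# The sample text below stands in for real xrandr output (assumption);
# output names vary per card and driver.
xrandr_sample='DVI-I-0 connected primary 1920x1080+0+0
DVI-I-1 connected 1440x900+1920+0
HDMI-0 disconnected'
# " connected" (with leading space) does not match "disconnected".
printf '%s\n' "$xrandr_sample" | awk '/ connected/ {print $1}'
```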

Using NVIDIA Settings

You can also use the nvidia-settings tool provided by nvidia-utils. With this method, you will use the proprietary software NVIDIA provides with their drivers. Simply run nvidia-settings as root, then configure as you wish, and then save the configuration to /etc/X11/xorg.conf.d/10-monitor.conf.

ConnectedMonitor

If the driver doesn't properly detect a second monitor, you can force it to do so with ConnectedMonitor.

The duplicated device with Screen is how you get X to use two monitors on one card without TwinView. Note that nvidia-settings will strip out any ConnectedMonitor options you have added.

Mosaic mode

Mosaic mode is the only way to use more than 2 monitors across multiple graphics cards with compositing. Your window manager may or may not recognize the distinction between each monitor.

Base mosaic

Base Mosaic mode works on any set of GeForce 8000 series or higher GPUs. It cannot be enabled from within the nvidia-settings GUI; you must either use the nvidia-xconfig command line program or edit xorg.conf by hand. Metamodes must be specified. The following is an example for four DFPs in a 2x2 configuration, each running at 1920x1024, with two DFPs connected to each of two cards:

SLI Mosaic

If you have an SLI configuration and each GPU is a Quadro FX 5800, Quadro Fermi or newer then you can use SLI Mosaic mode. It can be enabled from within the nvidia-settings GUI or from the command line with:

Tweaking

GUI: nvidia-settings

The NVIDIA package includes the nvidia-settings program that allows adjustment of several additional settings.

For the settings to be loaded on login, run this command from the terminal:

$ nvidia-settings --load-config-only

The desktop environment's auto-startup method may not load nvidia-settings properly (e.g. under KDE). To be sure that the settings are really loaded, put the command in the ~/.xinitrc file (create it if not present).

For a dramatic 2D graphics performance increase in pixmap-intensive applications, e.g. Firefox, set the InitialPixmapPlacement parameter to 2:

$ nvidia-settings -a InitialPixmapPlacement=2

This is documented in nvidia-settings source code. For this setting to persist, this command needs to be run on every startup. You can add it to ~/.xinitrc file for auto-startup with X.
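One way to make the addition to ~/.xinitrc idempotent is to append the line only if it is not already present. A sketch (the file path follows the article; the XINITRC override is purely for illustration/testing):

```shell
# Append the nvidia-settings call to ~/.xinitrc only once.
# XINITRC is a hypothetical override variable, not part of X itself.
xinitrc="${XINITRC:-$HOME/.xinitrc}"
line='nvidia-settings -a InitialPixmapPlacement=2'
grep -qxF "$line" "$xinitrc" 2>/dev/null || echo "$line" >> "$xinitrc"
```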

Tip: On rare occasions the ~/.nvidia-settings-rc may become corrupt. If this happens, the Xorg server may crash and the file will have to be deleted to fix the issue.

Enabling MSI (Message Signaled Interrupts)

By default, the graphics card uses a shared interrupt system. To give a small performance boost, edit /etc/modprobe.d/modprobe.conf and add:

options nvidia NVreg_EnableMSI=1

Be warned, as this has been known to damage some systems running older hardware!

To confirm, run:

# grep nvidia /proc/interrupts

43: 0 49 4199 86318 PCI-MSI-edge nvidia
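Whether MSI is active can also be checked in a script: the nvidia line of /proc/interrupts contains "PCI-MSI" when MSI is in use. A sketch using the sample line above as mocked input (live systems would read /proc/interrupts directly):

```shell
# Report whether the nvidia interrupt uses MSI.
# Sample line taken from the article; on a real system use:
#   grep nvidia /proc/interrupts
interrupts=' 43:  0  49  4199  86318  PCI-MSI-edge  nvidia'
if printf '%s\n' "$interrupts" | grep nvidia | grep -q PCI-MSI; then
    echo "MSI enabled"
else
    echo "MSI disabled"
fi
```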

Advanced: 20-nvidia.conf

Edit /etc/X11/xorg.conf.d/20-nvidia.conf, and add the option to the correct section. The Xorg server will need to be restarted before any changes are applied.

Enabling desktop composition

As of NVIDIA driver version 180.44, support for GLX with the Damage and Composite X extensions is enabled by default. Refer to the Xorg page for detailed instructions.

Disabling the logo on startup

Add the "NoLogo" option under section Device:

Option "NoLogo" "1"

Enabling hardware acceleration

Note: RenderAccel is enabled by default since drivers version 97.46.xx

Add the "RenderAccel" option under section Device:

Option "RenderAccel" "1"

Overriding monitor detection

The "ConnectedMonitor" option under section Device allows you to override monitor detection when the X server starts, which may save a significant amount of time at startup. The available options are: "CRT" for analog connections, "DFP" for digital monitors and "TV" for televisions.

The following statement forces the NVIDIA driver to bypass startup checks and recognize the monitor as DFP:

Option "ConnectedMonitor" "DFP"

Note: Use "CRT" for all analog 15 pin VGA connections, even if the display is a flat panel. "DFP" is intended for DVI digital connections only.

Enabling triple buffering

Enable the use of triple buffering by adding the "TripleBuffer" Option under section Device:

Option "TripleBuffer" "1"

Use this option if the graphics card has plenty of RAM (128 MB or more). The setting only takes effect when syncing to vblank is enabled, one of the options featured in nvidia-settings.

Note: This option may introduce full-screen tearing and reduce performance. As of the R300 drivers, vblank is enabled by default.

Using OS-level events

Taken from the NVIDIA driver's README file: "[...] Use OS-level events to efficiently notify X when a client has performed direct rendering to a window that needs to be composited." It may help improve performance, but it is currently incompatible with SLI and Multi-GPU modes.

Add under section Device:

Option "DamageEvents" "1"

Note: This option is enabled by default in newer driver versions.

Enabling power saving

Add under section Monitor:

Option "DPMS" "1"

Enabling brightness control

Add under section Device:

Option "RegistryDwords" "EnableBrightnessControl=1"

Note: If you already have this option enabled and your brightness control does not work, try commenting it out.

Enabling SLI

Warning: As of May 7, 2011, you may experience sluggish video performance in GNOME 3 after enabling SLI.

Taken from the NVIDIA driver's README appendix: This option controls the configuration of SLI rendering in supported configurations. A "supported configuration" is a computer equipped with an SLI-Certified Motherboard and 2 or 3 SLI-Certified GeForce GPUs. See NVIDIA's SLI Zone for more information.

Letting the GPU set its own performance level based on temperature

Disable vblank interrupts (for laptops)

When running the interrupt detection utility powertop, it can be observed that the NVIDIA driver generates an interrupt for every vblank. To disable this, place in the Device section:

Option "OnDemandVBlankInterrupts" "1"

This will reduce interrupts to about one or two per second.

Enabling overclocking

Warning: Please note that overclocking may damage hardware and that no responsibility may be placed on the authors of this page due to any damage to any information technology equipment from operating products out of specifications set by the manufacturer.

To enable GPU and memory overclocking, place the following line in the Device section:

Option "Coolbits" "1"

This will enable on-the-fly overclocking within an X session by running:

$ nvidia-settings

Note: GeForce 400/500/600/700 series Fermi/Kepler cards cannot currently be overclocked using the Coolbits method. The alternative is to edit and reflash the GPU BIOS, either under DOS (preferred) or within a Win32 environment, by way of nvflash and NiBiTor 6.0. The advantage of BIOS flashing is that not only can voltage limits be raised, but stability is generally improved over software overclocking methods such as Coolbits.

Setting static 2D/3D clocks

Set the following string in the Device section to enable PowerMizer at its maximum performance level:

Option "RegistryDwords" "PerfLevelSrc=0x2222"

Set one of the following two strings in the Device section to enable manual GPU fan control within nvidia-settings:

Option "Coolbits" "4"

Option "Coolbits" "5"

Tips and tricks

Fixing terminal resolution

Transitioning from nouveau may cause your startup terminal to display at a lower resolution. A possible solution (if you are using GRUB) is to edit the GRUB_GFXMODE line of /etc/default/grub with desired display resolutions. Multiple resolutions can be specified, including the default auto, so it is recommended that you edit the line to resemble GRUB_GFXMODE=<desired resolution>,<fallback such as 1024x768>,auto. See http://www.gnu.org/software/grub/manual/html_node/gfxmode.html#gfxmode for more information.
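The GRUB_GFXMODE edit can be scripted with sed; a sketch, assuming a 1920x1080 display (adjust the resolution to yours). The GRUB_CFG override variable is a hypothetical convenience for illustration/testing; the real path is /etc/default/grub, and grub.cfg must be regenerated afterwards with grub-mkconfig:

```shell
# Set GRUB_GFXMODE with sensible fallbacks in the GRUB default file.
# GRUB_CFG is an assumed override for testing; the real file is
# /etc/default/grub and editing it requires root.
grub_cfg="${GRUB_CFG:-/etc/default/grub}"
sed -i 's/^#\?GRUB_GFXMODE=.*/GRUB_GFXMODE=1920x1080,1024x768,auto/' "$grub_cfg"
grep '^GRUB_GFXMODE=' "$grub_cfg"
```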

If your graphics card does not have much memory (less than 512 MB?), you may experience glitches when watching 1080p or even 720p movies.
To avoid this, start a simple window manager such as TWM or MWM.

Additionally, increasing MPlayer's cache size in ~/.mplayer/config can help when your hard drive spins down while watching HD movies.

Hardware accelerated video decoding with XvMC

Accelerated decoding of MPEG-1 and MPEG-2 videos via XvMC is supported on GeForce4, GeForce FX, GeForce 6 and GeForce 7 series cards. To use it, create a new file /etc/X11/XvMCConfig with the following content:

Using TV-out

X with a TV (DFP) as the only display

The X server falls back to CRT-0 if no monitor is automatically detected. This can be a problem when using a DVI connected TV as the main display, and X is started while the TV is turned off or otherwise disconnected.

To force NVIDIA to use DFP, store a copy of the EDID somewhere in the filesystem so that X can parse the file instead of reading EDID from the TV/DFP.

To acquire the EDID, start nvidia-settings. It will show some information in tree format, ignore the rest of the settings for now and select the GPU (the corresponding entry should be titled "GPU-0" or similar), click the DFP section (again, DFP-0 or similar), click on the Acquire Edid Button and store it somewhere, for example, /etc/X11/dfp0.edid.

The ConnectedMonitor option forces the driver to recognize the DFP as if it were connected. The CustomEDID option provides EDID data for the device, meaning that it will start up just as if the TV/DFP were connected during the X startup process.

This way, one can automatically start a display manager at boot time and still have a working and properly configured X screen by the time the TV gets powered on.
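Putting the two options together, the Device section might look like the sketch below. This is an illustration, not a verbatim configuration from the article: the display name DFP-0 and the EDID path are assumptions (the path follows the example above), and the option names should be verified against the driver's README:

```
Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    Option     "ConnectedMonitor" "DFP-0"
    Option     "CustomEDID" "DFP-0:/etc/X11/dfp0.edid"
EndSection
```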

Check the power source

The NVIDIA X.org driver can also be used to detect the GPU's current source of power. To see the current power source, check the 'GPUPowerSource' read-only parameter (0 - AC, 1 - battery):

$ nvidia-settings -q GPUPowerSource -t

1

If you are seeing an error message similar to the one below, you either need to install acpid or start the systemd service with systemctl start acpid.service:

ACPI: failed to connect to the ACPI event daemon; the daemon
may not be running or the "AcpidSocketPath" X
configuration option may not be set correctly. When the
ACPI event daemon is available, the NVIDIA X driver will
try to use it to receive ACPI event notifications. For
details, please see the "ConnectToAcpid" and
"AcpidSocketPath" X configuration options in Appendix B: X
Config Options in the README.

(If you are not seeing this error, there is no need to install or run acpid solely for this purpose; the current power source may be reported correctly even without acpid installed.)

Displaying GPU temperature in the shell

Method 1 - nvidia-settings

Note: This method requires that you are using X. Use Method 2 or Method 3 if you are not. Also note that Method 3 currently does not work with newer NVIDIA cards such as the G210/220, as well as embedded GPUs such as the Zotac IONITX's 8800GS.

To display the GPU temp in the shell, use nvidia-settings as follows:

$ nvidia-settings -q gpucoretemp

This will output something similar to the following:

Attribute 'GPUCoreTemp' (hostname:0.0): 41.
'GPUCoreTemp' is an integer attribute.
'GPUCoreTemp' is a read-only attribute.
'GPUCoreTemp' can use the following target types: X Screen, GPU.

The GPU temperature of this board is 41 °C.

In order to get just the temperature for use in utils such as rrdtool or conky, among others:

$ nvidia-settings -q gpucoretemp -t

41
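Because the -t form prints a bare number, it is easy to use in a monitoring script. A sketch with the query mocked to the article's sample value of 41 (the 85 °C threshold is an arbitrary assumption; on a live system substitute the real nvidia-settings call):

```shell
# Warn when the GPU core temperature exceeds a threshold.
# 'temp' is mocked here; a live system would use:
#   temp=$(nvidia-settings -q gpucoretemp -t)
temp=41
threshold=85   # assumed example threshold, adjust to taste
if [ "$temp" -ge "$threshold" ]; then
    echo "GPU too hot: ${temp}C"
else
    echo "GPU OK: ${temp}C"
fi
```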

Method 2 - nvidia-smi

Use nvidia-smi, which can read temperatures directly from the GPU without the need to use X at all. This is important for the small group of users who do not have X running on their boxes, perhaps because the box is headless and running server apps.
To display the GPU temperature in the shell, use nvidia-smi as follows:

Method 3 - nvclock

There can be significant differences between the temperatures reported by nvclock and nvidia-settings/nv-control. According to this post by the author (thunderbird) of nvclock, the nvclock values should be more accurate.

Set fan speed at login

You can adjust the fan speed on your graphics card with nvidia-settings' console interface. First ensure that your Xorg configuration sets the Coolbits option to 4 or 5 in your Device section to enable fan control.

Option "Coolbits" "4"

Note: GTX 4xx/5xx series cards cannot currently set fan speeds at login using this method. This method only allows for the setting of fan speeds within the current X session by way of nvidia-settings.

Place the following line in your ~/.xinitrc file to adjust the fan when you launch Xorg. Replace n with the fan speed percentage you want to set.

If you use a login manager such as GDM or KDM, you can create a desktop entry file to process this setting. Create ~/.config/autostart/nvidia-fan-speed.desktop and place this text inside it. Again, change n to the speed percentage you want.

Troubleshooting

Bad performance, e.g. slow repaints when switching tabs in Chrome

On some machines, recent NVIDIA drivers introduce a bug(?) that causes X11 to redraw pixmaps very slowly. Switching tabs in Chrome/Chromium (with more than two tabs open) takes 1-2 seconds, instead of a few milliseconds.

It seems that setting the variable InitialPixmapPlacement to 0 solves the problem, although (as described some paragraphs above) InitialPixmapPlacement=2 should actually be the faster method.

The variable can be (temporarily) set with the command

$ nvidia-settings -a InitialPixmapPlacement=0

To make this permanent, this call can be placed in a startup script.

Gaming using Twinview

In case you want to play fullscreen games when using Twinview, you will notice that games recognize the two screens as being one big screen. While this is technically correct (the virtual X screen really is the size of your screens combined), you probably do not want to play on both screens at the same time.

To correct this behavior for SDL, try:

export SDL_VIDEO_FULLSCREEN_HEAD=1

For OpenGL, add the appropriate Metamodes to your xorg.conf in section Device and restart X:

Vertical sync using TwinView

If you're using TwinView and vertical sync (the "Sync to VBlank" option in nvidia-settings), you will notice that only one screen is being properly synced, unless you have two identical monitors. Although nvidia-settings does offer an option to change which screen is being synced (the "Sync to this display device" option), this does not always work. A solution is to add the following environment variables at startup, for example append in /etc/profile:

You can change DFP-0 with your preferred screen (DFP-0 is the DVI port and CRT-0 is the VGA port).

Old Xorg settings

If upgrading from an old installation, remove old /usr/X11R6/ paths, as they can cause trouble during installation.

Corrupted screen: "Six screens" issue

For some users with GeForce GT 100M cards, the screen becomes corrupted after X starts: it is divided into 6 sections with the resolution limited to 640x480.
The same problem has recently been reported with the Quadro 2000 and high-resolution displays.

To solve this problem, enable the Validation Mode NoTotalSizeCheck in section Device:

This error can occur for several different reasons, and the most common solution given for it is to check group/file permissions, which in almost every case is not the issue. The NVIDIA documentation does not go into detail on how to correct this problem, but a few things have worked for some people. The problem can be an IRQ conflict with another device, or bad routing by either the kernel or your BIOS.

The first thing to try is to remove other video devices, such as video capture cards, and see if the problem goes away. If there are too many video processors on the same system, the kernel may be unable to start them because of memory allocation problems with the video controller. In particular, on systems with low video memory this can occur even if there is only one video processor. In that case you should find out the amount of your system's video memory (e.g. with lspci -v) and pass allocation parameters to the kernel, e.g.:

vmalloc=64M
or
vmalloc=256M
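With GRUB, such a kernel parameter can be added to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub (a sketch; "quiet" stands in for whatever parameters your line already contains), followed by regenerating grub.cfg with grub-mkconfig:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet vmalloc=256M"
```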

If running a 64-bit kernel, a driver defect can cause the NVIDIA module to fail to initialize when the IOMMU is on. Turning it off in the BIOS has been confirmed to work for some users.

Another thing to try is to change your BIOS IRQ routing from Operating system controlled to BIOS controlled, or the other way around. The first one can be passed as a kernel parameter:

pci=biosirq

The noacpi kernel parameter has also been suggested as a solution, but since it disables ACPI completely it should be used with caution: some hardware is easily damaged by overheating.

Note: The kernel parameters can be passed either through the kernel command line or the bootloader configuration file. See your bootloader Wiki page for more information.

'/dev/nvidiactl' errors

Trying to start an opengl application might result in errors such as:

Error: Could not open /dev/nvidiactl because the permissions are too
restrictive. Please see the FREQUENTLY ASKED QUESTIONS
section of /usr/share/doc/NVIDIA_GLX-1.0/README
for steps to correct.

Solve by adding the appropriate user to the video group and relogging in:

# gpasswd -a username video
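After relogging in, group membership can be verified from the shell; a small sketch (the "video" group name is from the article):

```shell
# Check whether the current user is in the video group.
# id -nG lists the group names of the current user.
if id -nG | tr ' ' '\n' | grep -qx video; then
    echo "in video group"
else
    echo "not in video group"
fi
```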

32 bit applications do not start

Under 64-bit systems, installing the lib32-nvidia-libgl package matching the version installed for the 64-bit driver fixes the issue.

Errors after updating the kernel

If a custom build of NVIDIA's module is used instead of the package from [extra], a recompile is required every time the kernel is updated. Rebooting is generally recommended after updating the kernel and graphics drivers.

Crashing in general

Try disabling RenderAccel in xorg.conf.

If Xorg outputs an error about "conflicting memory type" or "failed to allocate primary buffer: out of memory", add nopat at the end of the kernel line in /boot/grub/menu.lst.

If the NVIDIA compiler complains about different versions of GCC between the current one and the one used for compiling the kernel, add in /etc/profile:

export IGNORE_CC_MISMATCH=1

If Xorg is crashing with a "Signal 11" while using nvidia-96xx drivers, try disabling PAT. Pass the argument nopat to kernel parameters.

More information about troubleshooting the driver can be found in the NVIDIA forums.

Bad performance after installing a new driver version

If FPS have dropped compared with older drivers, first check whether direct rendering is turned on:

$ glxinfo | grep direct

If the command prints:

direct rendering: No

then that could be an indication of the cause of the sudden FPS drop.
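The check can also be scripted. A sketch with the glxinfo output mocked to the failing case (on a live system you would pipe the real `glxinfo` output instead of the sample string):

```shell
# Report whether direct rendering is enabled, based on glxinfo output.
# Mocked sample; a live system would use: glxinfo | grep "direct rendering"
glx_sample='direct rendering: No'
if printf '%s\n' "$glx_sample" | grep -q 'direct rendering: Yes'; then
    echo "direct rendering on"
else
    echo "direct rendering off"
fi
```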

A possible solution is to revert to the previously installed driver version and reboot afterwards.

CPU spikes with 400 series cards

If you are experiencing intermittent CPU spikes with a 400 series card, it may be caused by PowerMizer constantly changing the GPU's clock frequency. To switch PowerMizer's setting from Adaptive to Performance, add the following to the Device section of your Xorg configuration:

Laptops: X hangs on login/out, worked around with Ctrl+Alt+Backspace

If while using the legacy NVIDIA drivers Xorg hangs on login and logout (particularly with an odd screen split into two black and white/gray pieces), but logging in is still possible via Ctrl-Alt-Backspace (or whatever the new "kill X" keybind is), try adding this in /etc/modprobe.d/modprobe.conf:

options nvidia NVreg_Mobile=1

One user had luck with this instead, but it makes performance drop significantly for others:

Refresh rate not detected properly by XRandR dependent utilities

The XRandR X extension is not presently aware of multiple display devices on a single X screen; it only sees the MetaMode bounding box, which may contain one or more actual modes. This means that if multiple MetaModes have the same bounding box, XRandR will not be able to distinguish between them.

In order to support DynamicTwinView, the NVIDIA driver must make each MetaMode appear to be unique to XRandR. Presently, the NVIDIA driver accomplishes this by using the refresh rate as a unique identifier.

Use $ nvidia-settings -q RefreshRate to query the actual refresh rate on each display device.

The XRandR extension is currently being redesigned by the X.Org community, so the refresh rate workaround may be removed at some point in the future.

This workaround can also be disabled by setting the DynamicTwinView X configuration option to false, which will disable NV-CONTROL support for manipulating MetaModes, but will cause the XRandR and XF86VidMode visible refresh rate to be accurate.

No screens found on a laptop/NVIDIA Optimus

On a laptop, if the NVIDIA driver cannot find any screens, you may have an NVIDIA Optimus setup: an Intel chipset connected to the screen and the video outputs, and an NVIDIA card that does all the hard work and writes to the chipset's video memory.

Black Bars while watching full screen flash videos with TwinView

Backlight is not turning off in some occasions

By default, DPMS should turn off the backlight with the timeouts set or by running xset. However, probably due to a bug in the proprietary NVIDIA drivers, the result is a blank screen with no power saving whatsoever. To work around it until the bug is fixed, you can use vbetool as root.

Turn off your screen on demand; pressing a random key then turns the backlight on again:

vbetool dpms off && read -n1; vbetool dpms on

Alternatively, xrandr is able to disable and re-enable monitor outputs without requiring root.

xrandr --output DP-1 --off; read -n1; xrandr --output DP-1 --auto

Blue tint on videos with Flash

An issue with flashplugin versions 11.2.202.228-1 and 11.2.202.233-1 causes it to send the U/V panes in the incorrect order resulting in a blue tint on certain videos. There are a few potential fixes for this bug:

The merits of each are discussed in this thread. To summarize: if you want all flash sites (YouTube, Vimeo, etc) to work properly in non-Chrome browsers, without feature regressions (such as losing hardware acceleration), without crashes/instability (enabling hardware decoding), without security concerns (multiple CVEs against older flash versions) and without breaking the vdpau tracing library from its intended purpose, the LEAST objectionable is to install libvdpau-git-flashpatchAUR.

Bleeding overlay with Flash

This bug is due to an incorrect colour key being used by flashplugin version 11.2.202.228-1, which causes the flash content to "leak" into other pages or solid black backgrounds. To avoid this issue, simply install the latest libvdpau, or export VDPAU_NVIDIA_NO_OVERLAY=1 within either your shell profile (e.g. ~/.bash_profile or ~/.zprofile) or ~/.xinitrc.

Full system freeze using Flash

If you experience occasional full system freezes (only the mouse moving) when using flashplugin and get:

A possible workaround is to switch off hardware acceleration in Flash by setting:

/etc/adobe/mms.cfg

EnableLinuxHWVideoDecode=0
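Creating this entry can be scripted; a sketch that appends the line only if it is not already present (the path is from the article and requires root to write; the ADOBE_MMS_CFG override variable is a hypothetical convenience for illustration/testing, not part of Flash):

```shell
# Disable Flash hardware video decoding in Adobe's config file.
# ADOBE_MMS_CFG is an assumed override for testing; the real path
# is /etc/adobe/mms.cfg and writing it requires root.
cfg="${ADOBE_MMS_CFG:-/etc/adobe/mms.cfg}"
mkdir -p "$(dirname "$cfg")"
grep -qxF 'EnableLinuxHWVideoDecode=0' "$cfg" 2>/dev/null \
    || echo 'EnableLinuxHWVideoDecode=0' >> "$cfg"
```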

XOrg fails to load or Red Screen of Death

If you get a red screen and use GRUB, disable the GRUB framebuffer by editing /etc/default/grub and uncommenting GRUB_TERMINAL_OUTPUT. For more information see GRUB.

Black screen on systems with Intel integrated GPU

If you have an Intel CPU with an integrated GPU (e.g. Intel HD 4000) and get a black screen on boot after installing the nvidia package, this may be caused by a conflict between the graphics modules. This is solved by blacklisting the Intel GPU modules. Create the file /etc/modprobe.d/blacklist.conf and prevent the i915 and intel_agp modules from loading on boot:

/etc/modprobe.d/blacklist.conf

install i915 /bin/false
install intel_agp /bin/false

X fails with "no screens found" with Intel iGPU

Like above, if you have an Intel CPU with an integrated GPU and X fails to start with