vSphere 6 NVIDIA GRID driver install

INSTALLING THE NVIDIA VIRTUAL GPU MANAGER FOR VSPHERE

The NVIDIA Virtual GPU Manager runs on the ESXi host. It is provided as a VIB file, which must be copied to the ESXi host and then installed.

Package installation

To install the vGPU Manager VIB, you need access to the ESXi host via the ESXi Shell or SSH. Refer to VMware’s documentation on how to enable ESXi Shell or SSH for an ESXi host.

Note: Before proceeding with the vGPU Manager installation, make sure that all VMs are powered off and the ESXi host is placed in maintenance mode. Refer to VMware’s documentation on how to place an ESXi host in maintenance mode.
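As a sketch, assuming the VIB has been copied to a datastore visible to the host, the installation typically looks like the following. The datastore path and VIB filename below are placeholders; substitute the actual location and name of the vGPU Manager VIB you copied to the host.

```shell
# Install the vGPU Manager VIB (path and filename are placeholders --
# esxcli requires the absolute path to the VIB file)
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-vGPU-Manager.vib

# Reboot the host so the NVIDIA kernel driver loads; take the host out of
# maintenance mode after it comes back up
reboot
```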

Verifying installation

After the ESXi host has rebooted, verify that the GRID package was installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules.

[root@esxi:~] vmkload_mod -l | grep nvidia
nvidia 5 8420

If the nvidia driver is not listed in the output, check dmesg for any load-time errors reported by the driver.

Verify that the NVIDIA kernel driver can successfully communicate with the GRID physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform:

[root@esxi:~] nvidia-smi

Note: Information and debug messages from the NVIDIA kernel driver are logged in dmesg, prefixed with "NVRM" or "nvidia". You can view NVIDIA kernel driver messages using:

[root@esxi:~] dmesg | grep -E "NVRM|nvidia"

CONFIGURING A VM WITH VIRTUAL GPU

Note: VMware vSphere does not support the VM console in the vSphere Web Client for VMs configured with vGPU. Make sure that you have installed an alternate means of accessing the VM (such as VMware Horizon or a VNC server) before you configure vGPU.

The VM console in the vSphere Web Client should become active again once the vGPU parameters are removed from the VM’s configuration.

To configure vGPU for a VM:

Right-click the VM in the vCenter Web UI and select Edit Settings

Select the Virtual Hardware tab

In the New device selection, select Shared PCI Device and click Add

The PCI device field should be auto-populated with NVIDIA GRID vGPU. In the GPU Profile dropdown menu, select the type of vGPU you wish to configure. The supported vGPU types are listed in Table 1.

VMs running vGPU should have all of their memory reserved. To reserve all guest memory:

Select Edit virtual machine settings from the vCenter Web UI

Expand the Memory section and check Reserve all guest memory (All locked)
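The steps above are reflected in the VM’s .vmx configuration file. As an illustrative sketch only (the exact keys and the profile name can vary by vGPU release; grid_k120q below is just an example profile), entries similar to the following appear after the vGPU and memory-reservation settings are applied:

```
pciPassthru0.present = "TRUE"
pciPassthru0.vgpu = "grid_k120q"
sched.mem.pin = "TRUE"
```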

BOOTING THE VM AND INSTALLING DRIVERS

Once you have configured a VM with a vGPU, start the VM. Because the VM console in the vSphere Web Client is not supported in this vGPU release, use VMware Horizon or VNC to access the VM’s desktop.

The VM should boot to a standard Windows desktop in VGA mode at 800×600 resolution. The Windows screen resolution control panel may be used to increase the resolution to other standard resolutions, but to fully enable vGPU operation, the NVIDIA driver must be installed, just as for a physical NVIDIA GPU.

Copy the 32- or 64-bit NVIDIA Windows driver package to the guest VM and execute it to unpack and run the driver installer.

Click through the license agreement

Select Express Installation

Once driver installation completes, the installer may prompt you to restart the platform. Select Restart Now to reboot the VM, or exit the installer and reboot the VM when ready.

Once the VM restarts, it will boot to a Windows desktop. Verify that the NVIDIA driver is running by right-clicking on the desktop. The NVIDIA Control Panel will be listed in the menu; select it to open the control panel. Selecting System Information in the NVIDIA Control Panel will report the virtual GPU that the VM is using, its capabilities, and the NVIDIA driver version that is loaded.
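In addition to the control panel, the driver can be checked from a command prompt inside the guest by running the Windows build of nvidia-smi. The install path below is an assumption (it varies by driver version and may already be on the PATH in some releases); it prints a table listing the vGPU and the loaded driver version.

```shell
"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"
```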