NVIDIA vGPU 8.0 is a major release with many new software features, plus support for the new Quadro RTX 6000 and RTX 8000 GPUs, NVIDIA's flagships for real-time ray tracing. The big news is that these cards now have vGPU capabilities.

In this article I have also included which public cloud instances are available with NVIDIA GPUs, and whether the license is BYO or provided by the public cloud provider (Azure, AWS, GCP).

Supported hypervisors with vGPU migration

VMware vSphere 6.7 U2 and compatible updates support vMotion with vGPU and suspend-resume with vGPU.

vSphere 6.7 supports only suspend-resume with vGPU.

vSphere releases earlier than 6.7 do not support any form of vGPU migration.
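Before relying on vMotion with vGPU, it helps to confirm which release the host is actually running; a minimal check from the ESXi shell (output format varies by build):

```shell
# Print the ESXi version and build number; vMotion with vGPU
# requires 6.7 U2 or a compatible later update.
vmware -vl
```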

Supported guest OS releases: Windows and Linux

This release of NVIDIA vGPU software provides support for the following NVIDIA GPUs on Citrix XenServer, running on validated server hardware platforms:

Tesla M6

Tesla M10

Tesla M60

Tesla P4

Tesla P6

Tesla P40

Tesla P100 PCIe 16 GB (XenMotion with vGPU is not supported.)

Tesla P100 SXM2 16 GB (XenMotion with vGPU is not supported.)

Tesla P100 PCIe 12GB (XenMotion with vGPU is not supported.)

Tesla V100 SXM2 (XenMotion with vGPU is not supported.)

Tesla V100 SXM2 32GB (XenMotion with vGPU is not supported.)

Tesla V100 PCIe (XenMotion with vGPU is not supported.)

Tesla V100 PCIe 32GB (XenMotion with vGPU is not supported.)

Tesla V100 FHHL (XenMotion with vGPU is not supported.)

Tesla T4

RTX6000

RTX8000

Quadro Virtual Workstation on Microsoft Azure

Supported Microsoft Azure VM Sizes

This release of Quadro Virtual Workstation is supported with the Microsoft Azure VM sizes listed in the tables below. Each VM size is configured with a specific number of NVIDIA GPUs in GPU pass-through mode.

NCv2 Series VM Sizes

VM Size | NVIDIA GPU | Quantity
NC6 v2 | Tesla P100 | 1
NC12 v2 | Tesla P100 | 2
NC24 v2 | Tesla P100 | 4

NCv3 Series VM Sizes

VM Size | NVIDIA GPU | Quantity
NC6 v3 | Tesla V100 | 1
NC12 v3 | Tesla V100 | 2
NC24 v3 | Tesla V100 | 4

ND Series VM Sizes

VM Size | NVIDIA GPU | Quantity
ND6 | Tesla P40 | 1
ND12 | Tesla P40 | 2
ND24 | Tesla P40 | 4

Note: If an attempt is made to use Quadro Virtual Workstation with an unsupported VM size, a warning is displayed at console login time that the VM size is unsupported.
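As an illustration, an NCv3 instance with a single Tesla V100 can be created with the Azure CLI. The resource group and VM names below are made up for the sketch, and the marketplace image URN for the Quadro Virtual Workstation offer is an assumption you would need to look up:

```shell
# Sketch: create a Standard_NC6s_v3 VM (1x Tesla V100) in Azure.
# The resource names are illustrative; find the actual Quadro vWS
# image URN with: az vm image list --publisher nvidia --all
az group create --name gpu-rg --location westus2
az vm create \
  --resource-group gpu-rg \
  --name quadro-vws-vm \
  --size Standard_NC6s_v3 \
  --image <quadro-vws-image-urn> \
  --admin-username azureuser \
  --generate-ssh-keys
```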

Guest OS Support

Quadro Virtual Workstation is available on Microsoft Azure images preconfigured with a choice of 64-bit Windows releases and Linux distributions as a guest OS.

Windows Guest OS Support

Quadro Virtual Workstation is available on Microsoft Azure VMs preconfigured only with the following 64-bit Windows releases as a guest OS:

Note:

If a specific release, even an update release, is not listed, it’s not supported.

Windows Server 2016

Linux Guest OS Support

Quadro Virtual Workstation is available on Microsoft Azure VMs preconfigured only with the following Linux releases as a guest OS:

Note:

If a specific release, even an update release, is not listed, it’s not supported.

Ubuntu 18.04 LTS

What is Multi-vGPU?

Supported Hypervisor with Multiple vGPU support

The following hypervisors are supported for assigning multiple vGPUs to a single VM:

Nutanix AHV 5.5, 5.8, 5.9, 5.10, 5.10.1

RHEL KVM 7.5 & 7.6

RHV 4.2

The assignment of more than one vGPU device to a VM is supported only on a subset of vGPUs and Red Hat Enterprise Linux with KVM releases and Nutanix AHV releases.

Supported vGPU profiles with multi-vGPU support

Only Q-series vGPUs that are allocated all of the physical GPU’s frame buffer are supported.

GPU Architecture | Board | vGPU
Volta | V100 SXM2 32GB | V100DX-32Q
Volta | V100 PCIe 32GB | V100D-32Q
Volta | V100 SXM2 | V100X-16Q
Volta | V100 PCIe | V100-16Q
Volta | V100 FHHL | V100L-16Q
Pascal | P100 SXM2 | P100X-16Q
Pascal | P100 PCIe 16GB | P100-16Q
Pascal | P100 PCIe 12GB | P100C-12Q
Pascal | P40 | P40-24Q
Pascal | P6 | P6-8Q
Pascal | P4 | P4-8Q
Maxwell | M60 | M60-8Q
Maxwell | M10 | M10-8Q
Maxwell | M6 | M6-8Q
Turing | T4 | T4-16Q
Turing | RTX6000 | RTX6000-24Q
Turing | RTX8000 | RTX8000-48Q

Maximum vGPUs per VM

NVIDIA vGPU software supports up to a maximum of four vGPUs per VM on Red Hat Enterprise Linux with KVM.
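On Red Hat Enterprise Linux with KVM, vGPU devices are created through the kernel's mediated-device (mdev) sysfs interface, and multiple devices can then be attached to one VM. A sketch of creating two full-frame-buffer profiles; the PCI address and the nvidia-XXX profile directory name are illustrative assumptions for your hardware:

```shell
# List the vGPU types offered by the physical GPU
# (the PCI address 0000:3e:00.0 is a placeholder).
ls /sys/class/mdev_bus/0000:3e:00.0/mdev_supported_types

# Create two mdev devices of a Q-series full-framebuffer profile.
# The "nvidia-105" directory name is illustrative; each such directory
# maps to a profile like V100-16Q (check its "name" file).
uuid1=$(uuidgen); uuid2=$(uuidgen)
echo "$uuid1" > /sys/class/mdev_bus/0000:3e:00.0/mdev_supported_types/nvidia-105/create
echo "$uuid2" > /sys/class/mdev_bus/0000:3e:00.0/mdev_supported_types/nvidia-105/create

# Both UUIDs can then be referenced from the same VM definition,
# e.g. as mdev hostdev entries in the libvirt domain XML.
```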

What's new in NVIDIA vGPU 8.0 (418.66 / 425.31 / 418.70)

NVIDIA has released a new version of GRID, 8.0 (418.66 vGPU Manager / 425.31 Windows guest driver / 418.70 Linux guest driver), for the NVIDIA vGPU platforms (Tesla M6, M10, M60, P4, P6, P40, P100, V100, T4, RTX6000 and RTX8000).

The vGPU Manager and Windows guest VM drivers must be installed together. Older VM drivers will not function correctly with this release of vGPU Manager. Similarly, older vGPU Managers will not function correctly with this release of the Windows guest drivers.

[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.0-418.66.x86_64.rpm   # if you have XenServer 7.0
[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.1-418.66.x86_64.rpm   # if you have XenServer 7.1
Preparing packages for installation…

NVIDIA recommends shutting down all VMs that use a GPU. The host continues to work during the update, but since you need to reboot the XenServer itself, it is better to gracefully shut down the VMs first. After your VMs have been shut down and you have upgraded the NVIDIA driver, reboot your host.

[root@localhost ~]# xe host-disable
[root@localhost ~]# xe host-reboot

Methodology 2 – the “GUI” way

1. Select Install Update… from the Tools menu.
2. Click Next after going through the instructions on the Before You Start section.
3. Click Add on the Select Update section and open NVIDIA's XenServer Supplemental Pack ISO.

If you have NVIDIA M6/M10/M60/P4/P6/P40/P100/V100/T4/RTX6000/RTX8000, select the following file:

4. Click Next on the Select Update section.
5. In the Select Servers section, select all the XenServer hosts on which the Supplemental Pack should be installed and click Next.
6. Click Next on the Upload section once the Supplemental Pack has been uploaded to all the XenServer hosts.
7. Click Next on the Prechecks section.
8. Click Install Update on the Update Mode section.
9. Click Finish on the Install Update section.

After the XenServer platform has rebooted, verify that the vGPU package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.

Validate from PuTTY or the XenCenter CLI

run lsmod | grep nvidia

Verify that the NVIDIA kernel driver can successfully communicate with the vGPU physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform:

Check that the driver version is 418.66; if it is, your host is ready for GPU awesomeness and your VMs will rock.
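The two checks above can be run in one go; a small sketch using nvidia-smi's query interface (the actual output depends on your host):

```shell
# Confirm the NVIDIA kernel module is loaded, then read the
# installed vGPU Manager driver version directly.
lsmod | grep nvidia
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```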

NVIDIA vGPU Manager 418.66 for Citrix XenServer 7.6

To upgrade an existing installation of the NVIDIA driver on Citrix XenServer 7.6, use the rpm -U command:

If you have NVIDIA TESLA M6/M10/M60/P4/P6/P40/P100/V100/T4/RTX6000/RTX8000

[root@localhost ~]# rpm -Uv NVIDIA-vGPU-xenserver-7.6-418.66.x86_64.rpm
Preparing packages for installation…

NVIDIA recommends shutting down all VMs that use a GPU. The host continues to work during the update, but since you need to reboot the XenServer itself, it is better to gracefully shut down the VMs first. After your VMs have been shut down and you have upgraded the NVIDIA driver, reboot your host.

[root@localhost ~]# xe host-disable
[root@localhost ~]# xe host-reboot

Methodology 2 – the “GUI” way

1. Select Install Update… from the Tools menu.
2. Click Next after going through the instructions on the Before You Start section.
3. Click Add on the Select Update section and open NVIDIA's XenServer Supplemental Pack ISO.

If you have NVIDIA GRID M6/M10/M60/P4/P6/P40/P100/V100/T4/RTX6000/RTX8000, select the following file:

“NVIDIA-vGPU-xenserver-7.5-418.66.x86_64.iso” if XenServer 7.5

“NVIDIA-vGPU-xenserver-7.6-418.66.x86_64.iso” if XenServer 7.6

4. Click Next on the Select Update section.
5. In the Select Servers section, select all the XenServer hosts on which the Supplemental Pack should be installed and click Next.
6. Click Next on the Upload section once the Supplemental Pack has been uploaded to all the XenServer hosts.
7. Click Next on the Prechecks section.
8. Click Install Update on the Update Mode section.
9. Click Finish on the Install Update section.

After the XenServer platform has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.

Validate from PuTTY or the XenCenter CLI

run lsmod | grep nvidia

Verify that the NVIDIA kernel driver can successfully communicate with the GRID physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform:

Check that the driver version is 418.66; if it is, your host is ready for GPU awesomeness and your VMs will rock.

GRID vGPU Manager 418.66 for VMware vSphere 6.0

To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.
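The uninstall/install cycle is not spelled out above, so here is a sketch of the typical esxcli steps; the VIB name and datastore path are illustrative placeholders, not the actual file names for this release:

```shell
# Sketch of the VIB swap on an ESXi host; check `esxcli software vib list`
# for the real installed VIB name, and substitute your datastore path.
esxcli system maintenanceMode set --enable true
esxcli software vib list | grep -i nvidia          # find the installed NVIDIA VIB name
esxcli software vib remove -n <installed-nvidia-vib-name>
esxcli software vib install -v /vmfs/volumes/<datastore>/<NVIDIA-vGPU-VIB>.vib
reboot
```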

After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.

[root@lesxi ~]# vmkload_mod -l | grep nvidia


Validate

run nvidia-smi

Verify that the NVIDIA kernel driver can successfully communicate with the GRID physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform:

Check that the driver version is 418.66; if it is, your host is ready for GPU awesomeness and your VMs will rock.

GRID vGPU Manager 418.66 for VMware vSphere 6.5

To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.

After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.

[root@lesxi ~]# vmkload_mod -l | grep nvidia


Validate

run nvidia-smi

Verify that the NVIDIA kernel driver can successfully communicate with the GRID physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform:

Check that the driver version is 418.66; if it is, your host is ready for GPU awesomeness and your VMs will rock.

GRID vGPU Manager 418.66 for VMware vSphere 6.7

To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB.

After the ESXi host has rebooted, verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of kernel loaded modules.

[root@lesxi ~]# vmkload_mod -l | grep nvidia


Validate

run nvidia-smi

Verify that the NVIDIA kernel driver can successfully communicate with the NVIDIA physical GPUs in your system by running the nvidia-smi command, which should produce a listing of the GPUs in your platform:

Check that the driver version is 418.66; if it is, your host is ready for GPU awesomeness and your VMs will rock.

Update existing NVIDIA vGPU Driver for (Virtual Machine)

Once the hypervisor's NVIDIA vGPU Manager is updated, the next step is to update the vGPU driver inside the virtual machines.

The vGPU driver for Windows 7, 8, 8.1 and 10 is available with the NVIDIA vGPU download. It is available for all of M6/M10/M60/P4/P6/P40/P100/V100/T4/RTX6000/RTX8000.

Update your golden images and reprovision the virtual machines with the updated vGPU drivers; for persistent (dedicated) machines, update the vGPU driver on each machine individually.

#HINT – An Express upgrade of the drivers is the recommended option in the setup. If you use the “Custom” option, you can do a “clean” installation. The downside of a clean installation is that it removes all profiles and custom settings; the upside is that it reinstalls the complete driver, so no old driver files are left on the system. I usually recommend a “Clean” installation to keep it vanilla 🙂

The NVIDIA vGPU API provides direct access to the frame buffer of the GPU, providing the fastest possible frame rate for a smooth and interactive user experience. If you install NVIDIA drivers before you install a VDA with HDX 3D Pro, NVIDIA vGPU is enabled by default.

To enable NVIDIA vGPU on a VM, disable the Microsoft Basic Display Adapter in Device Manager, then run the following command and restart the VDA: NVFBCEnable.exe -enable -noreset

If you install NVIDIA drivers after you install a VDA with HDX 3D Pro, NVIDIA vGPU is disabled. Enable NVIDIA vGPU by using the NVFBCEnable tool provided by NVIDIA.

To disable NVIDIA vGPU, run the following command and then restart the VDA: NVFBCEnable.exe -disable -noreset
