Jason Southern from NVIDIA released a video this week describing how to measure and understand framebuffer usage for the technologies used by Citrix XenApp, XenDesktop and HDX; the video is available here.
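The video covers NVIDIA's own tooling, but as a rough sketch, framebuffer usage can also be polled from the command line with nvidia-smi. The helper below is my own illustrative code (not from the video) and simply parses the CSV that query emits:

```python
import csv
import io

# Framebuffer counters can be pulled from the command line with, e.g.:
#   nvidia-smi --query-gpu=name,memory.total,memory.used,memory.free \
#              --format=csv,noheader,nounits
# The helper below parses that CSV output (with 'nounits' the values are MiB).

def parse_fb_usage(csv_text):
    """Turn nvidia-smi CSV rows into dicts of framebuffer usage in MiB."""
    rows = []
    for name, total, used, free in csv.reader(io.StringIO(csv_text),
                                              skipinitialspace=True):
        rows.append({
            "name": name,
            "total_mib": int(total),
            "used_mib": int(used),
            "free_mib": int(free),
        })
    return rows

# Illustrative output for a single GRID K2 GPU (the numbers are made up):
sample = "GRID K2, 4095, 812, 3283"
usage = parse_fb_usage(sample)
print(usage[0]["used_mib"])  # 812
```

Run repeatedly (or from a scheduled task) this gives a crude time series of framebuffer consumption per GPU while users work in the application.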

I’ve blogged before about how to measure GPU and CPU usage for NVIDIA GRID and other GPU technologies. Usually a 3D/CAD/graphically rich application will be limited by one particular resource. To understand application performance you need to consider RAM, CPU, GPU and vCPU contention; some links that might help you are here:

Typically for most applications one resource will be exhausted before the others, and often this is CPU. For some applications, though, the available framebuffer may be the bottleneck limiting performance. This is most likely for applications involving large data sets, e.g. Esri ArcGIS, Petrel and CAD applications handling very large parts.

Every GPU or vGPU has an allocated framebuffer:

When vGPU was initially launched there were vGPU profiles such as the K100 and K200 with a small 256MB framebuffer; users often noticed corruption when running Unigine Heaven, which requires a minimum of 512MB of framebuffer. The same artefacts appear on physical GPUs with the same amount of framebuffer, so they should be expected on physical servers as well as virtualised ones if too small a framebuffer is used.
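As a sketch of why profile choice matters, the check below compares an application's minimum framebuffer requirement against per-profile sizes. The K100/K200 figures come from the text above; the K140Q/K240Q entries are my assumption and purely illustrative:

```python
# Framebuffer per vGPU profile, in MiB. K100/K200 are from the text;
# the Q profiles are assumed values for illustration only.
PROFILE_FB_MIB = {"K100": 256, "K200": 256, "K140Q": 1024, "K240Q": 1024}

def profiles_meeting(minimum_mib, profiles=PROFILE_FB_MIB):
    """Return the profiles whose framebuffer meets an app's stated minimum."""
    return sorted(p for p, fb in profiles.items() if fb >= minimum_mib)

# Unigine Heaven needs at least 512MB, so the 256MB profiles drop out:
print(profiles_meeting(512))  # ['K140Q', 'K240Q']
```

The same kind of pre-flight check, fed with an application vendor's stated minimum, can rule out undersized profiles before users ever see rendering artefacts.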

For applications such as Petrel, customers who need a large framebuffer may use NVIDIA K6000/M6000 cards with remoting protocols, so they can access the full 12GB of framebuffer when loading huge amounts of seismic data.

NVIDIA have recently announced the successor to the GRID v1 cards: the GRID v2 cards (Tesla M6/M60 – see the datasheet, here) include a pass-through profile with an 8GB framebuffer, which should increase the options for applications handling large data sets.