Then we modified the VMware virtual machines to disable interrupt coalescing.
To do so through the vSphere Client, go to VM Settings > Options tab > Advanced > General > Configuration Parameters and add an entry for ethernetX.coalescingScheme with the value of "disabled"
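If you have a lot of VMs to change, the same advanced setting can be added from PowerCLI instead of clicking through the vSphere Client. This is just a sketch: the VM name below is made up, and it assumes you are already connected to vCenter with Connect-VIServer.

```
# Hypothetical VM name; New-AdvancedSetting writes the key/value into the VM's .vmx
Get-VM "PVS-Target01" | New-AdvancedSetting -Name "ethernet0.coalescingScheme" -Value "disabled" -Confirm:$false
```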

We have 2 NICs assigned to each of our PVS VMs. One NIC is dedicated to the provisioning traffic and one to access to the rest of the network, so I had to add 2 lines to my configuration:

ethernet0.coalescingScheme = disabled
ethernet1.coalescingScheme = disabled

For the VMware virtual machines we just had the one line:

ethernet0.coalescingScheme = disabled

Upon powering up the VMware virtual machines, the per-packet latency dropped significantly and our application was much more responsive.

Unfortunately, even with identical settings on the VMware virtual machines and the Citrix PVS image, the PVS image will not disable interrupt coalescing and consistently shows higher per-packet latency. We built the vDisk image a couple of years ago (~2011), and the vDisk now has outdated drivers that I suspect may be the issue: the VMware machines have a VMXNET3 driver from August 2013, while our PVS vDisk has a VMXNET3 driver from March 2011.

To test if a newer driver would help, I did not want to reverse image the vDisk, as that is such a pain in the ass. So I tried something else: I made a new maintenance version of the vDisk and then mounted it on the PVS server:

C:\Users\svc_ctxinstall>"C:\Program Files\Citrix\Provisioning Services\CVhdMount.exe" -p 1 X:\vDisks-XenApp\XenApp65Tn01.14.avhd

This mounted the vDisk as drive "D:\"

I then took the newer driver from the VMware virtual machine and injected it into the vDisk:
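A common way to inject a driver into an offline Windows image like this is DISM's /Add-Driver switch against the mounted drive. This is a sketch, not necessarily the exact command used here; the driver folder path is hypothetical.

```
:: Inject the newer VMXNET3 driver into the mounted vDisk (D:\) offline.
:: C:\Temp\vmxnet3 is a made-up path to a folder holding the exported driver (.inf and friends).
dism /Image:D:\ /Add-Driver /Driver:C:\Temp\vmxnet3 /Recurse
```

The /Recurse switch makes DISM pick up every .inf under the folder, which is handy when the exported driver package contains subdirectories.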