VMware vSphere 5.5 Virtual Network Adapter Performance

As part of the development of Virtualizing SQL Server with VMware: Doing IT Right (VMware Press), which I co-authored with Michael Corey and Jeff Szastak, I needed to provide guidance around virtual networking. To do this, I decided to run performance tests against the different virtual network adapters in VMware vSphere 5.5, as there wasn't much performance data available. In all, I performed approximately 600 individual test runs. All of the important details and much more (including tuning advice for optimal performance) can be found in the book, but I thought I'd share some of the highlights of the results.

For the test harness I used netperf, as it was easy to use and could do both request/response tests (to measure small transactions per second and latency) and TCP stream tests (to measure throughput). The VMs were configured with Virtual Hardware Version 10, 2 vCPUs, 4GB RAM, and Windows Server 2012. I used 4 VMs in total so that I could run tests both between 2 VMs on the same host and across hosts. All VMs had 3 network adapters of different types (VMXNET3, E1000, E1000E) on different IP subnets. The hardware platform was a Nutanix NX-3450 with Intel Xeon Ivy Bridge processors (E5-2650 v2, 2.6GHz).

Each test run of each combination of options was 60 seconds long, and 3 test runs were executed per combination of configuration options (local host, remote host, 1500 MTU, 9000 MTU, driver tuning, etc.). All of the tests with results shown were done with interrupt moderation disabled in the VMXNET3 driver. The default setting is interrupt moderation enabled, which is optimized for throughput and lower CPU consumption; I wanted to push the performance to the limit and reduce latency. The default setting would usually show more latency, but even lower CPU utilization. For any workloads that are sensitive to latency, interrupt moderation in the VMXNET3 driver should be disabled.
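The exact netperf invocations aren't reproduced in this article, but a minimal sketch of the two test types looks like the following (the IP address is a placeholder for the target adapter's address on the subnet under test):

```shell
# Start the netperf listener on the target VM (listens on port 12865 by default).
netserver

# From the source VM: 60-second TCP stream test to measure throughput,
# with local (-c) and remote (-C) CPU utilization reporting.
netperf -H 192.168.10.20 -l 60 -t TCP_STREAM -c -C

# 60-second request/response test to measure small transactions per second
# and (implicitly) latency; "-r 64,64" sets 64-byte request and response sizes.
netperf -H 192.168.10.20 -l 60 -t TCP_RR -c -C -- -r 64,64
```

Selecting which adapter is exercised is simply a matter of targeting the IP subnet bound to that adapter. On the Windows guests used here, interrupt moderation is toggled in the VMXNET3 adapter's Advanced properties.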

Note: This testing and the results are not based on real world application tests, are provided for informational purposes only, and your results may vary.

Standard 1500 MTU test between two hosts:

As you would expect, VMXNET3 is the clear winner. VMXNET3 offers the lowest CPU usage in this test combined with the highest throughput. This is important as you consolidate multiple high-performance VMs onto the same host.

Jumbo Frames 9000 MTU between two hosts:

In this test VMXNET3 is again the clear throughput leader; however, it did use 5% more CPU cycles than E1000E.
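Jumbo frames only help if the 9000 MTU is configured end to end (guest adapter, vSwitch or distributed switch, and physical switches). One quick way to verify the path, not described in the article but a common sanity check, is a don't-fragment ping sized just under the MTU (8972 bytes = 9000 minus the 20-byte IP header and 8-byte ICMP header); the IP is a placeholder:

```shell
# From an ESXi host: don't-fragment (-d) ping with an 8972-byte payload.
vmkping -d -s 8972 192.168.10.20

# Equivalent from a Linux guest:
ping -M do -s 8972 192.168.10.20

# Equivalent from a Windows guest (as used in these tests):
ping -f -l 8972 192.168.10.20
```

If any hop in the path is still at 1500 MTU, these pings fail rather than silently fragmenting, which would otherwise skew throughput results.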

Jumbo Frames 9000 MTU between VM’s on the same host:

In this test VMXNET3 is again the clear winner in terms of throughput. It consumed more CPU, but for that you got almost double the throughput of E1000 and E1000E. The effective throughput per CPU cycle is much better on VMXNET3.

Standard 1500 MTU request response between hosts:

In the request/response test VMXNET3 is again the clear leader in terms of transactions per second, even though it used slightly more CPU in this test. This equates to being able to process each request with less latency.
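The link between transaction rate and latency is direct: in a single-stream netperf TCP_RR test with one transaction outstanding, mean round-trip latency is simply the inverse of the transaction rate. A quick sketch of the arithmetic (the 20,000 TPS figure is a hypothetical placeholder, not a measured result from these tests):

```shell
# Hypothetical transaction rate reported by a netperf TCP_RR run (placeholder value).
tps=20000

# With one transaction outstanding, mean round-trip latency = 1 / tps.
awk -v tps="$tps" 'BEGIN { printf "%.1f microseconds per transaction\n", 1e6 / tps }'
```

So a higher TCP_RR transactions-per-second figure translates directly into lower per-request latency.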

Standard 1500 MTU request response between VM’s on same host:

VMXNET3 again leads the way, with a significant performance advantage over E1000E and E1000 for request/response between VMs on the same host.

Final Word

In all of the tests VMXNET3 comes out on top, which is why VMware made it a best practice to use VMXNET3. Even though you may have to adjust some settings to get optimal performance, it is worthwhile using VMXNET3 as the default. This raises the question: why is VMXNET3 not the default adapter? I think the answer is that it requires VMware Tools to be installed, and drivers for it are not automatically included in every operating system that supports it. But once you have VMware Tools, you're good to go. If you want a lot more detail on VMware vSphere network performance, design considerations, and network virtualization with NSX, specifically as they relate to SQL Server and business-critical apps, check out Virtualizing SQL Server with VMware: Doing IT Right. If you're lucky enough to go to VMworld this year, or to vForum Sydney, I'll be there with my co-authors signing copies of the book. As always, your comments and feedback are appreciated.

About the Author

Michael is Technical Director, Business Critical Applications Engineering at Nutanix. He has been using VMware products since 1998 and has been deploying ESX solutions since 2002. He specializes in designing virtualization solutions for Unix to Linux migrations, business critical applications, disaster avoidance, and mergers and acquisitions. Michael has been in the IT industry since 1995 and consulting since 2001. Michael is Nutanix Platform Expert (NPX) #007. In addition to VMware Certified Design Expert (VCDX) he holds VCP, VCP-Cloud, VCAP-DCD, VCAP-DCA, VCAP-CID, VCAP-CIA, ITIL Foundation, MCP I, and MCSE (NT4 – 2K3).

Disclaimer

The views expressed anywhere on this site are strictly mine and not the opinions and views of VMware or anyone else. All content is provided without any form of warranty, explicit or implied, for informational purposes, and for use at your own risk.