A long shot, but figured I'd give it a try here (no solution on the VMware community forum).

In a Linux guest (CentOS 5.7 64-bit) with a vmxnet3 vNIC we are getting a few hundred kernel errors per day on eth0, the primary (DMZ) NIC, which handles the majority of network traffic (eth1 and eth2 handle backups and other infrequent network activity).

All 3 NICs have vmxnet3 as their adapter type, but the kernel errors occur only on eth0, the only NIC with public exposure (via Cisco ASA NAT'd public IPs).
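For completeness, the per-NIC error counters can be compared straight from /proc/net/dev to confirm the errors really are confined to eth0. A minimal sketch; the here-doc below is fabricated sample data for illustration, on the real guest you'd read /proc/net/dev itself:

```shell
# Print RX/TX error counters per interface from /proc/net/dev.
# Once the ':' is stripped, column 4 is RX errs and column 12 is TX errs.
# The here-doc is fabricated sample data; replace it with /proc/net/dev.
awk '/:/ {gsub(":"," "); print $1, "rx_errs="$4, "tx_errs="$12}' <<'EOF'
eth0: 1234567  890    5    0    0     0          0         0  7654321  456    0    0    0     0       0          0
EOF
```

On the live guest this becomes `awk '...' /proc/net/dev`; `ethtool -S eth0` (where the driver supports it) gives the driver's own counters as well.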

The entries are disconcerting given that eth0 went down yesterday and had to be ifup'd (though the new server had otherwise been up for two weeks without issue).

Going to downgrade to vmxnet2 in the AM to see if that resolves the issue, but for the sake of myself and future sufferers of this issue, I'll leave this out there -- every problem at some point has a solution ;-)

The E1000 route is a strong one to take; it has solved a few of my weirder problems.
–
sysadmin1138♦ Nov 2 '11 at 23:17

@mdpc, yes, yum-updated machine, latest kernel and latest VMware Tools. The Ethernet ports on the host are fine -- no dropped packets or errors of any kind. I have read that vmxnet3 is built into the kernel only as of 2.6.32+; that may be the issue -- the kernel does not understand a NIC capable of passing 10 Gb/s (obviously not realistic on the physical network, but between VMs, e.g. web server and database, or to a backup VM, it would be nice). Anyway, bringing down the affected VM and downgrading the NIC now; will report back results of course...
–
virtualeyes Nov 3 '11 at 8:26

e1000 was not needed, although as I mentioned in my answer, for modestly loaded servers there is likely zero noticeable performance difference between e1000 and vmxnet3. I like the bleeding edge, however, so vmxnet3 it will be...
–
virtualeyes Nov 3 '11 at 11:23

The KB patch for Update 2 does work, but you have to disable TSO (the KB says that is required only for ESXi 4.1 Update 1 or earlier). So, OK, it works, but is it necessary on a host with 4x gigabit NICs and local SCSI disks? Probably not...
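For reference, the TSO toggle can be done with ethtool inside the guest. A sketch, assuming the affected interface is eth0 (the interface name and the persistence method are my assumptions, not from the KB):

```shell
# Show current offload settings on the vNIC (assumed name: eth0)
ethtool -k eth0 | grep -i segmentation
# Disable TCP segmentation offload; takes effect immediately
# but does not survive a reboot
ethtool -K eth0 tso off
# One common way to persist it on CentOS 5 is to add the same
# "ethtool -K eth0 tso off" line to /etc/rc.local
```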

Just found it, and the start of the business day is already here; will try tomorrow in the early hours and post back results...

**Original**
As I mentioned, the ESXi host sits behind a Cisco ASA.

The affected Linux guest runs a Plesk-like control panel which has the APF software firewall enabled. Having already shut down APF, I assumed the software firewall was not the culprit. Turns out that shutting down APF does not flush its iptables rule sets.
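For anyone else bitten by this, a sketch of checking and flushing the leftover rules by hand after stopping APF (standard iptables invocations; set the built-in chain policies to ACCEPT before flushing so you can't lock yourself out of a remote session):

```shell
service apf stop                 # apf -f also flushes/stops APF's rules
iptables -L -n --line-numbers    # inspect what is still loaded
iptables -P INPUT ACCEPT         # open the built-in chain policies first
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F                      # flush all rules in all chains
iptables -X                      # delete APF's non-builtin chains
```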

Would be nice to find the actual cause (i.e. I'd actually like APF enabled, as the ASA lacks the hardware resources [limited CPU/memory] to handle large deny lists). I'll do some more testing early AM tomorrow and see if I can find what APF does not like about inbound ASA NAT'd traffic.
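One way to hunt for the offending rule is a temporary LOG rule inserted ahead of APF's chains, so packets from the ASA show up in the logs before anything can drop them. A sketch, where 203.0.113.0/24 is a placeholder for the NAT'd public range, not the real addresses:

```shell
# Log inbound packets from the NAT'd range before APF's rules see them
# (203.0.113.0/24 is a placeholder; substitute the real range)
iptables -I INPUT 1 -s 203.0.113.0/24 -j LOG \
    --log-prefix "APF-DEBUG: " --log-level 4
# Watch the hits arrive:
tail -f /var/log/messages | grep APF-DEBUG
```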

In any case, having spent $5K on a virtualization server, taking advantage of the latest and greatest technology helps justify the expense (even if, in reality, there is likely zero performance gain between e1000 and vmxnet3 on this modestly loaded host).

To sum up:
vmxnet3 vNIC works just fine on a Dell R610 host running a CentOS 5.7 64-bit guest. TBD is why ASA + ESXi + APF do not play well together...