Thanks for letting me know ESXi 6.7 had been released. I wasn't aware.

I pulled the ixgbe drivers out of the 6.7 ISO. There is still no reference in them to either the X553 or the device IDs 15E4 and 15C8. I haven't been able to install 6.7 yet since I'm still waiting on my motherboard to be delivered. Once it's here I'll do that and see exactly what I get.

On the code merge project, I spent about 8 hours this weekend working my way through some of the header files. I've only merged about 10% of the total code so, as I suspected, this is going to take some time. I still want to do this, it just won't be quick.

The more code I look at, the more I'm thinking it might be easier to go with option 2: adding the VMware modifications from VMware v4.5.3 into Intel v5.1.3. The reason I'm leaning this way is that, so far, the total amount of code in the VMware modifications appears to be much less than the X553-specific code. Once I've done the first pass on the merge I'll have a better understanding of the full picture and can make a determination then.
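
To give a feel for what I mean by "VMware modifications": they mostly show up as conditional blocks wrapped around otherwise stock Intel code. Here's a minimal, purely illustrative sketch of the pattern; the __VMKLNX__ guard is what the VMware-side source appears to build under, and the VMware-side helper name is made up:

/* Illustrative only, not code lifted from either tree: the typical shape of
 * a VMware modification is core Intel logic with an ESXi-specific branch. */
static int ixgbe_request_irqs(struct ixgbe_adapter *adapter)
{
#if defined(__VMKLNX__)
	/* ESXi build: VMware substitutes its own interrupt registration path */
	return ixgbe_vmk_request_irqs(adapter);    /* hypothetical helper */
#else
	/* Stock Intel build: original MSI-X request path left as-is */
	return ixgbe_request_msix_irqs(adapter);
#endif
}

Spotting those wrapped blocks is the easy part; the harder cases are where VMware removed code entirely rather than wrapping it.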

Unfortunately, X553 support only comes with the factory install of pfSense. If you download the ISO/USB stick image and install it yourself, it doesn't appear to include the drivers.

I'm in the same boat as most of you. I bought a SYS-E200-9A, and I was able to get around the X553 issues by plugging in a quad-port PCIe NIC on a PCIe extension cable so that I could boot and use the server at home. It's sketchy, and I really hope we get full support for the X553 in either pfSense or VMware soon.

Something like this, for example, worked for me. It's ugly and I have to have the NIC taped to the case (with a foam spacer to stop it from shorting against the case), but it works. It does what I need right now for my home lab.

I'm still working through the code merge. It's proving harder than I originally thought. It's often difficult to determine whether a difference is a VMware addition, change or subtraction. It's clear some of the original Intel driver core code has been deliberately removed by VMware (I assume because the ESXi kernel doesn't support all features of the driver). The hard part is trying to determine whether those missing parts are required by X553/7 or can be safely left out.
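
To make that concrete: from what I can tell the Intel source groups the X553 parts under the X550EM_a MAC type, so a removed block that was gated like the sketch below is one I have to bring back, whereas ungated housekeeping code may genuinely be droppable. This is a hand-written illustration, so don't take the helper names literally:

/* Hand-written illustration of the gating I look for, not actual merge code.
 * X553 devices appear to be handled under the X550EM_a MAC type. */
static s32 ixgbe_x553_7_setup_phy(struct ixgbe_hw *hw)
{
	switch (hw->mac.type) {
	case ixgbe_mac_X550EM_a:
		/* X553-specific path: must survive the merge */
		return ixgbe_setup_internal_phy(hw);
	default:
		/* Generic path shared with older parts */
		return ixgbe_setup_phy_link_generic(hw);
	}
}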

I also found out that in order to compile ESXi drivers you must use the ESXi tool chain, which I also had to compile. That alone took a few weeks to get up and running (because the documentation is contradictory and confusing). As a test run, I used the tool chain to compile the VMware ixgbe v4.5.3 source code. I then successfully tested the compiled driver in ESXi 6.7. So, that at least was encouraging.

I'm now doing the code merge by comparing 2 different versions of the VMware source code against 4 different versions of the Intel source code. It's the only way I can accurately and safely determine what VMware have added, changed or removed. As you can probably imagine, that increases the amount of time it takes to compare and merge.

I'm still determined to finish. I hope you can all bear with me. Thanks for your patience.

I finished the main code merge about two weeks ago. Since then I've been testing and tweaking to ensure the driver loads and operates properly.

I have named the driver ixgbe_x553_7 to indicate that it's the ixgbe driver but specifically for Intel X553/7 devices. In the attached vib, I've mapped the driver to load only for the device IDs listed below.
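
For the curious, the mapping itself is nothing exotic, it's just the usual PCI ID table restricted to the X553 IDs. A rough sketch of what I mean is below; only 15E4 and 15C8 have come up in this thread, so check the vib's map file for the authoritative list rather than trusting this snippet:

/* Sketch only: restrict the driver to X553 device IDs. Check the vib for
 * the real list; only 0x15E4 and 0x15C8 have been discussed in this thread. */
static const struct pci_device_id ixgbe_x553_7_pci_tbl[] = {
	{ PCI_VDEVICE(INTEL, 0x15E4) },	/* X553 (SGMII), as on the A2SDi boards */
	{ PCI_VDEVICE(INTEL, 0x15C8) },	/* second X553 ID mentioned earlier */
	{ 0 }				/* terminating entry */
};
MODULE_DEVICE_TABLE(pci, ixgbe_x553_7_pci_tbl);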

I have tested the driver with ESXi 6.7 on my Supermicro A2SDi-16C-HLN4F motherboard which has 4 x X553 NICs (device ID 8086:15e4). I successfully tested the following configurations:

ESXi 6.7
  - As a VMkernel NIC

VM CentOS 7.4 x64
  - Standard NIC connected to a virtual switch
  - PCI passthrough device

VM Win 7 x64
  - Standard NIC connected to a virtual switch
  - PCI passthrough device (device was seen by OS but no Windows driver available)

VM Win 10 x64
  - Standard NIC connected to a virtual switch
  - PCI passthrough device (device was seen by OS but no Windows driver available)

The only thing I could not test was SR-IOV passthrough. I included SR-IOV in my code merge so it should work. Unfortunately, my ESXi license does not support SR-IOV so I was unable to test that feature.
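
If anyone with an SR-IOV capable license wants to try it, the merge keeps the usual max_vfs-style module parameter from the ixgbe source, along the lines of the sketch below. Whether the ESXi build exposes it in exactly the same way is precisely the bit I couldn't verify, so treat this as a pointer rather than a promise:

/* Sketch of the VF-count parameter kept in the merge (untested under ESXi). */
static unsigned int max_vfs;
module_param(max_vfs, uint, 0);
MODULE_PARM_DESC(max_vfs,
		 "Maximum number of SR-IOV virtual functions per port (0 = disabled)");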

Throughput on the NICs during my testing was between 30 MB/sec and 80 MB/sec (in both directions), but I was using an old 1TB HDD as a datastore, which would have negatively affected performance (I didn't have a spare SSD available, unfortunately).

By the way, from what I can tell, the X553 driver provided in the link above was compiled from the stock Intel source code. In other words, it does not contain the VMware code modifications. I suspect this is why users are reporting problems with it.

During last week I took my Supermicro A2SDi-16C-HLN4F motherboard (which I'd been using solely for testing while I was doing the driver code merge) and rebuilt it as my production server. This finally allowed me to do some proper performance testing (SSDs at both ends, dedicated test network).

Everything went well and, same as before, I was getting around 80 MB/sec throughput. However, when I took a closer look I noticed some sub-optimal timing values around buffering and flow control.

So, I decided to review the code again and ended up making a few tweaks. When I tested the updated driver I was able to reach a sustained throughput of 90 MB/sec, a performance increase of around 12.5%.
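
For anyone wondering what "a few tweaks" means, it was mostly the flow-control watermarks and the interrupt throttle rate. The sketch below shows the kind of register writes involved; the register macros are the standard ixgbe ones, but the values are placeholders, not the ones shipped in the attached driver:

/* Illustrative tuning helper: flow-control watermarks and Rx interrupt
 * throttling. Values are placeholders, not the ones in the attached vib. */
static void ixgbe_x553_7_tune(struct ixgbe_hw *hw, u32 fc_lo, u32 fc_hi, u32 itr)
{
	/* Flow-control low/high watermarks for traffic class 0 */
	IXGBE_WRITE_REG(hw, IXGBE_FCRTL_82599(0), fc_lo | IXGBE_FCRTL_XONE);
	IXGBE_WRITE_REG(hw, IXGBE_FCRTH_82599(0), fc_hi | IXGBE_FCRTH_FCEN);

	/* A gentler interrupt throttle rate on queue 0 */
	IXGBE_WRITE_REG(hw, IXGBE_EITR(0), itr);
}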

The updated driver is attached. Hopefully it performs just as well for you.
