VMware Homeserver – ESXi on 6th Gen Intel NUC

Intel's 6th Gen NUCs are out, and it's time to take a look at their capabilities as a home server running VMware ESXi. NUCs are not officially supported by VMware, but they are very widespread in home labs and test environments. They are small, silent, and portable, and they have very low power consumption, which makes them great servers for a home lab. I posted a preview of the new models in December. Currently, 6th Gen NUCs are available with i3 and i5 CPUs.

HCL and VMware ESXi Support

The NUC itself is not supported by VMware and not listed in the HCL. However, some essential components are listed. ESXi runs out of the box starting with the following releases:

ESXi 6.0 with patch ESXi600-201601001 (Build 3380124) released in January 2016

ESXi 5.5 Update 3 (Build 3029944) released in September 2015

To clarify: the system is not supported by VMware, so do not use it in a production environment. I cannot guarantee that it will run stably. For a home lab or a small home server, it should be fine.

Network (Intel I219-V)
With previous NUC generations, it was necessary to create a customized image in order to install ESXi. The 6th Gen NUC is equipped with an Intel I219-V Ethernet controller, which is listed in the HCL.

Currently, only ESXi 5.5 U3 is listed with e1000e driver version 3.2.2.1-2vmw, but this driver has also been added to ESXi 6.0 with patch ESXi600-201601001. For older releases, it was also possible to create a custom ISO with the latest e1000e driver. Use the following PowerCLI Image Builder commands to create a custom ESXi image with the latest e1000e driver:
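The commands below are a sketch of the usual PowerCLI Image Builder workflow (clone a profile from the VMware Online Depot, add the e1000e driver package, export as ISO). The profile name and package name are examples and may differ from the latest available versions:

```powershell
# Connect to the VMware Online Depot
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

# Clone an existing image profile (profile name is an example; list profiles with Get-EsxImageProfile)
New-EsxImageProfile -CloneProfile "ESXi-5.5.0-20150902001-standard" -Name "ESXi-5.5-NUC" -Vendor "custom"

# Add the e1000e driver package to the cloned profile
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.5-NUC" -SoftwarePackage "net-e1000e"

# Export the customized profile as an installable ISO
Export-EsxImageProfile -ImageProfile "ESXi-5.5-NUC" -ExportToIso -FilePath "ESXi-5.5-NUC.iso"
```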

Storage (Sunrise Point AHCI)
The AHCI driver was always available for previous NUC generations, but unsupported controllers were not correctly mapped to it, so the additional sata-xahci package by Andreas Peetz (v-front.de) was required. This is no longer necessary, as the Sunrise Point AHCI controller has been correctly mapped to the driver since ESXi 5.5 U3 and ESXi 6.0 U1.

The 6th Gen NUC is equipped with a Sunrise Point AHCI controller, which is listed in the HCL for ESXi 5.5 U3 and ESXi 6.0 U1 with ahci driver version 3.0-22vmw.

SD Card
6th Gen NUCs are equipped with an SDXC slot. Unfortunately, there is currently no ESXi driver available, so the SD card slot cannot be used at the moment. I will try to find a solution for this later.

SD Host controller Generic system peripheral:
Class 0805: 8086:9d2d

Tested ESXi Versions

VMware ESXi 5.5

VMware ESXi 6.0

Delivery and assembly

The box contains a short description of how to open the case and assemble the components. The system is a little heavier than it looks and has a high build quality. The top surface is very scratch-sensitive, so handle it with care.

The installation is very simple: remove the 4 screws on the bottom and take off the lid, which also serves as the 2.5" drive holder. It takes about 5 minutes to open the NUC and install the memory, the M.2 SSD module, and a 2.5" HDD.

Installation

No customization is required to install the latest ESXi 5.5 and ESXi 6.0 versions on 6th Gen NUCs. You can use the stock images provided by VMware to install ESXi.

First NUC with native 32GB Memory Support

While it was already possible to use 32GB of memory in 5th Gen NUCs, it is now fully supported. 6th Gen NUCs support up to 32GB of DDR4 SO-DIMM memory.

The NUC requires:
2x 260-pin 1.2 V DDR4 2133 MHz SO-DIMM

Please note that DDR3 is not compatible with DDR4. The modules use different slots, so DDR3 memory cannot be used in 6th Gen NUCs.

Performance

The performance of a single NUC is sufficient to run a small home lab including a vCenter Server and 3 ESXi hosts. It's a great system to take along for demonstration purposes. Currently, my NUC (5th Gen vPro) runs 3 Windows VMs, 4 Linux VMs, 2 virtual ESXi hosts, and a vCenter Server with decent performance.

The following chart is a comparison of the latest Core i5 CPUs based on PassMark:

Power consumption

NUCs have a very low power consumption. My i3 NUC with a M.2 SSD and a SATA 2.5" SSD consumes about 28W (idle) - 33W (load). During normal usage the average consumption is about 30W.
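To put those numbers in perspective, here is a quick back-of-the-envelope calculation of the yearly energy usage. The electricity price is an assumption; adjust it for your region:

```python
# Estimate yearly energy usage and cost for a NUC at ~30W average draw.
avg_watts = 30
hours_per_year = 24 * 365
kwh_per_year = avg_watts * hours_per_year / 1000  # watt-hours -> kilowatt-hours
price_per_kwh = 0.30                              # EUR per kWh, assumed price
cost_per_year = kwh_per_year * price_per_kwh

print(f"{kwh_per_year:.1f} kWh/year, ~{cost_per_year:.2f} EUR/year")
# -> 262.8 kWh/year, ~78.84 EUR/year
```

Even running 24/7, the NUC costs far less per year than a typical rack server would.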

85 Comments.

There are a couple of (more expensive) USB-to-LAN adapters available that work with ESXi.
But recently I bought a cheap USB to 10/100 LAN adapter at the "Action" (a Dutch store) for only €2.99.
It was recognized right away by ESXi when I plugged it into my NUC.
It's a Sologic 10CM USB 2.0 Ethernet Adapter.
It uses a Realtek chipset (RTL8152B).

There are 16GB DDR3 SODIMMs available, but they are rare (as far as I know, only one company makes them at the moment). And they are HUGELY expensive - last time I looked, the 2 SODIMMs were more expensive than the NUC!

Great blog you have here. I'm looking to replace my home server (currently still a laptop from 2006) with a NUC. With a few gadgets I threw in the corner for HTPC purposes, I'm now wondering if I could combine it all with today's technology.

Looking over several forums online, there's not really a decisive answer yet, so maybe you could help me out.

The specs already state that it's a capable home theater device. Intel's site also claims VT-x and VT-d support, meaning hardware 3D/video acceleration could be forwarded to guest OSes. Do you have any experience with this, and what are your findings? Is it still highly experimental for hackers, or is it becoming a truly viable option?

Thanks for reading/answering. I'll stick around for some future posts and experiences anyway :)

The NUCs are great as HTPCs, but if you want to install ESXi you have a problem, as you can't get the virtual machines' screen output to the HDMI port. If you want to use it as an HTPC and run VMs on it, VMware Workstation might be an alternative.

So only 2 'true' options remain:
- Find a way to designate vPro's AMT as the primary console for ESXi. The chance of getting this done is probably zero, since AMT was not designed for this, or rather works the other way around.
- Wait for Skull Canyon to arrive from Intel http://www.intel.com/content/www/us/en/nuc/nuc-kit-nuc6i7kyk-features-configurations.html and hook up a passively cooled graphics card for one of the guest OSes. But that would be complete overkill and expensive again, and energy consumption almost triples with that beast.

I'll look into my options, thanks for responding. I was always very happy with the way VMware ESXi can be configured and handled, but VMware Workstation... then I'd rather run KVM again.

Today I went to the store to exchange the Gigabyte Brix GB-BSi7HT-6500 (6th Gen i7) for the Intel NUC6i5SYH. I did this because the Gigabyte Brix suffered from the Skylake issue (without any VMs running, ESXi showed 60% overall CPU load, with each core alternately peaking at 100%), and Gigabyte doesn't have a BIOS fix available. During the unboxing of the Intel NUC, I already noticed the huge difference in build quality. After a vanilla install of ESXi 6.0.0 Update 2, I noticed that the load was a flatliner, with all cores averaging under 0.1%. @fgrehl, have you done any BIOS or config adjustments to keep the 6th Gen running smoothly with ESXi 6?

I'm having big trouble getting any image to install on my new NUC6i3 box. After reading this I was happy that the drivers would be pre-installed in the image, but every time install fails with no network adapter found. I installed Windows on the box to confirm not a hardware issue and card was fine. Any ideas?

The driver is only included in ESXi 6.0 with patch ESXi600-201601001 (ESXi 6.0 Update 1b) or later. What image did you use?
If that's not the problem, please press "ALT+F1" during the "No Network Adapters" message, log in as root (no password), run the following command, and post the result:
lspci -v | grep "Class 0200" -B 1

I have managed to fix the issue. Checking through the vmkernel.log, I could see the driver module was unable to associate to the NIC, showing "e1000_probe: The NVM Checksum Is Not Valid"

I followed the instructions at "https://thesorcerer.wordpress.com/2011/07/01/guide-intel-82573l-gigabit-ethernet-with-ubuntu-11-04-and-fix-pxe-e05/" to flash a default config to the NIC and all is working!

Hopefully this can help someone else who may find themselves with this issue.

Thank you fgrehl for your post and inspiration. I just finished my build yesterday, and it didn't take much effort at all. I'm a network guy by trade, but have been a VMware customer for many years now. I didn't need a NUC ESXi box for VMware itself, but to run Cisco VIRL and GNS3 VMs for networking labs.

I'm running a lab with two G6 NUCs and one G5, and I never had any stability issues with any of them.

Based on my experience, I would recommend the G6 for the following reasons:
1) out-of-the-box driver support in ESXi (with the G5 you need to jump through a few hoops to get the network card working)
2) their ability to accept 32GB of DDR4 memory. While fgrehl is right when he says that you _can_ run 32GB of DDR3 in the G5, that memory is still twice as expensive as DDR4, and two 16GB sticks right now cost as much as the NUC box itself.
So if there is any chance you will need more memory in the future (and I'm willing to bet there is, considering that the VCSA alone will take 8GB of RAM by default :) ), the G6 is more future-proof. And the price is about the same.

Sorry for the late reply - the mail to confirm comment subscription was a false positive in my spam filter.

Out-of-the-box driver support is pretty nice but not a killer feature for me: one of my first admin jobs was scripting unattended Win2k setups, so I'm not afraid of dirty tricks. ;-)

Getting it close to 20 degrees will be close to impossible, because I have to keep it in my flat. (Another reason for a NUC.) Anyway, I will take a deeper look at the G6 NUCs now. If I hit these thermal issues during the first weeks, I will RMA it.

I just want to say thanks to all of you. After weeks and weeks of looking for the best compromise for a VM home lab, thanks to your suggestions, I have my new NUC (NUC6i5SYH) home lab up and running. Buy -> unpack -> BIOS update -> ESXi 6.0 U2: zero issues. Really good hardware.

Hello! Why does ESXi see only 1.9 GHz with the i5? What interests me is whether it can handle loaded .ova VMs. At this point, would it be better to take the i3 version of the sixth generation? With the i3, ESXi should see 2.3 GHz. Does that change anything?

Is anyone else experiencing really slow transfer rates with their M.2 SSD (I have a Crucial MX300 275GB M.2 SSD)? I'm talking about 1.x MB/sec slow transfers (tested through a datastore copy, SCP, and building a new VM). I'm about 90% certain that it's a bad SSD, but I could be missing something.

Thanks. That does help. I recall using such WOL Windows utilities to wake up a Linux desktop in the basement. That was a long time ago.
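For reference, WOL utilities like the ones mentioned here just send a "magic packet": 6 bytes of 0xFF followed by the target MAC address repeated 16 times, broadcast via UDP. A minimal Python sketch (the MAC address is a placeholder):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet via UDP on the local network."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    send_magic_packet("00:11:22:33:44:55")  # placeholder MAC address
```

Note that Wake-on-LAN must be enabled in the BIOS, and the NIC has to keep link power in the sleep state for this to work.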

I currently maintain several golden VM images and use copies of them for development and consulting purposes. While it's cool to have these at my fingertips on the laptop or an external WD My Passport USB3 drive, it would be better to have them hosted and available on demand. All the commercial options are very confusing and expensive.

My goal is to have a VMware-based platform to host my VMs that would be accessible via Windows RDP. When not in use, the VM host server should be in sleep mode. Accessing the ESXi host remotely for maintenance, or trying to remote desktop into one of the hosted VMs, should wake up the host and connect to the admin interface or the hosted guest VMs.

I can only imagine that others may have similar needs and have such a working setup.