VSAN 6.2 (vSphere 6.0 Update 2) homelab on 6th Gen Intel NUC

As many of you know, I have been happily using an Apple Mac Mini for my personal vSphere home lab for the past few years. I absolutely love the simplicity and versatility of the platform, from easily running a basic vSphere lab to consuming advanced capabilities of the vSphere platform like VMware VSAN or NSX. The Mac Mini also supports more complex networking configurations by allowing you to add an additional network adapter which leverages the built-in Thunderbolt port, something many other similar form factors lack. Having said all that, one major limitation of the Mac Mini platform has always been the limited amount of memory it supports, a maximum of 16GB (the same limitation as other form factors in this space). Although it is definitely possible to run a vSphere lab with only 16GB of memory, it does limit you somewhat in what you can deploy, which is challenging if you want to explore other solutions like VSAN, NSX and vRealize.

I was really hoping that Apple would have released an update to their Mac Mini platform last year that included support for 32GB of memory, but instead it was a very minor update and mostly a letdown, which you can read more about here. Earlier this year, I found out from fellow blogger Florian Grehl that Intel had just released the 6th generation of the Intel NUC, which officially adds support for 32GB of memory. I had been keeping an eye on the Intel NUC for some time, but due to the same memory limitation as the Mac Mini I had never considered it a viable option, especially given that I already own a Mac Mini. With the added support for 32GB of memory and the ability to house two disk drives (M.2 and 2.5"), this was the update I had been waiting for to finally pull the trigger and refresh my home lab, given that 16GB was just not cutting it for the work I was doing anymore.

There has been quite a bit of interest in what I ended up purchasing for running VSAN 6.2 (vSphere 6.0 Update 2), which has not GA'ed ... yet, so I figured I would put together a post with all the details in case others are looking to build a similar lab. This article is broken down into the following sections:

Bill of Materials (BOM)

Installation

VSAN Configuration

Final Word

Disclaimer: The Intel NUC is not on VMware's official Hardware Compatibility List (HCL) and therefore is not officially supported by VMware. Please use this platform at your own risk.

Bill of Materials (BOM)

Below are the components, with links, that I used for my configuration, which is based partially on budget as well as on recommendations from others with a similar setup. If you think you will need more CPU horsepower, you can look at the Core i5 (NUC6i5SYH) model, which is slightly more expensive than the i3. I opted for an all-flash configuration because I not only wanted the performance but also wanted to take advantage of the much anticipated Deduplication and Compression feature in VSAN 6.2, which is only supported with an all-flash VSAN setup. I also did not have a need for a large amount of storage capacity, but you could pay a tiny bit more for the exact same drive to get a full 1TB if needed. If you do not care for an all-flash setup, you can definitely look at spinning rust, which can give you several TBs of storage at a very reasonable cost. The overall cost of the system for me was ~$700 USD (before taxes), in part because some of the components were slightly discounted through a preferred retailer that my employer provides. I would highly recommend you check with your employer to see if you have similar HR benefits, as that can help with the cost if that is important to you. The SSDs actually ended up being cheaper on Amazon, so I purchased them there.

Installation

Installing the memory and the SSDs in the NUC was super simple. You just need a regular Phillips screwdriver to remove the four screws at the bottom of the NUC. Once they are loosened, flip the NUC back over while holding the bottom and slowly lift the top off. The M.2 SSD is held in by a smaller Phillips screw which you will need to remove before you can plug in the device. The memory just plugs right in, and you should hear a click confirming it is inserted all the way. The 2.5" SSD plugs into the drive bay attached to the top of the NUC casing. If you are interested in more details, you can find various unboxing and installation videos online like this one.

UPDATE (05/25/16): Intel has just released BIOS v44, which fully unleashes the power of your NVMe devices. One thing to note from the article is that you do NOT need to unplug the security device; you can update the BIOS by simply downloading the BIOS file and loading it onto a USB key (FAT32).

UPDATE (03/06/16): Intel has just released BIOS v36, which resolves the M.2 SSD issue. If you updated using an earlier version, you can resolve the problem by going into the BIOS and re-enabling the M.2 device, as mentioned in this blog here.

One very important thing to note, which a fellow user warned me about, is NOT to update/flash to a newer version of the BIOS. It turns out that if you do, the M.2 SSD will fail to be detected by the system, which sounds like a serious bug if you ask me. The stock BIOS version that came with my Intel NUC is SYSKLi35.86A.0024.2015.1027.2142, in case anyone is interested. I am not sure if you can flash back to the original version, but another user just informed me that he accidentally updated the BIOS and can no longer see the M.2 device 🙁

For the ESXi installation, I just used a regular USB key that I had lying around and the unetbootin tool to create a bootable USB key. I am using the upcoming ESXi 6.0 Update 2 (which has not been released ... yet), and you will be able to use the out-of-the-box ISO shipped by VMware; no additional custom drivers are required. Once the ESXi installer loads up, you can then install ESXi back onto the same USB key from which it initially booted. This is not always common knowledge, as some may think you need an additional USB device to install ESXi onto. Ensure you do not install anything on the two SSDs if you plan to use VSAN, as it requires at least (2 x SSD) or (1 x SSD and 1 x magnetic disk).

If you are interested in adding a bit of personalization to your Intel NUC setup and replace the default Intel BIOS splash screen like I have, take a look at this article here for more details.

If you are interested in adding additional network adapters to your Intel NUC via USB Ethernet Adapter, have a look at this article here.

VSAN Configuration

Bootstrapping VSAN Datastore:

If you plan to run VSAN on the NUC and you do not have additional external storage on which to deploy and set up things like vCenter Server, you have the option to "bootstrap" VSAN using a single ESXi node to start with, which I have written about in more detail here and here. This option allows you to set up VSAN so that you can deploy vCenter Server and then use it to configure the remaining nodes of your VSAN cluster, which will require at least 3 nodes unless you plan on doing a 2-Node VSAN Cluster with the VSAN Witness Appliance. For more detailed instructions on bootstrapping an all-flash VSAN datastore, please take a look at my blog article here.
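For reference, the bootstrap procedure from those articles boils down to a handful of ESXCLI commands run on the first host. The device names below are placeholders (find your own with `esxcli storage core device list`); treat this as a sketch and follow the linked articles for the full procedure:

```shell
# Create a single-node VSAN cluster on this host
esxcli vsan cluster new

# All-flash only: tag the capacity SSD so VSAN uses it for the capacity tier
# (the t10.* names are placeholders for your actual device identifiers)
esxcli vsan storage tag add -d t10.CAPACITY_SSD -t capacityFlash

# Claim the cache SSD (-s) and capacity device (-d) into a disk group
esxcli vsan storage add -s t10.CACHE_SSD -d t10.CAPACITY_SSD
```

Once the vsanDatastore appears on the host, you can deploy the VCSA directly onto it.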

Running *ONLY* a single VSAN node is possible but NOT recommended, given that you need a minimum of 3 nodes for VSAN to function properly. After the vCenter Server is deployed, you will need to update the default VSAN VM Storage Policy to either allow "Forced Provisioning" or change the FTT from 1 to 0 (i.e. no protection, given you only have a single node). This is required, or else you will run into provisioning issues, as VSAN will prevent you from deploying VMs while it is expecting two additional VSAN nodes. When logged into the home page of the vSphere Web Client, click on the "VM Storage Policies" icon, edit the "Virtual SAN Default Storage Policy" and change the following values as shown in the screenshot below:
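If you need this during the bootstrap phase, before vCenter is up, an equivalent change can be made from the ESXi Shell by modifying the host's default VSAN policy. This is a sketch using the `esxcli vsan policy` namespace; double-check the syntax against your build:

```shell
# Single-node lab only: set FTT=0 and force provisioning for VM disks
# and the VM home namespace so provisioning succeeds with one host
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
```

You can confirm the change afterwards with `esxcli vsan policy getdefault`.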

Installing vCenter Server:

If you are new to deploying the vCenter Server, VMware has a deployment guide which you can follow here.

Optimizations:

In addition, because this is for a home lab, my buddy Cormac Hogan has a great tip on disabling device monitoring, as the SSD devices may not be on VMware's official HCL and this can potentially negatively impact your lab environment. The following ESXCLI command needs to be run once on each of the ESXi hosts, in the ESXi Shell or remotely:
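If I recall correctly from Cormac's post, the setting in question is the LSOM device-monitoring advanced option; verify the exact option name against his article before running:

```shell
# Disable VSAN device monitoring so non-HCL consumer SSDs are not
# proactively unmounted when they report unexpected health data
esxcli system settings advanced set -o /LSOM/VSANDeviceMonitoring -i 0
```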

I also recently learned from reading Cormac's blog that there is a new ESXi Advanced Setting in VSAN 6.2 which allows VSAN to provision a VM swap object as "thin" rather than "thick", which has historically been the default. To disable the use of "thick" provisioning, you will need to run the following ESXCLI command on each ESXi host:
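Based on Cormac's write-up, the advanced setting should look something like the following (again, confirm the option name on your build):

```shell
# Provision VM swap objects as thin instead of the historical thick default,
# reclaiming a significant amount of capacity on small lab datastores
esxcli system settings advanced set -o /VSAN/SwapThickProvisionDisabled -i 1
```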

Lastly, if you plan to run Nested ESXi VMs on top of your physical VSAN Cluster, be sure to add the configuration change outlined in this article here, or else you may see some strangeness when trying to create VMFS volumes.

Final Word

I have only had the NUC for a couple of days, but so far I have been pretty impressed with the ease of setup and the super tiny form factor. I thought the Mac Mini was small and portable, but the NUC really blows it out of the water. I am super happy with the decision to go with an all-flash setup; the deployment of the VCSA was super fast, as you would expect. On my Mac Mini, which had spinning rust, the fan would go a bit psycho for a portion of the VCSA deployment and you could feel the heat if you put your face close to it. By contrast, I could barely feel any heat from the NUC and it was dead silent, which is great as it sits in our living room. Like the Mac Mini, the NUC has a regular HDMI port, which is great as I can connect it directly to our TV, and it has plenty of USB ports, which could come in handy if you wanted to play with VSAN using USB-based disks 😉

One neat idea that Duncan Epping brought up in a recent chat was to run a 2-Node VSAN Cluster with the VSAN Witness Appliance running on a desktop or laptop. This would make for a very simple and affordable VSAN home lab without requiring a 3rd physical ESXi node. I had also thought about doing the same, but instead of 2 NUCs, I would combine my Mac Mini and NUC to form the 2-Node VSAN Cluster and then run the VSAN Witness on my iMac desktop, which has 16GB of memory. This is just another slick way you can leverage this new and powerful platform to run a full-blown VSAN setup. For those of you following my blog, I am also looking to see if there is a way to add a secondary network adapter to the NUC by way of a USB 3.0-based Ethernet adapter. I have already shown that this is definitely possible with older releases of ESXi, and if it works, it could make the NUC even more viable.

Lastly, for those looking for a beefier setup, there are rumors that Intel may be close to releasing another update to the Intel NUC platform, code named "Skull Canyon", which could include a Quad-Core i7 along with support for the new USB-C interface capable of running Thunderbolt 3. If true, this could be another option for those looking for a bit more power for their home lab.

A few folks have asked what I plan to do with my Mac Mini now that I have the NUC. I will probably sell it; it is still a great platform, and its Core i7 definitely helps with CPU-intensive tasks. It also supports two drives, so it is quite inexpensive to purchase another SSD (it already comes with one) for an all-flash VSAN 6.2 setup. Below are the specs, and if you are interested in the setup, feel free to drop me an email at info.virtuallyghetto [at] gmail [dot] com.

No, it was mostly budgetary, and I also know a few folks who have had success with that device on the NUC. NVMe looks like a good buy if you don't need a large cache; again, it'll vary, but both are good options.

NVMe drives are much faster, so that was not a good choice in my opinion. You bought an M.2 SATA drive, where you should have bought an M.2 NVMe drive. It all has to do with the AHCI storage protocol and the number of queues and queue lengths that the storage can process.

I have just installed ESXi 6 U2 on a Skull Canyon but don't seem to be able to see my two NVMe drives. The only thing I can think of that is different from other setups is that I have used Samsung PM951 drives instead of SM951s or Pros.
Any help will be very much appreciated.

Can you tell us (or maybe in a follow-up blog post) how this device is faring with VSAN in the picture? It's great they now support 32GB of memory, but the biggest blocker for me and, I think, many others is the crippling restriction to just a single NIC. Considering that VSAN in a hybrid configuration can very easily saturate a 1 GbE NIC, I can't conceive of how poor the performance is likely to be once you shove VSAN, Management, vMotion, optional external IP storage, and then virtual machine traffic over the same NIC. I'd like to hear from someone with practical experience of this configuration.

I'm also very interested in how the performance is with a single NIC. I'd like to have 3, 4, or even 5 of these for my VSAN cluster, but I think a single NIC may be a problem. I was thinking of trying two USB 3.0-to-Gigabit adapters and using those for VSAN, vMotion and FT traffic.

I've upgraded the VCSA and my lab hosts to the latest version (hosts show 6.0.0 build 3568940, VCSA 6.0.0.10200), but strangely enough it looks like I'm still on Update 1. In my all-flash VSAN I don't see any changes (dedupe options). Could the upgrade have failed (no error messages during the upgrade) but still show the Update 1b version?

It's been over a month; how is the single VSAN node working out, or have you reverted to traditional VMFS datastores? I ask because many people who have tried this before with consumer SATA devices normally suffer a massive penalty from queue depths etc., and thus performance suffers, so they just revert to doing it the old way. I am keen to find out if the NVMe M.2 device has worked around this and you are still running it in a VSAN configuration.

Hi, how many (normal sized) VMs do you think you can run on this ESXi NUC host? I would like to build a cluster with three nodes, and I would like to know the total VM capacity. Have you run any stress tests on this hardware to see how far you can push ESXi? Regards. Your blog rocks 😉

I have a similar setup using 2 i3 NUCs: a Samsung NVMe for the cache tier and a SanDisk for capacity, with VSAN Enterprise, booting off USB. The problem is I wanted to store my persistent logs on another USB key, but I cannot get ESXi to see the 2nd USB key at all, formatted or unformatted, even after dd'ing the drive with zeros. It's odd: if I go into the NUC's BIOS boot menu it will see both USB keys, and during the initial ESXi install it saw both. But once I installed ESXi to one of the USB keys, it just doesn't see the other. fdisk doesn't see it, and the Host Client doesn't see it under adapters either. Did you have that problem?

I'm in the middle of upgrading my homelab, so thanks for this blog post William. I'm torn between 3x Intel NUCs, 3x Shuttle DS81 or 3x Fujitsu TX1320 M1 (because they are available with an Intel 1240L v3; the M2 version only comes with the 1220 v5, which has a much higher TDP, and selling them just to replace them with the 1240L v5 is a little stupid, budget-wise).

As others have asked already, can you provide some numbers regarding the disk throughput/IOPs?

The Intel NUCs can hold more RAM than the Shuttle DS81, but the latter has two NICs. The Fujitsu can bring 32GB of RAM and 2 NICs to the table, but getting those hot-swap disk cages cheaply will be hard. Those towers are quite small compared to other solutions, and you have the option to add a lot more storage, which is nice too. But, of course, one of those machines is a little more expensive than an Intel NUC.

First off, this is a great idea for an inexpensive lab, I love it! Am learning lots.
But, I have an issue now. Has anyone else had problems with the NUC not accepting/keeping a default gateway? I found the following in the logs: [esx.problem.net.gateway.set.failed] cannot connect to the specified gateway 172.16.0.254. Failed to set it.
The device’s IP is in the same network as the gateway, so I don’t know what the issue is and I could really use a lifeline if anyone has any ideas.

Intel has just announced a new NUC with beefier specs: up to 32GB of RAM and Thunderbolt 3 (40Gb/s). In theory, if drivers were available, you could connect 2 together for a 2-node vSAN setup that would be very fast. I know there are Thunderbolt 10GbE Ethernet adapters, but they are very pricey at the moment; still, a driver might be written to handle the traffic.

Let me know if anyone has any ideas on how to take advantage of the Thunderbolt 3 high-speed interface. Even at 1GbE it would open up another port to dedicate to vSAN traffic.

I want to build a home lab for testing so I don't subject my production setup to my tinkering.

On a side note, I think VMware should create a virtual sandbox where people could test things out and build environments. They could do this on the Amazon cloud, and it would probably help their business, as people would feel more comfortable purchasing the software once they were familiar with it.

I have a USB installer for ESXi 6.5, but the installation hangs at 9%. The target is a Samsung SM951 M.2 256GB that I want to install to; when I install onto a 2.5 inch SSD instead, the installation goes through just fine.


Author

William Lam is a Staff Solutions Architect working in the VMware Cloud on AWS team within the Cloud Platform Business Unit (CPBU) at VMware. He focuses on Automation, Integration and Operation of the VMware Software Defined Datacenter (SDDC).