The virtualization blog by Joep Piscaer

#NextConf running Nutanix Community Edition nested on Fusion

So at the .NEXT conference today, Nutanix released Community Edition. CE is a free, fully featured version of the Nutanix Virtual Computing Platform, with PRISM and Acropolis included.

Community Edition is a 100% software solution enabling technology enthusiasts to easily evaluate the latest hyperconvergence technology at zero cost. Users can now experience the same Nutanix technology that powers the datacenters of thousands of leading enterprises around the world.

Y U SO PHYSICAL? (well, not anymore)

I wrote about Community Edition previously in Nutanix Community Edition: Y U SO physical? There I discussed the lack of a virtual edition, and today I'm happy to report that the version released today supports nested installation! I guess Nutanix really does listen to its community, and I'm glad they did.

Yes, that's right. It will run on at least VMware ESXi, Fusion and Workstation. During one of the sessions at the .NEXT Conference, I demoed how to install CE on top of Fusion, and I'd like to share how to do so.

Installing CE on Fusion

So, now we have a simple, easy way to deploy CE in your home lab. You don’t need to wipe your existing lab setup, you can simply deploy CE on top of whatever you’ve got running now.

Step zero: register and check requirements

So, to access the bits you’ll need to register and download the binaries. For now, you’ll need an invite code, but I’ve got you covered. Just RT this tweet and follow me on Twitter. I’ll announce five winners of an invite code on Wednesday, June 10th, noon EDT via Twitter.

Also, I recommend you check out the minimum requirements, even if you’re going to run it nested.

You will really need at least 16GB of vRAM. The CVM needs 12GB, and 4GB is really the low mark for the nested KVM host and any VMs you might want to run inside of CE.

You need a 200+GB SSD and a 500+GB HDD. But hey, we’re running nested, so who really cares? Just make sure the physical hardware is up to spec capacity- and performance-wise.

Intel NICs. When running on a VMware hypervisor, just assign an e1000 NIC.

Step one: register, download and prepare the binaries

CE consists of a single .img.gz file. Unpack that gzip file and you'll end up with a .img file. When installing to a physical server, you'd image that file onto a USB stick, but when running nested, you can simply rename the file to ce-flat.vmdk. For the -flat.vmdk file to be recognized, you'll need a disk descriptor file, which I've created for you here. Rename that file to ce.vmdk, and you're all set.
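On a Mac or Linux shell, that prep boils down to two commands. A minimal sketch (the first line just creates a placeholder file so the snippet is self-contained; with the real download you'd start at the gunzip step):

```shell
# Stand-in for the real download, so this snippet runs anywhere:
printf 'placeholder' > ce.img && gzip ce.img

# The actual prep: unpack the image, then rename it for the -flat.vmdk pairing
gunzip ce.img.gz          # yields ce.img
mv ce.img ce-flat.vmdk    # the ce.vmdk descriptor file points at this name
```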

Step two: create and customize a VM

So now we need to create and customize a Fusion VM to run Community Edition.

Choose the previously created disk and let Fusion move the disk into the VM folder

Customize the VM

Assign 4 vCPUs and 16GB of vRAM. Check if nested virtualization is enabled.

Assign the existing VMDK to the SATA bus and set it as the boot disk.

Add a 200+GB and a 500+GB VMDK as scsi0:0 and scsi0:1

Ensure VM Hardware version 11

Attach VM to virtual network
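If you'd rather verify these settings outside the Fusion UI, they live in the VM's .vmx file (right-click the .vmwarevm bundle and choose Show Package Contents). A sketch of the relevant key/value pairs, assuming the standard VMware setting names: vhv.enable passes VT-x through to the guest for nested virtualization, virtualHW.version matches the hardware version 11 requirement, and ethernet0.virtualDev selects the Intel e1000 NIC:

```
vhv.enable = "TRUE"
virtualHW.version = "11"
ethernet0.virtualDev = "e1000"
```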

Step three: optionally change installer prerequisite checks

So we now basically have everything set up to launch the installer. But before we do, I want to show you two checks inside the installer that you could change, depending on your environment:

If you're running an AMD system: you could disable the CheckVtx and CheckIsIntel checks in the /home/install/phx_iso/phoenix/minimum_reqs.py file

If your SSD is just not fast enough: you could lower the IOPS thresholds (SSD_rdIOPS_thresh and SSD_wrIOPS_thresh) in /home/install/phx_iso/phoenix/sysUtil.py.
There be dragons ahead, though, if you change these values. NOS really does need adequate performance, so please run a manual performance check before editing these values. Replace sdX with the SSD you want to check.
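The original check isn't reproduced here; as one stand-in (fio is my choice of tool, not necessarily what the installer itself uses), a quick random-read IOPS measurement against the raw device could look like this. Replace sdX with the SSD you want to check; reading a raw device needs root:

```shell
# 4K random reads for 30 seconds, bypassing the page cache (--direct=1);
# compare the reported IOPS against the thresholds in sysUtil.py
sudo fio --name=ssd-check --filename=/dev/sdX --rw=randread \
    --bs=4k --iodepth=32 --runtime=30 --time_based \
    --ioengine=libaio --direct=1 --readonly
```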

Step four: run the installer

That's a pretty simple step. Run the installer, enter two IP addresses, and off you go. In my experience, creating a single node cluster from the installer is hit-and-miss, so I opt to create a cluster manually afterwards.

Step five: create the cluster

After the install succeeds, log in to the CVM (the inner VM). You can SSH into the IP of the CVM, or SSH into the local-only 192.168.5.2 address from the KVM host. Log in with the 'nutanix' user (password 'nutanix/4u') and execute the cluster create command:

cluster -s $CVM_IP -f create

After cluster creation, first add a DNS server to the cluster. This is needed as the initial configuration in PRISM requires you to connect to the Nutanix Next Community.

ncli cluster add-to-name-servers servers=8.8.8.8

Finally, I always run diagnostics.py to validate cluster performance. Use --replication_factor 1 for single-node clusters.

./diagnostics.py --replication_factor 1 run

Step six: initial cluster configuration

Now, log in to PRISM (using the CVM IP address) and execute these tasks:

Change the admin credentials and attach the cluster to your .NEXT Credentials

Rename Cluster to something useful

Create a storage pool and storage container. Please name the container ‘default’, as Acropolis expects this name. Oh, and you could enable compression and deduplication if you want.

Create VM Network in Acropolis for VM network connectivity

Add NTP servers for reliable timestamps and logging.
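Most of these tasks can also be scripted from the CVM. As a sketch, assuming the same ncli syntax as the name-server command in step five, the NTP step would look like this (the pool.ntp.org hostnames are just an example):

```shell
# Add public NTP servers to the cluster configuration (run on the CVM)
ncli cluster add-to-ntp-servers servers=0.pool.ntp.org,1.pool.ntp.org
```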

So, that’s it. You’ve now created a usable Community Edition install.

Step seven: create VM

You can even create and run VMs via Acropolis!

Concluding

All-in-all, installing Nutanix Community Edition is pretty simple, and I like that. In true Nutanix spirit, the developers have gone to considerable trouble to smooth out the install process, and it really shows. CE feels like a decent product, from both an install process and a usability (PRISM & Acropolis) standpoint. I might even consider using CE in my klauwd.com community-based IaaS project.

It’s certainly a very welcome addition to my virtual toolbox: CE allows me to quickly test and confirm some features and scenarios without touching my production clusters, it allows me to dive into the deeper stuff so I can learn more about the tech and it allows me to give demos to coworkers and prospects, which really is the biggest addition for me. Now I get to show all that cool Nutanix stuff to everyone!

I have created 3-node all-virtual and 3-node hybrid (virtual+physical) clusters, all without issues. I'm not sure why you'd need to connect to any kind of IPMI? I just deployed three nested CE virtual machines.

Thanks for your reply. The only reason I ask is because I am seeing this: http://i.imgur.com/G8bzj7Q.png which is preventing me from adding a node to the cluster. I am unable to run ipmitool to investigate because it doesn’t exist.

I bet that's a bug in the cluster expansion feature, as commercial Nutanix always has IPMI. The CE version does not (as IPMI is a hardware feature), which I figure is why you're seeing this. I think you'd better open a topic on the CE forum (via next.nutanix.com).

FYI. The steps that require the “ncli …..” commands aren’t needed if you are doing a single node cluster. I was banging my head against the wall because I was getting an error saying the CVM was already part of a cluster. I watched this video and it confirms what I am seeing on Mac running Fusion. https://www.youtube.com/watch?v=3GGdy2I4THU&feature=youtu.be

Both the 'cluster create' and 'ncli' commands are needed in my guide, especially if you're doing a single node cluster. The difference in approach is that you selected to create a single node cluster from the installer, whereas I chose to do it manually afterwards:
“In my experience, creating a single node cluster from the installer is hit-and-miss, so I opt to create a cluster manually afterwards.”

chunchit on July 17, 2015 at 17:29

Hi Joep, can I know how to edit this file? /home/install/phx_iso/phoenix/minimum_reqs.py.

As long as you don't do stuff like dedupe, erasure coding, compression, etc. and you don't run many VMs (or any, to be safe), 12GB might work.

Carl on June 9, 2016 at 07:25

Hi there, have you tried to run Nutanix nested on Hyper-V? If so, do you have any instructions? I can get the Nutanix VM to boot in Hyper-V (converted the vmdk to vhdx) and turned on virtualisation extensions, however it doesn't appear to recognise the additional drives (210GB and 510GB). I've tried attaching these as SCSI and IDE, but no joy. Any ideas?

I’m trying to do this in our vCenter environment through a normal VM. I’ve got it working and can ping the Nutanix Host IP but I can’t seem to be able to ping the Nutanix CVM IP. When the install completes successfully, it shows the IP I gave it, but there’s no network connectivity.

I've tried using a single NIC or dual NICs for the VM and it's not helping.

I’m getting the same error about the unknown command line flag “f”..any ideas?

ronald on November 23, 2016 at 22:35

Hi, I am new to Nutanix and anything related to networking. I am trying to install Nutanix CE, and I am stuck at what could possibly be the password for nutanix@192.168.5.2 in the Step 5: Create the cluster steps, image 1. (Please help me here, as I don't know where I can find the password or how to get it.)