New updates from Nutanix – NOS 3.0 and NX-3000

Yesterday I had the opportunity to catch up with a very good friend of mine, Ray Hassan, who is now part of the Nutanix team in the UK. Ray took some time out of his very busy schedule to give me an overview of the new Nutanix platform, the NX-3000. He also gave me the low-down on the new Nutanix OS (NOS) 3.0 features. I don’t think Nutanix need much of an introduction these days. Although they are still a relatively young company, they have already made a significant impression in the storage space, and have won many awards for their innovation, including awards from VMware’s very own VMworld.

I too have blogged about Nutanix in the past. Earlier this year, I did a post on their vSphere integration features, including new NFS & VAAI support. I was now curious to see what enhancements they introduced in the NX-3000 series and NOS 3.0 release.

vSphere Integration

Full support for vSphere 5.1. Excellent! The array is already on the VMware HCL. Nice to see partners getting ahead of the curve here.

New Hardware Platform – NX-3000

Let’s talk about the architecture next. It is important to call out that each Nutanix block comprises 4 blades/nodes. The new NX-3000 series has a significant number of hardware improvements. Each node has two CPU sockets, and each socket can now hold an 8-core CPU, up from the 6-core CPUs in the previous NX-2000 version. This gives a total of 16 physical cores per node and 64 physical cores per block (or, with hyper-threading, 32 logical cores per node and 128 logical cores per block). Support for larger memory configurations has also been introduced, with the maximum memory increased to 1TB per block (or 256GB per node). Interestingly, Nutanix are in a position to customize the CPU and memory configuration based on customer requirements – there is no need to purchase a full configuration. A nice approach for scale out. The final piece of new hardware is the introduction of dual 10Gb NICs; the previous platform only supported a single 10Gb NIC with 2 x 1Gb NICs as standby.

Nutanix will now support up to 400 VMs running on a block which is based on this new platform (100 per node). This is a 33% increase over their previous platform.

NOS 3.0 – Security

Historically, the Nutanix OS (running in the CVM, or Controller VM) was based on Ubuntu, but NOS 3.0 is now based on CentOS. This makes it a much more secure platform when it comes to addressing and implementing STIGs (Security Technical Implementation Guides) to enable compliance, especially for federal customers. The CVM is clustered for availability, and it is the CVM that is responsible for providing localised NFS access to the underlying storage pool. This NFS datastore is then used to deploy the production VMs.

NOS 3.0 – Native DR

Nutanix have introduced a new VM-aware disaster recovery solution which allows VMs to be grouped together into what are termed protection groups. These groups of VMs can then be snapshotted and replicated to a Nutanix block at a remote site. Indeed, Nutanix support replication to multiple DR sites, and promise that the protection group will be crash consistent in the event of a failover. Failover can also be ‘unattended’ if required. Another nice feature is that all replicated data is deduplicated, so once a block has been transferred to the DR site, it will not be transferred again.
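To make the deduplicated replication idea concrete, here is a minimal Python sketch of content-addressed block transfer. This is purely illustrative – the block size, the SHA-256 fingerprinting, and the dict standing in for the remote site are my own assumptions, not Nutanix’s actual implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not Nutanix's actual unit


def replicate(data: bytes, remote_store: dict) -> int:
    """Send only blocks the remote site has not already stored.

    remote_store maps block fingerprints to block contents, standing in
    for the DR site's storage. Returns the number of blocks transferred.
    """
    transferred = 0
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in remote_store:
            remote_store[fingerprint] = block  # "transfer" the block
            transferred += 1
    return transferred


remote = {}
payload = b"A" * BLOCK_SIZE * 3       # three identical blocks
first = replicate(payload, remote)    # only 1 unique block is sent
second = replicate(payload, remote)   # nothing is re-sent
```

The key point the sketch captures is that a block identical to one already at the DR site costs nothing to “replicate” a second time.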

NOS 3.0 – Compression

This is where things start to get very interesting. Nutanix have introduced two new compression types in NOS 3.0. The first of these is inline compression. This technique compresses data as it is written, reducing the amount of data which needs to be stored, but it is CPU intensive, so it is best suited to archival or sequential workloads.

The second technique is their offline compression mechanism. This is ideally suited to random, batch workloads, as it avoids impacting the current workload/IO path. It uses MapReduce techniques to figure out which data has gone cold, and then compresses it. The compression only occurs when the system is idle and the data is cold, and deciding when data counts as cold is left up to the user (e.g. after one day or after one hour). This technique is based on Google’s snappy compression library and strikes a nice balance between high speed and a reasonable compression rate.
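The “compress only what has gone cold” policy can be sketched in a few lines of Python. Since snappy is not in the Python standard library, I use zlib as a stand-in compressor, and the extent layout (a dict with the data and a last-access timestamp) is a toy model of my own, not Nutanix’s metadata format.

```python
import time
import zlib  # stand-in for Google's snappy, which is not in the stdlib

COLD_AFTER_SECONDS = 3600  # user-chosen threshold, e.g. one hour


def compress_cold_extents(extents, now=None):
    """Compress only extents whose last access is older than the threshold.

    `extents` is a list of dicts with 'data' (bytes) and 'last_access'
    (epoch seconds) -- an illustrative model, not Nutanix's layout.
    """
    now = time.time() if now is None else now
    for ext in extents:
        is_cold = (now - ext["last_access"]) > COLD_AFTER_SECONDS
        if is_cold and not ext.get("compressed"):
            ext["data"] = zlib.compress(ext["data"])
            ext["compressed"] = True
    return extents


now = 1_000_000
extents = [
    {"data": b"x" * 10_000, "last_access": now - 7200},  # cold: compressed
    {"data": b"y" * 10_000, "last_access": now - 60},    # hot: left alone
]
compress_cold_extents(extents, now=now)
```

The hot extent stays untouched, so the active IO path never pays the compression cost – that is the essence of the offline approach.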

NOS 3.0 – Scale Out

The next feature is the scale-out aspect of Nutanix, which many of you will already be aware of. With the Dynamic Cluster Expansion feature in NOS 3.0, when a new node is added, the existing cluster automatically discovers it and pops up a window asking whether to add the new node – administrators simply click ‘yes’ to add it. Also of interest is the ability to dynamically hot-add new disks to nodes without the need to reboot, which can be done via the UI or CLI. This allows Nutanix to scale on both compute and capacity.

NOS 3.0 – Supportability

There were two very interesting supportability features. The first of these is support for rolling upgrades. This means that each node can be brought into maintenance mode in turn, have its maintenance performed (e.g. a controller upgrade) and rejoin the cluster without impacting any of the VMs running on that node.
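The rolling-upgrade sequence described above can be sketched as a simple one-node-at-a-time loop. The function names (enter_maintenance_mode, upgrade_controller, rejoin_cluster) are illustrative placeholders of mine, not Nutanix or vSphere API calls.

```python
def rolling_upgrade(nodes, enter_maintenance_mode, upgrade_controller,
                    rejoin_cluster):
    """Upgrade one node at a time so the cluster stays online throughout."""
    for node in nodes:
        enter_maintenance_mode(node)  # drain I/O away from this node
        upgrade_controller(node)      # e.g. upgrade the CVM software
        rejoin_cluster(node)          # node serves I/O again before the
                                      # next node is touched


# Record the call order with stub callbacks to show the sequencing.
log = []
rolling_upgrade(
    ["node-1", "node-2"],
    enter_maintenance_mode=lambda n: log.append(("maintenance", n)),
    upgrade_controller=lambda n: log.append(("upgrade", n)),
    rejoin_cluster=lambda n: log.append(("rejoin", n)),
)
```

The property that matters is strict serialisation: node-2 is not taken out of service until node-1 has fully rejoined, which is what keeps the VMs unaffected.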

Another nice feature is boot from USB support. This makes hardware replacement procedures much easier and more convenient, e.g. if a node fails, Nutanix will ship a new blade with a new USB key containing ESXi. There is no need to do a fresh ESXi install on the replacement node; it is ready to run once plugged in. This is also useful from an HDD replacement perspective. There is no impact to the hypervisor layer – the disk can simply be replaced on the fly since the ESXi image is on USB.

You can read more about the new release here. I continue to be impressed by the rate at which Nutanix turn out new features. Scale-out storage is going to factor into a lot of storage decisions in 2013, and Nutanix are well positioned to take advantage of this with their new NOS 3.0 & NX-3000 series platform.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @CormacJHogan