Archives for March 2015

Today we released the Nutanix Next Community Podcast on Docker. Our guest was Nigel Poulton, whom I was lucky enough to first meet at a Tech Field Day event back in 2012. Nigel has a couple of Docker courses available on Pluralsight and brings great insight, knowing Docker well and having spent a ton of time working with infrastructure.

Important note from this week's @nutanix podcast – We NEED more infrastructure peeps involved with Docker! Or it'll get out of control!

A question that gets raised: will Docker cut into a significant portion of the virtualization game? Will Hyper-V and VMware lose potential revenue to people running Docker? It's a hard question to answer, and I think we tried to address it in the podcast in a roundabout way. Unless you're a Joyent doing something very custom like their SmartOS, it's doubtful to me. Developers don't tend to manage infrastructure, and DevOps is only a buzzword for a lot of enterprises. Enterprises tend to be slow moving, so a common management platform is important, and that tends to be the virtualization layer. If the developers didn't already move out and consume public cloud resources, virtualization will still be needed.

There were lots of reasons why people moved away from bare-metal installs, and lots of those same reasons still apply. Yes, Docker can provide isolation, but host management, security, and protecting the workloads are still very important. You still need to back up and manage your images and any persistent data that may be stored. Does it make sense to pay for an Enterprise Plus license from VMware to run Docker? Probably not, but maybe there is the right use case. I still need to get my head around Docker Swarm / Lattice and how it will all tie together. Like AWS, I see Xen- and KVM-based hypervisors flourishing here: get back the features you lose by going bare metal, but at a lower cost. This is where I can see people still running Docker on a virtualized host because of the familiarity with the management layer.

From a Nutanix perspective, whichever hypervisor you want to run (ESXi, Hyper-V, or KVM), you get:

Well, I am happy to say there really isn't a lot of knowledge needed to wrap up the best practices for running App Volumes on Nutanix.

In general, create one container for all of your App Stacks and turn on inline dedupe (performance tier). Could you put the App Stacks on the same volume as your desktops? Sure, but then you can't get inline dedupe without a performance penalty. With inline dedupe turned on, any time you create or update your App Stacks, your applications will get fingerprinted. When the App Stack gets attached to your desktop, it will be a read-heavy workload. Any reads that have fingerprints associated with them will go into the content cache, which is deduped on read. Your applications have a great chance of being served out of RAM instead of SSD or HDD! The RAM happens to sit right beside the CPU, so you'll save some CPU cycles on the Nutanix Controller Virtual Machine (CVM) to boot. With the use of Nutanix Shadow Clones, all of the caching can be done locally regardless of where the App Stack VMDK is being hosted.
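That dedupe-on-read behavior can be sketched in a few lines. This is just a toy model of the general technique (fingerprint content, serve repeat reads of identical blocks from a deduplicated RAM cache), not Nutanix's actual implementation; the SHA-1 fingerprint and the block contents are illustrative assumptions.

```python
import hashlib

class DedupeReadCache:
    """Toy model of a fingerprint-based, deduplicated read cache."""

    def __init__(self):
        self.cache = {}   # fingerprint -> data block held in "RAM"
        self.hits = 0
        self.misses = 0

    @staticmethod
    def fingerprint(block: bytes) -> str:
        # A content hash identifies identical blocks across all desktops.
        return hashlib.sha1(block).hexdigest()

    def read(self, block: bytes) -> bytes:
        """Simulate a read: identical blocks are served once from RAM."""
        fp = self.fingerprint(block)
        if fp in self.cache:
            self.hits += 1        # served from RAM, no SSD/HDD I/O
        else:
            self.misses += 1      # the first reader pays the disk read
            self.cache[fp] = block
        return self.cache[fp]

# 100 desktops reading the same App Stack block: 1 miss, then 99 RAM hits.
cache = DedupeReadCache()
app_block = b"shared application bits" * 64
for _ in range(100):
    cache.read(app_block)
print(cache.hits, cache.misses)  # -> 99 1
```

The point of the sketch is the ratio: when every desktop attaches the same App Stack, only one copy of each block ever needs to come off disk.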

Because you don't have to keep inline dedupe turned on for the desktop container, you can turn on inline compression for the desktops. VAAI and inline compression will save a ton of space if you plan on doing full-clone desktops. The space saved will allow you to avoid buying storage-heavy nodes, and save on power and cooling too. VAAI can save around 20X in space, and inline compression can save over 2X, plus performance improvements if you're moving big files around on the desktops.
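For a back-of-the-envelope feel for those numbers, here is the math using the post's rough 20X and 2X figures. The desktop count and image size are made-up assumptions, and real savings will vary with the workload.

```python
# Back-of-the-envelope capacity math. The 20X (VAAI-assisted full
# clones) and 2X (inline compression) figures come from the post and
# are illustrative, not guaranteed; desktop count and image size are
# hypothetical.
desktops = 300
gb_per_full_clone = 40               # hypothetical golden image size

raw_gb = desktops * gb_per_full_clone
after_vaai = raw_gb / 20             # ~20X savings on cloned data
after_compression = after_vaai / 2   # ~2X from inline compression

print(raw_gb, after_vaai, after_compression)  # -> 12000 600.0 300.0
```

Twelve terabytes of logical full clones landing in a few hundred gigabytes of physical space is why you can skip the storage-heavy nodes.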

Inline Compression Savings

I was running uberAgent and Splunk to grab some results when I was working on Horizon DaaS, thanks to Helge Klein. There is a ton of information that uberAgent can grab for you.

Here are the logon times for 300 users in a 48-minute window, generated with LoginVSI and with inline compression turned on.

uberAgent – Logon Times

I found that attaching an App Stack to a desktop added just under 2s to the logon time compared to applications that were natively installed. That's a pretty small penalty when you consider the cost of updating your apps with View Composer or traditional methods.

One graph that I thought was super interesting, generated by uberAgent, was the Total Start IO: the total number of IOs generated by applications starting up.

uberAgent – showing how IOs are used when starting applications

Imagine if you could get all of those IOs served from a deduped RAM cache? 🙂

The graph below is taken from the Prism UI. The blue line shows the hit rate in the content cache, which is over 97%, and the green line shows the number of hits.

App Stacks being delivered with RAM

The picture below shows the physical savings from the performance tier with inline dedupe turned on. You can also manually fingerprint your golden image, since it won't be changing much, and enjoy the benefits of inline dedupe without the overhead.

Inline dedupe saving from the applications and fingerprinting the golden image.

The next graph is just the total number of IOPS. Why? Because everyone loves IOPS! Just over 4,500 IOPS. I turned off video during the test so more time could be spent launching applications.

To recap: one container for all of your applications, turn on inline dedupe => done.

Yahoo Japan Corporation announced its deployment of the Nutanix Virtual Computing Platform for desktop virtualization. The article talks about why Nutanix was selected and the VDI Assurance program that Nutanix offers. < Read more here >

Marketing is starting a new campaign to get your data center fit. Looks to be some sweet swag, but I would also like to point out the new TCO work that has been done as well. Check both out: old boring topics made fun.

Using a cloud destination like AWS for on-demand backup is a quick and easy way to protect your important workloads. Nutanix customers can maintain and manage their infrastructure through Prism like they always have. Nutanix hides the complexity by using Prism to make every async remote site appear the same way once the site is set up. Additional physical gear can be avoided, and existing data center floor space can be saved for running workloads. The inclusion of Cloud Connect can help ensure recovery from large outages, as AWS provides the added value of worldwide availability zones.

Data that is sent across the WAN can be compressed, and the granularity of what is sent is at the byte level. If 32K of data is changed, Nutanix will send 32K of data. If only 4K of data is changed, then we will only send 4K of data.
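That send-only-what-changed idea can be sketched as a simple extent diff. This is a toy illustration only; a real replication engine tracks changes as they happen rather than diffing whole disks after the fact, and the 4K granularity here is an assumption for the example.

```python
def changed_extents(old: bytes, new: bytes, granularity: int = 4096):
    """Return (offset, data) extents that differ between two versions.

    Toy sketch of sending only changed data across the WAN; not how
    NOS actually tracks changes.
    """
    extents = []
    for off in range(0, len(new), granularity):
        chunk = new[off:off + granularity]
        if old[off:off + granularity] != chunk:
            extents.append((off, chunk))
    return extents

old = bytes(64 * 1024)              # 64K of unchanged data
new = bytearray(old)
new[8192:12288] = b"\x01" * 4096    # change exactly one 4K region
to_send = changed_extents(old, bytes(new))
print(len(to_send), sum(len(d) for _, d in to_send))  # -> 1 4096
```

Of a 64K disk region, only the one changed 4K extent would cross the WAN.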

One physical Nutanix cluster can replicate to many remote AWS instances (no limit); however, today one remote AWS instance can only replicate to one physical cluster.

The NOS instance uses both Amazon EBS (Elastic Block Store) and Amazon S3 (Simple Storage Service) for data. EBS is used to store the metadata for the cluster and is backed by SSD. For added resiliency, the EBS metadata is snapshotted every time a replication occurs, to protect against corruption that might be caused by AWS. Only one snapshot is kept at a time: when the next replication completes successfully, the old snapshot gets deleted.
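The snapshot rotation described above boils down to "always keep exactly one known-good snapshot." Here is a minimal sketch of that rule; the class and method names are made up for illustration, and the real snapshot handling in NOS is certainly more involved.

```python
class MetadataSnapshotRotation:
    """Sketch of the one-snapshot-at-a-time rotation the post describes.

    A snapshot is taken at each replication; the older snapshot is only
    deleted once the new replication succeeds, so a failed replication
    still leaves a known-good snapshot behind.  Illustrative names, not
    the actual NOS implementation.
    """

    def __init__(self):
        self.snapshots = []

    def on_replication(self, snap_id: str, succeeded: bool):
        self.snapshots.append(snap_id)       # protect current metadata
        if succeeded:
            # Old snapshot is deleted only after success.
            self.snapshots = self.snapshots[-1:]

rot = MetadataSnapshotRotation()
rot.on_replication("snap-1", succeeded=True)
rot.on_replication("snap-2", succeeded=False)  # failure: keep snap-1 too
print(rot.snapshots)  # -> ['snap-1', 'snap-2']
rot.on_replication("snap-3", succeeded=True)
print(rot.snapshots)  # -> ['snap-3']
```

The key design point is ordering: delete the old snapshot only after the new replication has succeeded, never before.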

The replicated user data from the local site is stored in S3 buckets. The NOS instance gets a thin-provisioned 100 TB disk attached, called the Cloud Disk. Janus helps identify whether the disk is local, S3-based, or Azure-based, to support future releases.

Nutanix provides the automation to maintain your AWS sites with a script, stored on the AWS Controller Virtual Machine, that deletes all EBS and S3 storage. This prevents accidental data deletion on the remote cluster if someone mistakenly deletes the AWS remote site in the Prism UI.