I wrote about Cohesity Helios back in October, and this week I finally started using Helios to manage my virtual cluster. Helios is a SaaS offering for managing a collection of Cohesity clusters from a central location. For now I only have a single cluster to manage, so adding it to Helios was a simple process. I posted a video of the process, showing my first time using Helios and how simple it was to get started. I talk about IT simplification a lot; this is definitely easy to operate.

There are plenty of reasons to copy your production data. In my last blog post I talked about the defensive reasons: protection against things going wrong. Today I want to talk about the more positive reasons to copy data, the ways that data copies can make your business more productive and profitable. All of the data copies we made in the last post were insurance; we are winning if we never need to access those copies. The positive reasons for copying data are all about making the data accessible immediately and getting value out of that immediate access. Insurance copies of data are about durability and metadata searchability; production copies are about performance and are often short-lived. There will be value in having a platform for managing these valuable data copies, but it will need some sophisticated capabilities to deliver business value.

Enterprise IT organizations like to have multiple copies of every piece of data, but every copy we store has a cost. It is vital to know why you are making a copy of your data and to choose the right place and product to store that copy. Traditionally we made copies of data because bad things could happen; I will focus on that in this post. There are a few different categories of ways that things can go wrong, with varying requirements for the data copies. I will also talk about the good things that can happen when you make copies of data; that will be another post. There are also considerations when you want to use a single platform for all of your data copying, which may end up being another blog post too.

Now that I am back in front of classrooms teaching AWS courses, it is time for the Notes from the Class blog posts to return. The nature of AWS means that every class I teach will have questions that I cannot immediately answer; these posts allow me to share the questions and answers with students.

I have had my Cohesity Virtual Edition appliance in my lab for a couple of months. It has been happily protecting my virtual machines, but its storage has become rather full. I did set up cloud tiering, which allows the least recently used deduplicated blocks to be migrated out to AWS S3. This tiering means that all my backups have continued to complete, and my most recent backups are on-premises for fast restore if required. However, I would prefer to have all my backup data available on-premises, so I need to expand the storage of my backup appliance. I am also sending a daily archive to AWS Glacier, so I have off-premises copies for disaster recovery should anything happen to my on-premises data shed.
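The least-recently-used tiering behaviour described above can be sketched as a simple eviction loop. This is a toy model of the idea only, not Cohesity's actual implementation; the block names, sizes, and capacity figures are all invented for illustration.

```python
from collections import namedtuple

# Hypothetical model of LRU cloud tiering: when local capacity is exceeded,
# the coldest deduplicated blocks are moved out to S3.
Block = namedtuple("Block", ["block_id", "size_gib", "last_access_day"])

def plan_tiering(blocks, local_capacity_gib):
    """Return (kept, tiered) lists so that the kept blocks fit locally.

    Blocks are evicted coldest-first (smallest last_access_day), mirroring
    the least-recently-used behaviour described in the post.
    """
    kept, tiered = [], []
    used = sum(b.size_gib for b in blocks)
    for block in sorted(blocks, key=lambda b: b.last_access_day):
        if used > local_capacity_gib:
            tiered.append(block)          # evict this cold block to S3
            used -= block.size_gib
        else:
            kept.append(block)            # warm enough to stay on-premises
    return kept, tiered

blocks = [
    Block("a", 40, last_access_day=1),    # coldest, first to tier
    Block("b", 30, last_access_day=5),
    Block("c", 20, last_access_day=9),    # most recent backup
]
kept, tiered = plan_tiering(blocks, local_capacity_gib=60)
```

Note how the most recent backups stay on-premises for fast restore, which is exactly why the appliance kept completing backups even as it filled up.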

One of the central ideas of hyperconverged infrastructure, and of almost any modern IT infrastructure product, is simplicity. Simplicity of deployment is great; it delivers fast time to value. Simplicity in operation is even more critical, as it keeps the cost of ownership under control. Both converged and hyperconverged products have simple deployment, a matter of a few hours from hardware delivery to a deployed platform. But they are very different when it comes time to apply updates. Updating a vBlock to a new standard release is a professional services engagement and might take months to plan and weeks to execute. Most hyperconverged platforms include an updating process that can be initiated by customers and completed in one day, although you should test on a non-production system first. There are businesses replacing fleets of vBlocks with fleets of hyperconverged clusters just to make the updating process simpler.

At the Build Day Live event last year with Pure Storage, I was very impressed with the ability to update the Purity OS on the array without any downtime for the VMs that were hosted on the array. Equally impressive was the ability to upgrade from one model of the array to a more powerful one without any downtime. Hopefully, your on-premises infrastructure is this easy to update.

Today I upgraded the Cohesity Virtual Edition appliance that is protecting my lab environment. It took me under 20 minutes, including the 10 minutes to download the update file from the Cohesity support site; I recorded a video of the process, which is here on YouTube. I did test the update on another Cohesity Virtual Edition first. Hopefully, I'll be able to show you replication between those two appliances shortly.

I am continuing to learn about Cohesity and share my learnings with you. This week I added my Cohesity cluster to Active Directory so that I could use AD accounts to manage the platform rather than the built-in account. The process is shown in this video and took all of five minutes to complete. The security model in Cohesity is reasonably straightforward but flexible. Accounts are given a role, which defaults to being global but can be filtered to specific objects. There are roles for cluster administrators, backup operators, and backup viewers, as well as a couple more that I haven't investigated. There is also a facility to create custom roles based on your specific security policies. I granted one AD group administrative rights to replace using the admin account, and gave another group the operator role so that they could look after data management but not change the cluster setup. One important thing is to secure the built-in admin account's password: configuring AD authentication supplements built-in authentication, so the local accounts still exist. Set a complex password and document it in whatever safe location you use for system passwords. Now that the cluster is joined to AD, the login page has a drop-down for domain selection. The delegation of user authentication to Active Directory was quick and easy on my Cohesity cluster.
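The "global by default, filterable to specific objects" role scoping can be sketched in a few lines. This is an illustrative model of the concept only, not Cohesity's actual security code; the group names, role names, and object names are all made up.

```python
from dataclasses import dataclass, field

@dataclass
class RoleAssignment:
    """One principal (e.g. an AD group) holding one role."""
    principal: str
    role: str                                        # "admin", "operator", ...
    object_filter: set = field(default_factory=set)  # empty set = global scope

    def allows(self, role: str, obj: str) -> bool:
        if self.role != role:
            return False
        # An empty filter means the role applies to every object.
        return not self.object_filter or obj in self.object_filter

# Hypothetical assignments mirroring the post: one global admin group and
# one operator group filtered to data-management objects only.
assignments = [
    RoleAssignment("LAB\\Cohesity-Admins", "admin"),                # global
    RoleAssignment("LAB\\Backup-Ops", "operator", {"protection"}),  # filtered
]

def can(principal: str, role: str, obj: str) -> bool:
    return any(a.principal == principal and a.allows(role, obj)
               for a in assignments)
```

With this model, the operator group can manage the protection objects but gets no access to anything resembling cluster setup, which is the separation of duties described above.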

I am surprised that we do not have more SaaS-based management platforms; ever since Cloud Physics launched in 2013, it has made sense to me that SaaS is a great model for managing infrastructure. All of the usual SaaS benefits apply: the software is always up to date, and keeping it that way is not the IT team's problem. But the real genius of Cloud Physics is that they have a vast information warehouse of data about their customers' environments and can learn from that data to help every customer operate better. Just before VMworld USA, my friends at Cohesity launched their own SaaS management platform called Helios. One aspect of Helios is unified management of multiple Cohesity clusters, both on-premises and in public cloud. Another aspect is enabling more intelligent use of the information inside those Cohesity clusters.

I have an off-and-on relationship with vForum Sydney. I first attended the event when it was called TSX in 2007, right at the start of my time teaching VMware training courses. Back then there were a couple of hundred people at TSX Sydney; now vForum attracts thousands of attendees and feels like a smaller VMworld. I've attended most of the vForum events, except when it conflicted with VMworld EMEA one year, and I think OpenStack Summit another year. Along with VMUG UserCons, vForum is a gathering of the virtualization community, and it brings some superstars in from overseas too. I will be at vForum Sydney this year and am really looking forward to seeing my friends and doing some community activities.

VMdownunderground

I attended my first VMworld in 2011, and the community parties (VMunderground and CXI) were a revelation to me, a great place to meet people and talk. I came back and organized VMdownunderground, the community warm-up party before vForum Sydney. The party has happened before vForum every year, with Ryan McBride taking over the organization when I couldn't make it to vForum. There will be a VMdownunderground again this year, so you can come along to talk to other community people. Please register here on Eventbrite so you get a reminder and the address. We started organizing things this week, a little too late to get a lot of sponsorship, which means you will need to buy your own drinks. Great thanks to Actifio for sponsoring the event at short notice; hopefully we can provide some snacks.

vBrownBag TechTalks

There will also be vBrownBag TechTalks at vForum, as there have often been through the years. TechTalks are brief presentations that provide technical education of some sort. Presentations are video recorded and often live streamed, then published to the vBrownBag YouTube channel. If you would like to present a TechTalk at vForum Sydney this year, just fill out this form, and I will be in touch to schedule your session.

We all like the idea of a single pane of glass for system monitoring, but the reality is that monitoring data is often siloed away in a bunch of different tools that do not speak to each other, let alone speak the same language. We end up with several single panes of glass, each dedicated to its own data. Often each team is only aware of its own data, with no ability to correlate data between different infrastructure and application layers. What we could use is a Rosetta Stone that allows translation between the various data languages in our enterprise and permits data to be ingested for analysis and delivery to our favorite pane of glass.
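The Rosetta Stone idea boils down to per-tool translators that map each tool's dialect into one common schema a single pane of glass can ingest. A minimal sketch, where both tool names and all field names are invented for illustration and not taken from any real monitoring product:

```python
# Each hypothetical tool emits metrics in its own shape; a translator per
# tool maps them into one common schema: source, host, metric, value, timestamp.

def from_tool_a(event):
    # "Tool A" emits flat records: {"host": ..., "metric": ..., "val": ..., "ts": ...}
    return {"source": "tool_a", "host": event["host"],
            "metric": event["metric"], "value": event["val"],
            "timestamp": event["ts"]}

def from_tool_b(event):
    # "Tool B" nests the reading and uses different key names.
    return {"source": "tool_b", "host": event["node"],
            "metric": event["series"], "value": event["reading"]["value"],
            "timestamp": event["reading"]["time"]}

# The "Rosetta Stone": a registry of translators keyed by tool name.
TRANSLATORS = {"tool_a": from_tool_a, "tool_b": from_tool_b}

def ingest(tool, event):
    """Translate one tool-specific event into the common schema."""
    return TRANSLATORS[tool](event)

row = ingest("tool_b", {"node": "esx01",
                        "series": "cpu.usage",
                        "reading": {"value": 87.5, "time": 1541030400}})
```

Once every tool's events land in the same schema, correlating data across infrastructure and application layers becomes a query over one dataset instead of a swivel-chair exercise between panes of glass.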