If you are familiar with the Horizon stack from VMware, you will know that one of its strategies is to enable cross-datacenter, Active/Active or BCP solutions for end-user computing environments such as VDI. While Cloud Pod Architecture is a key feature in Horizon View for supporting cross-datacenter access, we have to be careful when designing the solutions on top of it to ensure a truly cross-datacenter VDI environment.

In a case I worked on recently, replicating App Volumes AppStacks became a challenge. I will explain the problem and the solution accordingly, but first let me introduce the background and requirements of the VDI infrastructure my customer needed.

Background

My customer wished to deploy a dual-site VDI environment, where end users primarily connect to one VDI datacenter and, in case of an issue, can be redirected to the DR datacenter to get BCP workstations. The required behaviour is as follows:

Normal scenario – VDI end users go to the production environment, while the DR VDI environment is marked down in the global load balancer.

DR scenario – the DR environment is enabled and all users are pointed to the BCP VDI machines.

So, to clarify, the above architecture comprises two separate Horizon View instances and is NOT a truly Active/Active VDI environment leveraging Cloud Pod Architecture in Horizon View. This fulfils the BCP control requirement and provides dedicated VDI pool environments per site.

Problem

The VDI architecture itself was fine, and the users were very satisfied with the performance and behaviour, but later on the customer embraced App Volumes for application virtualization, so we had to set up an architecture for replicating the App Volumes AppStacks across the two datacenters. Well, if you have the luxury of active-active storage, that is a non-issue. And if you have a joint App Volumes Manager across sites, it is a non-issue too, as you can use a storage group to replicate the AppStacks across the two sites. For details, you can refer to the whitepaper HERE.

As mentioned, a true Active/Active deployment like the following is usually the best approach, as you keep only one copy of the configuration, including the AppStack assignments. But we would have to employ a clustered SQL database across sites; specifically, SQL Server Always On would be the solution.

Yet the point is, my customer did NOT have an Always On architecture in their environment, and this is why we had to set up separate SQL database instances per site and keep two copies of the configuration. Yes, manual synchronization of certain settings is expected. But the real question is how to sync the AppStacks across the two sites.

To synchronize the AppStacks between two App Volumes Managers, we can in principle just copy the AppStack VMDKs across the datacenters. Well... but is it really “just” a copying task? For vSAN, the answer is no: you cannot directly SCP between two vSAN datastores. Some extra steps are needed, as described in the VMware blog HERE. This is because a vSAN datastore is object-based storage, whose VMDK hierarchy differs from traditional storage.
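To make concrete why this is not a plain file copy: on vSAN, the VMDK descriptor references object UUIDs rather than flat files, so moving a disk off (or onto) a vSAN datastore is done with a managed clone, for example with vmkfstools on an ESXi host. A minimal sketch, where the datastore paths and AppStack name are made up:

```shell
# Run on an ESXi host that can see both datastores.
# vmkfstools resolves the vSAN object layout during the clone,
# which a plain scp of the descriptor file would not.
vmkfstools -i /vmfs/volumes/vsanDatastore/cloudvolumes/apps/firefox.vmdk \
           /vmfs/volumes/nfs_staging/cloudvolumes/apps/firefox.vmdk \
           -d thin
```

The storage group solution described in this post automates exactly this kind of copy, so you normally never have to run it by hand.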

Solution

Thus, as a solution for the vSAN replication, we again leverage the storage group feature of App Volumes. The following diagram shows what we want to achieve:

In Site 1, we set up Storage Group #1, which includes the vSAN datastore and a traditional datastore (NFS, iSCSI or FC). In Site 2, we have Storage Group #2, which includes the vSAN datastore from the DR site and the same traditional datastore used in the production site. We do this because of the characteristics of vSAN-to-vSAN copying: we have to use an intermediate traditional datastore for staging. For simplicity, I use an NFS share that is mounted at both sites.

So in the production site, we set up Storage Group #1.

We include the vSAN datastore and the staging NFS datastore so that every newly created AppStack is synced across the two datastores.

Note that the NFS datastore should be configured as not “Attachable”: we want the vSAN datastore to present the AppStacks for performance reasons, while the NFS share acts only as staging storage, not for performance.

In the DR site's App Volumes Manager, we need to create another storage group, Storage Group #2, which again includes the DR vSAN datastore and the staging NFS share.

Remember to tick the two checkboxes “Automatically Import AppStacks” and “Automatically Replicate AppStacks”. This is critical: the DR App Volumes Manager has no information about the AppStacks, and the import action adds the AppStacks from the existing VMDKs replicated from the production site.

The imported AppStacks will have NO assignments, as again the DR App Volumes Manager does NOT know what the assignments are in the production App Volumes Manager. So we need to re-create the assignments on the DR side to ensure the correct applications are granted to the appropriate users.

Result

So, finally, no matter whether an end user logs in to the production or the DR site, they get the same user experience. Since cross-site replication is schedule-based, you can manually trigger a replication from the storage group view if you would like to push an update across sites immediately.

Here Appvol01 is the production site and Appvol11 is the BCP environment; both see the same Firefox AppStack.

vSphere 6.5 has been out for a while now, so you probably already know about the vCenter Server Appliance migration function. Many blogs have discussed what it does and what it supports. In this post, I would like to share one use case: migrating Linked Mode vCenter 5.5 servers. In vSphere 5.5 it was common to use Windows-based vCenter Servers; in fact, Linked Mode required the Windows-based version ONLY. As the vCSA became more scalable and feature-rich in versions 6.0 and 6.5, we no longer need the Windows-based vCenter to support Linked Mode. But how should you migrate an existing Linked Mode setup? Let me illustrate why it is not that trivial, and show you how the migration can actually be performed.

vSphere 5.5 environment

I prepared an environment of three linked vCenter Servers, each node being an all-in-one deployment. The high-level logical diagram is as follows:

As the vCenter Servers are linked, you can see the inventories of the other vCenter instances from both the C# thick client and the Web Client. Just to remind you, once you upgrade to 6.0 or 6.5 this will no longer be the case: you can only see the enhanced linked vCenter Servers from the Web Client, not the thick client anymore.

Challenge in Upgrade

When we try to upgrade the above environment, it turns out to be not so straightforward. Let me quote the VMware KB Supported and deprecated topologies for VMware vSphere 6.5 (2147672) HERE to illustrate the problem we face. The problem comes from the following supported architectures for deploying Enhanced Linked Mode.

Simply think of the new Platform Services Controller (PSC) as the SSO server of pre-6.0 vSphere versions. As you can see, VMware does not support an all-in-one deployment when using Enhanced Linked Mode. We have to deploy a (simplified) architecture like the following for a vSphere 6.5 environment.

In simple words, we need to break the AIO vCenter Server into a two-tier deployment to ensure the end-state architecture is a supported configuration.

What do we need to do?

This is why, before the upgrade and still in the vSphere 5.5 environment, we have to deprecate the existing SSO server embedded in the AIO vCenter Server, create a separate SSO tier, and then repoint each vCenter Server to the new SSO server accordingly. You may ask why we don't do this after the upgrade; the point is that SSO/PSC repointing is not supported in version 6.5, which is why we have to do it beforehand.

Setting up the new SSO server group is relatively easy: deploy the first node as a standalone SSO server, and set up the second and third nodes as multisite SSO servers.

After setting up all three SSO servers, in my case sso1.vmware.lab, sso2.vmware.lab and sso3.vmware.lab, we have to repoint the vCenter Server nodes to the corresponding SSO servers.

Repointing the SSO Server

Thanks to the VMware KB How to repoint and re-register vCenter Server 5.1 / 5.5 and components (2033620) HERE, we don't need to reinvent the wheel to repoint the SSO server. Follow the steps in the KB, which are:

Re-register vCenter Inventory Service with vCenter Single Sign-On

Register vCenter Server with a different vCenter Single Sign-On instance

Re-register vCenter Server with the Inventory Service

Register the vSphere Web Client with a different vCenter Single Sign-On instance

The detailed steps are as follows:

Open a Command Prompt and change directory to C:\Program Files\VMware\Infrastructure\Inventory Service\scripts
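Per KB 2033620, the Inventory Service re-registration in that directory is driven by the is-change-sso.bat script. An illustrative invocation for my first node; the SSO URL, account and password are placeholders, so verify the exact arguments against the KB:

```shell
REM Repoint the Inventory Service of node 1 to the new standalone SSO server
is-change-sso.bat https://sso1.vmware.lab:7444/lookupservice/sdk "administrator@vsphere.local" "SSO_admin_password"
```

After this script, continue with the vCenter Server, Inventory Service and Web Client re-registration steps from the same KB.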

Repeat the above steps on all three vCenter Server nodes to repoint each of them to its own SSO server. After a successful repoint, you should still see Linked Mode working. You may lose the users, roles and permissions of the vsphere.local directory; if you are using AD users, you should still see their permissions.

So finally we have the supported architecture, and we can use the vCSA 6.5 installer to migrate the SSO and vCenter Server nodes into virtual appliances.

Migrating the SSO Server Nodes

After repointing the SSO, we need to upgrade all the SSO server nodes first, before upgrading the vCenter Server nodes. The steps are comparatively trivial: the wizard helps convert your Windows-based SSO server into a vCSA with the PSC 6.5 service enabled. You can refer to the following steps:

Kick off the vCSA installer

Click Next to Proceed

Accept the EULA

Here you need to input the Migration Assistant information (you have to copy the Migration Assistant binary to the Windows-based SSO/PSC server or vCenter Server being converted).

This is what you should see when starting the Migration Assistant on the SSO/PSC server or vCenter Server.

Input the ESXi host (or vCenter) information to deploy the vCSA onto.

Give the new PSC vCSA a VM name. Remember that the hostname is NOT this name; the hostname remains the existing SSO server's hostname. In my case: sso1.vmware.lab, sso2.vmware.lab and sso3.vmware.lab.

Choose the storage information

Provide the temporary IP information for the conversion. This IP is released when the conversion completes.

Confirm the wizard to kick off the configuration migration from the existing SSO server to the new PSC 6.5 server.

Click OK to confirm the prompt

Wait until the data transfer tasks complete.

When all the steps are done, you can proceed to convert the next SSO server, repeating until all the SSO servers have been converted. Each existing SSO server is powered off during the conversion to avoid network conflicts.

Migrating the vCenter Server Nodes

After all the SSO servers are converted and upgraded to vCSA 6.5, you can proceed to converting the vCenter Server nodes. One reminder: remove the deprecated SSO service from your AIO vCenter Server so that the Migration Assistant does not misinterpret the machine as an AIO server.

So after removing the SSO server (the greyed-out one), let's convert the vCenter Server into vCSA 6.5. You leverage the same tool as for converting the PSC, and the steps are very similar; this time, fill in the vCenter details instead.

Unlike the PSC, for a vCenter deployment you can select the deployment size.

You are actually recommended to break Linked Mode before the migration, but it is okay if you don't: a prompt will warn you that migrating a vCenter from Windows to vCSA 6.5 takes the corresponding node out of the existing Linked Mode, which is fine.

You can choose what to convert. This is useful if performance data, events or tasks are not needed; these usually make up a large part of the database, and skipping them speeds up the conversion.

That completes the conversion of this vCenter node; continue and repeat the conversion on the other vCenter nodes.

You are all done! You can then check the Enhanced Linked Mode status after the migration and upgrade, of course from the HTML5 client too!

The linked vCenters are shown, the folders are there, and the user permissions are all intact. We're done!

I hope this is helpful. And let me call it out again: it is TIME to move to vCSA and vSphere 6.5; they are ready, locked and loaded!

A minor version upgrade surprised me again... Horizon View 7.1 came out just days ago, and it not only brings enhancements to the remote protocol AGAIN, it also brings many new features. From the VMware blog posts HERE and HERE, you can check out some of those cool new features. In this blog, I would like to test out RDSH published apps; I have to say I love the new unauthenticated access feature for RDSH.

I think this is particularly important, and it makes sense when a customer wants to offer applications to external users without trusting the other party's AD to let them in. Of course, unauthenticated access to apps does not mean we cannot control access to an application: authentication can still be performed within the RDSH app itself. For example, we can publish the vSphere C# Client from the RDSH hosts and allow everyone to launch it as an RDSH app, yet to access vCenter or the ESXi hosts, users still need to authenticate against them.

Getting Started by Upgrading to Horizon 7.1

As my environment was Horizon 7.0.3, I had to upgrade Horizon (both server and client) to test unauthenticated access. Like any Horizon View upgrade, it is not difficult at all, but let me recap some screens here for your reference.

Setting up the RDSH hosts is again very simple: install the RDSH feature in your Windows Server, and afterwards install the Horizon Agent.

Enable the Remote Desktop Session Host feature under the Remote Desktop Services Role

Well, if you would like to test Just-in-Time (JIT) applications too, remember to check “VMware Horizon Instant Clone” when installing the Agent. I chose it, definitely; how could I miss this other new enhancement in Horizon 7.1?

After installing the agent, reboot the host, then power off the VM and create a snapshot (which is used by instant clones). Remember that you need to upgrade the VM to virtual hardware version 11 before taking the snapshot.

Create a normal RDS Farm in Horizon View

Before you can create unauthenticated-access RDSH applications, you naturally need to create an RDSH farm in Horizon View. Double-check that Horizon View has already been upgraded to version 7.1.

If you did not set up the instant clone admin before, do so under “View Configuration”, or else you won't see the following screen.

I prefer using Blast as default protocol… trust me, it’s very good…

Give the farm a naming convention

I'm not using vSAN yet, but I will test this again with vSAN in another blog later.

Choose the golden image and the provisioning details.

Select the AD Container to provision your RDSH farm into

Confirm the Farm Provisioning

While the instant clones are being provisioned in the background, let's finally (!!!) start setting up unauthenticated access to your RDSH applications. Go to “Users and Groups” > “Unauthenticated Access” and “Add...” the user account that will be used to log in to the applications on behalf of anonymous users.

You can see a new setting, “Unauthenticated Access”, which is disabled by default. Choose “Enabled” to toggle it on.

We can then go back to “Catalog” > “Application Pools” to create the RDSH application pools and entitle them. In Horizon 7.1, we can entitle “Unauthenticated Users”.

Add back the account we added as an “Unauthenticated Users” member beforehand, and you are done!

Test it out!!!

Remember that unauthenticated access currently supports Windows and Linux clients, so you need to test it with one of these.

Download the latest Horizon View Client, which is version 4.4. You can then see the new option by clicking the options button at the top right-hand corner: enable “Log in anonymously using Unauthenticated Access”.

Connect to the Horizon server as usual. And, not quite as usual, you no longer need to input a username and password.

P.S…

So, one final thing I would like to share: a Blast demo in my office lab, with no GPU, no NVIDIA, and a poor broadband connection of less than 10 Mbps. But definitely awesome... Horizon 7.1 is a MUST upgrade!

Following the previous lengthy setup of vCAV on the service provider side, and after enabling a customer and their Org VDC for DRaaS to the cloud, we now need to configure the tenant side. This actually involves just one component: the vSphere Replication appliance.

Tenant Setup

Tenant setup is comparatively easy, as VMware does not expect a cloud user to do much to consume the cloud for DR. For a production environment, you may only need to set up vSphere Replication on the tenant side; for a test and development environment, we need to do a bit more. The following are the steps to enable it.

Deploy the vSphere Replication virtual appliance – I'm using vSphere Replication 6.1.1 for my testing. You can deploy it like any ordinary vSphere Replication appliance (as it actually is one).

Register the vSphere Replication Appliance with vCenter Single Sign On

Using a self-signed certificate in a development environment – you need to force the appliance to trust the vCloud Director certificate. You need to run the following command on the vSphere Replication appliance.
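The exact command differs between vSphere Replication builds; the general shape is to pull the VCD certificate and import it into the appliance's Java truststore. A sketch, where the VCD address, keystore path and store password are all assumptions to verify on your appliance:

```shell
# Fetch the vCloud Director certificate (vcd.provider.lab is a placeholder)
echo | openssl s_client -connect vcd.provider.lab:443 \
  -servername vcd.provider.lab 2>/dev/null | openssl x509 > /tmp/vcd.pem

# Import it into the appliance truststore; the keystore path and password
# are assumptions, check the HMS configuration on your appliance
keytool -importcert -noprompt -alias vcd -file /tmp/vcd.pem \
  -keystore /opt/vmware/hms/security/hms-truststore.jks \
  -storepass "$HMS_TRUSTSTORE_PASS"
```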

Configure the cloud provider – we need to add a cloud provider from the tenant side to replicate our VMs into; the steps are exactly the same as if you were connecting to vCloud Air for DRaaS. Start by clicking the highlighted icon, which you probably never hit before. Then input the address of your original VCD (NOT the Cloud Proxy), the organization, and the necessary credentials. Select the organization VDC that has been enabled for DRaaS from the list, confirm the setup, and you will see an alert under the status. Click the alert entry to configure the networks for the target cloud side: you have to provide a recovery network and a test network, which are used for an actual recovery and a test drill respectively. Just like with VMware Site Recovery Manager (SRM), you map the recovery network to an on-premises network port group, and the other network serves as the DR drill network. Then click Finish to confirm the configuration; the alert will be gone afterwards.

Configure replication – finally, you can configure the replication of your VMs. This is the same as replicating a VM with an ordinary vSphere Replication operation. Start by right-clicking an on-premises VM and clicking “Replicate to a Cloud Provider”; amazing, this used to work only with vCloud Air beforehand. Boom! You can see the Org VDC of your vCD environment; click it, and select the storage policy to copy towards. Enable WAN compression if you want to save bandwidth, choose the RPO, and enable point-in-time instances if you want to recover the VM at different snapshots in time. The replication then starts. ***It is normal to see “unknown” under VR server.***

Great! Let's perform the test! Well, replication itself is not really what you want to test, right? The real test is how we perform a DR drill and a recovery.

DRaaS Testing

With both the service provider and the tenant setup complete, we can finally perform the DRaaS test. Let's start by looking at what your cloud looks like after the replication completes:

You can see a replicated VM has been created in your cloud. Let's see what happens when we perform a test recovery or an actual recovery. As mentioned, vCAV comes with a new UI, the vCloud Availability for vCloud Director Portal, designed to provide tenants a dedicated portal to drive their DR actions. I will therefore cover both the actions you can perform through the existing on-premises vSphere Web Client and those through the vCAV UI at the service provider side.

DR Drill (Test Recovery)

Let's talk about the vCAV UI first. Open a browser, go to https://{vCAV-UI}:8443, and log in with your VCD credentials; you will be directed to a summary page.

To trigger a test failover, go to the “Workspaces” tab and click the VM you would like to test failover with. You will see the buttons on the right-hand control pane; click “Test” and then “Start” to trigger the test failover.

The UI reflects the status and result of the failover.

Of course, you can see the same result on the vSphere Web Client side.

You can also trigger a test recovery from the vSphere Web Client directly.

But you have to log in when triggering the recovery, because you did not log in to VCD when you logged in to the Web Client.

You get more options to choose from when using test recovery in the Web Client.

On confirmation, the test recovery is executed.

Again, the status and result of the test failover are visible from the vSphere Web Client.

So what does a test failover actually look like? Just like with Site Recovery Manager, your production workload continues to run while the DR workload is brought up, but mapped to a testing network.

Two networks are visible on the VM in the cloud: one for production in an actual recovery, and a testing one for test recovery.

Test Recovery Clean Up

On completing the test failover, just like with Site Recovery Manager again, we need to clean up the test-recovered VMs. You can do that from either the vCAV UI or the vSphere Web Client.

Do this by clicking the VM in the Workspaces tab, then selecting the “Cleanup” action from the right-hand pane.

The same action can be performed from the vSphere Web Client.

Actual Recovery

Once the test recovery succeeds, we can be confident that an actual recovery will work. We do not usually test an actual failover; we execute it in a planned migration or a real DR scenario. This is actually why VMware built the vCAV UI: it makes a lot of sense for the latter case, because when you have lost your on-premises vSphere Web Client, you can still trigger the DR recovery.

Just like the test recovery, you can trigger it by clicking the VM in the Workspaces tab of the vCAV UI and then triggering the failover from the right-hand control pane.

You can monitor the recovery status from the vCAV UI.

Or you can trigger a planned migration from the on-premises vSphere Web Client; remember that you cannot trigger an actual DR here, as the on-premises Web Client is likely gone when a DR scenario breaks out.

Again, you have more options when recovering a VM here compared with the vCAV UI.

From the vSphere Web Client, you can monitor the Status of Recovery too.

Reprotect

After recovery, once your site has resumed, you can, and should, reprotect your VM in the opposite direction to re-enable DR protection even while the VM is now in the cloud.

This can be done in the vCAV UI with the Reverse button.

Or with the “Reverse Replication” button in the on-premises vSphere Web Client.

This simply reverses the replication traffic, replicating the cloud VM back to the on-premises datacenter.

Failback

Finally, you can fail your VM back to the on-premises datacenter after the reprotect has completed. Of course, you should run a test recovery from the cloud back to the datacenter first, but I will skip that here.

Go to the vCAV UI > Workspaces, click a VM under the “Reversed” tab, and click the Failback button.

In the vSphere Web Client there is no “Reverse” button; instead, go to “Incoming Replications”, which lists the cloud-to-datacenter replications, and fail back the VM with the Start Recovery button.

On a successful failback, you should see all your VMs back online in your datacenter and the tasks reported as recovered successfully.

Great, right? That is the DRaaS enabled by vCAV, and this is how simply you, as a tenant, can consume it from any vCAN service provider who has deployed it. I hope this is helpful!

As introduced in the previous blog post, I'm going to walk through the service provider setup of vCloud Availability for vCloud Director (vCAV) here. The following summarises the components we will set up around the existing vCD cells.

Service Provider Setup

On the service provider side, I'm assuming you have your vCloud Director deployed. In this blog, my vCloud Director is version 8.10: a single-cell deployment that leverages the one-IP deployment method (sharing the HTTP and console proxy IP), which is a new feature in 8.10. I'm following the official guide to:

Prepare the environment

Install the new vCAV components

Configure the new vCAV components

As I'm just testing, some of the components are deployed for test and development purposes only; for instance, I'm using Docker for my MQ and Cassandra, and I'm deploying single instances of components that you should actually load-balance in a production environment. But anyway, let's first see how the simple setup works. Again, the steps I performed follow the official guideline, which you can refer to HERE.

Preparing the environment

Create the vCloud Availability Installer Appliance – a trivial step: deploy the vCloud Availability Installer Appliance from VMware.com. This component is the central control centre you will use to install the remaining vCAV components. DO NOT delete this appliance even after your setup, as you will need it whenever you have to scale or reconfigure your DRaaS environment.

Download the vCloud Availability for vCloud Director appliances – as mentioned, the Installer Appliance from step one only helps deploy vCAV; we still need to download the core vCAV appliances from VMware.com. Upload the downloaded vCAV component binaries to the vCAV Installer Appliance via SFTP, and take note that you will need to provide the paths to these OVA/OVF files during the vCAV deployment.

Configure vCloud Director for installation – you have to prepare your vCloud Director with the following items (if you are interested in the detailed steps for any of them, refer to the appendix of this blog series HERE; otherwise this post would be far too lengthy):

Use a wildcard certificate for vCloud Director if you are not already using one (I wasn't...).

Migrate your single-cell vCloud Director to a multi-cell configuration (needed for deploying the Cloud Proxy for vCAV).

Deploy and configure MQ with SSL (this is not the default for RabbitMQ).

Join vCloud Director to the lookup service.

Prepare the vCloud Availability Installer Appliance for the vCAV installation – there are two installation methods, the “Full Commands Installation” and the “Simple Commands Installation”; I am using the Simple Commands installation in this blog. This is why I have to create the registry file under ~/.vcav on the vCAV Installer Appliance. The registry file defines the vCenter and vCloud Director endpoint information used for component deployment, e.g. which datastore and host the components should be deployed on. In my case, the registry file looks like the following:
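With the simple-command method, the registry entries under ~/.vcav are created with the installer's create-registry commands rather than edited by hand. A sketch of the shape of what I ran; the endpoint names, addresses and flags here are assumptions based on the simple-command pattern, so verify each against the official guide for your vCAV version:

```shell
# Register the vCenter endpoint used for component placement
# (names and flags are assumptions; check the official guide)
vcav vsphere create-registry --vsphere=vsphere-01 \
  --address=vc.provider.lab --api-port=443

# Register the vCloud Director endpoint
vcav vcd create-registry --vcd=vcd-01 \
  --address=vcd.provider.lab --api-port=443
```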

Enable static IP address deployment – besides following the official guide to create the IP pool (you can do this through the vSphere Client UI if you are more familiar with it, although the command provided in the official guide also works fine), assign static IPs for the components and configure the corresponding DNS records. You need at least 7 IP addresses for the following components on the service provider side:

Create a trusted connection with the vCloud Availability Installer Appliance – ensure the certificates used in the environment are (force-)trusted by all the components.

Create the Cloud Proxy – this is actually another vCloud Director cell, with most of its functions disabled. The Cloud Proxy is required for handling the DRaaS traffic; this is why I mentioned we have to migrate from a single-cell deployment to a multi-cell one. The red-circled line below is the corresponding setting in the global.properties file on the Cloud Proxy cell.

Create containers (optional) – if you already have an MQ and a Cassandra in your environment, you can skip this step; and if you are preparing a production environment, skip it too. This step is for test and development purposes only: we can leverage container technology to deploy the MQ and Cassandra DB for vCAV easily (this is why VMware loves containers). I use Docker in my environment; you can follow the official guide section HERE, which instructs you how to create the Docker host and then the Docker images and configuration for both MQ with SSL and Cassandra. We can deploy the Docker host for running MQ and Cassandra with the following command.
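The guide drives this through the installer appliance; purely as an illustration of what the resulting containers amount to, here is a plain-Docker sketch. The image tags and port mappings are assumptions, and the RabbitMQ SSL configuration still has to be applied as per the guide:

```shell
# RabbitMQ for the vCD AMQP bus (SSL must still be configured separately;
# the cert mount path is an assumption)
docker run -d --name vcav-mq -p 5671:5671 \
  -v /opt/mq/certs:/certs rabbitmq:3.6

# Cassandra for the vSphere Replication Cloud Service metadata
docker run -d --name vcav-cassandra -p 9042:9042 cassandra:2.2
```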

Check the vCloud Director endpoints – after preparing the vCloud Director environment, we definitely have to validate the preparation before moving on to installing the components. You may find my command slightly different from the one in the official document: I had to add “-k”, as my wildcard certificate is not trusted. So if you see the above message, use a command like mine.
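For reference, the check I ran looks like the following; the command name follows the installer's simple-command set, and the extra -k is the part I had to add:

```shell
# Poll the vCloud Director API endpoint from the installer appliance.
# -k skips SSL verification, needed because my wildcard certificate
# is not trusted by the appliance.
vcav vcd wait-for-api --vcd=vcd-01 -k
```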

Installing the vCAV Components

Now we leverage the vCAV Installer Appliance to actually deploy the vCAV components. These steps are performed in the CLI, so I recommend using SSH rather than the VM console (you know you cannot copy and paste there). Before running the actual installation commands, prepare the password files, which are handy for ensuring you do not mistype passwords during deployment. Do this by running the following command on the vCAV Installer:
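A sketch of the password-file preparation; the file names and locations are assumptions, so match them to whatever password-file arguments you pass to the later deployment commands:

```shell
# Store endpoint passwords in root-only files on the installer appliance
# so the deployment commands can read them non-interactively.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
printf '%s' 'VMware1!' > ~/.ssh/.vsphere   # SSO administrator password (example)
printf '%s' 'VMware1!' > ~/.ssh/.vcd       # vCloud Director admin password (example)
chmod 600 ~/.ssh/.vsphere ~/.ssh/.vcd
```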

Then follow the steps below to set up all the vCAV components.

Create the vSphere Replication Manager (vRMS) – this may not be unfamiliar to you, as the same component is needed for plain vSphere Replication. The command to deploy vRMS from the vCAV Installer is as follows; customise the IP, hostname and OVF URL for your deployment:

Create the vSphere Replication Cloud Service (vRCS) – as the name suggests, this is the engine enabling DRaaS, i.e. cloud replication. The command to deploy vRCS from the vCAV Installer is as follows; customise the IP, hostname and OVF URL for your deployment:

Create the vSphere Replication Server (vRS) – the actual appliance performing the replication tasks. In a non-test environment, you will probably need multiple instances to handle the actual traffic from customer sites.

Deploy the vCAV UI portal appliance – this is the appliance running the separate UI dedicated to DRaaS for customer consumption. The command to deploy the vCAV UI from the vCAV Installer is as follows; customise the IP, hostname and OVF URL for your deployment:
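The four create commands above share a single pattern: point the installer at the registry entries, the uploaded OVF and the new appliance's network identity. A consolidated sketch; the command names follow the vCAV simple-command style (hms, hcs, hbr, vcd-ui), while the flag names, IPs, hostnames and OVF paths are assumptions to substitute from your own environment:

```shell
# vRMS: vSphere Replication Manager
vcav hms create --vsphere=vsphere-01 --vcd=vcd-01 \
  --vm-name=vrms-01 --vm-address=10.0.0.21 \
  --ova-address=/root/ova/vSphere_Replication_OVF10.ovf

# vRCS: vSphere Replication Cloud Service, the DRaaS engine
vcav hcs create --vsphere=vsphere-01 --vcd=vcd-01 --cassandra=cassandra-01 \
  --vm-name=vrcs-01 --vm-address=10.0.0.22 \
  --ova-address=/root/ova/vSphere_Replication_Cloud_OVF10.ovf

# vRS: vSphere Replication Server; deploy several for real customer traffic
vcav hbr create --vsphere=vsphere-01 --vcd=vcd-01 \
  --vm-name=vrs-01 --vm-address=10.0.0.23 \
  --ova-address=/root/ova/vSphere_Replication_AddOn_OVF10.ovf

# vCAV UI portal appliance
vcav vcd-ui create --vsphere=vsphere-01 --vcd=vcd-01 \
  --vm-name=vcav-ui-01 --vm-address=10.0.0.24 \
  --ova-address=/root/ova/vCloud_Availability_UI_OVF10.ovf
```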

Validate Deployment – To ensure the initial deployment is good, we can validate the deployment with the following command. Again I’ve added “-k” for all the vCAV commands I used to connect to the vCloud Director

Configure RabbitMQ Servers – Configure the vCloud Director cells to use the AMQP service provided by RabbitMQ with SSL enabled. You need to restart the VCD services on your nodes afterwards: execute "service vmware-vcd restart" on your VCD nodes, NOT on the vCAV Installer.
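If you run several cells, a small loop helps. The cell hostnames below are hypothetical, and I deliberately print the action so you can swap echo for ssh when you run it for real:

```shell
cells="vcd-cell-01 vcd-cell-02"   # hypothetical VCD cell hostnames
for cell in $cells; do
  # for a real run, replace 'echo' with: ssh root@"$cell" 'service vmware-vcd restart'
  echo "would restart vmware-vcd on $cell"
done
```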

Configure vSphere Replication Cloud Service (vRCS) – Register the vRCS with the vCenter Server which is managing the cloud resources (same as step 1) and update the vCloud Director role to include the vRCS privileges. ***Check the firewall if you hit an issue with the second command***

Configure Service Provider vCloud Director Organisations – Finally, enable a tenant to use the DRaaS. This feature is not enabled by default for all organisations and organization VDCs; you have to enable it selectively:

Great! The Service Provider part has been completed and we can proceed to the Tenant setup. As a Service Provider, I strongly recommend you have a look into this, as it enables a lot of value-added services you can provide to your customers on top of the IaaS service. Do start the evaluation, and I wish this is helpful for you!

"The biggest thing since vCloud Director" – This is the most suitable phrase I could use to describe VMware vCloud Availability for vCloud Director (vCAV). I have worked on a few vCloud Director projects over the years, including vCloud Director with Metro Cluster, vCloud Director migrations and vCloud Director with a customized portal on top. Those projects were great and I've benefited a lot in cloud building throughout. If you know me, you would know how much I love vCloud Director. It's simple and it's beautiful. Thus, I love to see how it evolves. I'm looking forward to vCloud Director 8.20, which should be announced soon. But today, I would like to talk about another ultimate weapon with which vCloud Director can be equipped: vCAV. It is an awesome value-added functionality for a public cloud, and at the same time very handy for customers to gain a DR solution.

Actually, vCloud Air has been empowered with this function for a while already, and you can use that. What vCloud Availability does is simply let vCloud Air Network service providers equip themselves with a similar DR-as-a-Service feature. Personally, I've been looking into this for a long time. Primarily because there is no vCloud Air coverage in Hong Kong, many of my customers are not allowed to put their VMs into AWS or vCloud Air because of data-location regulations. This is why I believe there are chances for local service providers who can offer DRaaS locally. And this is also why I would like to demonstrate what it actually looks like here, as I believe your local service providers (not just the Hong Kong ones) would also be interested in this.

What does vCAV include?

As mentioned in the previous blogs, vCloud Director is a very handy tool for Service Providers. It provides both the cloud engine and the portal service for delivering an IaaS service on top of a VMware vSphere environment. But what's more, and what's next?

Actually, VMware is trying to help and encourage Service Providers to deploy and develop value-added services on top. vCAV is one of these, helping to provide DRaaS for vCloud Director deployments. To deploy it, you would need the following items:

vCloud Availability Installer Appliance

Cloud Proxy

vCloud Director (vCD)

vSphere Replication Cloud Service (vRCS)

vSphere Replication Manager (vRMS)

vSphere Replication Server (vRS)

vCloud Availability for vCloud Director Portal (vCAV UI)

Cassandra

RabbitMQ

From the diagram below, you can find the basic architecture of the solution. Not all the items are covered, but this diagram gives you the network requirements for deploying vCAV. And the beauty is, you actually only need to enable port 443 from the on-premises site to the provider cloud.

Separate UI – vCAV 1.0.1 provides another web UI for triggering recovery; in version 1.0, this could only be performed via the API (vCAV UI)

Although it looks like there are a lot of components to deploy, vCloud Availability 1.0.1 provides a really easy way to deploy them all. No worries, you don't have to deploy each component manually; you just need to leverage the vCloud Availability Installer Appliance. I would call it a control centre for deploying, scaling and reconfiguring the vCAV environment. Thus, it's more than just a one-off installer, and I would recommend you NOT delete it even after the deployment.

The official user guide actually includes step-by-step commands for deploying the whole vCAV infrastructure, but I would like to let you know the expected symptoms and behaviour at each step, and that is the reason why I'm writing this series of blogs. This is just an introduction post; I will further create two blog posts, one for the Service Provider and another for the Tenant. I wish this will be helpful for you!

As the appendix of the captioned blog series, here I provide the steps for the "Preparing the vCloud Director" stage. As mentioned, there are a few things we need to enable in our existing vCloud Director deployment before we can deploy vCAV. To recap, they are:

Use a wildcard certificate for the vCloud Director cells if you are not already using one (I'm not…)

Migrate your single-cell vCloud Director to a multiple-cell configuration (needed for deploying the Cloud Proxy for vCAV).

Deploy and configure RabbitMQ with SSL (this is not the default for RabbitMQ).

Join the vCloud Director to the vSphere lookup service

Generate Wildcard Certificate for vCloud Director Cells

I'm using an Active Directory CA in my environment, so I use one of my domain-joined machines to request a wildcard certificate. This can be done in the MMC with the Certificates snap-in. Do request a "Legacy Key" template with PKCS#10 format.

The friendly name doesn't have to be the wildcard; I use it here just for easy identification

Input the subject details; CN = the wildcard name is the critical entry

Enable the extensions as follows

Make the Private Key Exportable

Then proceed to generate the certificate request

Copy the Certificate request content

And request the certificate from the AD

Generate as a Web Server Certificate

Download the Certificates from the AD

And import it back to the machine from which we requested the certificate

You can then see the wildcard certificate being available on the machine

We can then export it out for the vCloud Director cells
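Before uploading, it is worth confirming the subject really is the wildcard. The following generates a throwaway self-signed wildcard certificate just to illustrate the openssl check (lab.local is an assumed domain; your real certificate comes from the AD CA):

```shell
# throwaway self-signed wildcard cert, only to demonstrate the check
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout wild.key -out wild.crt -subj "/CN=*.lab.local" 2>/dev/null
# the subject printed here is what the vCD cells will present to clients
openssl x509 -in wild.crt -noout -subject
```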

Upload the Wildcard certificate onto the vCloud Director Cells and you can replace the existing certificates with it according to the VMware KB HERE.

Don’t forget to replicate the wildcard certificate at the vCloud Director Portal

Migrate from Single Cell to Multiple Cell vCloud Director Deployment

There are already a number of blogs discussing this, so what I recap here are the high-level steps:

Create a NFS share for sharing between target vCD Cells

Copy the files under /opt/vmware/vcloud-director/data/transfer of the existing vCD cell

Stop the vCD service by “service vmware-vcd stop”

Mount the NFS share to the vCD cell at the /opt/vmware/vcloud-director/data/transfer

Start the vCD service by “service vmware-vcd start”

Share the /opt/vmware/vcloud-director/etc/responses.properties file and the certificate keystore among the hosts

Install the new vCD cells by mounting the same NFS share and using the same responses.properties and certificate keystore
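For step 4, the mount can be made persistent on each cell with an /etc/fstab entry; the server name and export path below are illustrative:

```
# /etc/fstab on every vCD cell (hypothetical NFS server and export)
nfs01.lab.local:/export/vcd-transfer  /opt/vmware/vcloud-director/data/transfer  nfs  rw,hard,intr  0 0
```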

Deploy a RabbitMQ server with SSL enabled (NOT a container)

I've come across a very good blog HERE on configuring RabbitMQ with SSL, so I am not repeating it here.
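For orientation only, the SSL-related part of a classic-format rabbitmq.config looks roughly like this (5671 is the conventional AMQPS listener port; the certificate paths are illustrative):

```
[
  {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile, "/etc/rabbitmq/ssl/cacert.pem"},
                   {certfile,   "/etc/rabbitmq/ssl/server_cert.pem"},
                   {keyfile,    "/etc/rabbitmq/ssl/server_key.pem"},
                   {verify,     verify_none},
                   {fail_if_no_peer_cert, false}]}
  ]}
].
```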

Join the vCloud Director to vSphere Lookup services

This may not be difficult for you, as you can follow the standard procedure to add the federation setting in the vCloud Director admin UI. But remember this caveat: you need to append "/cloud" to the lookup service URL in the vCloud Director setting. ***Even the hint under the text box doesn't say so*** I'm checking with the support team on this cosmetic error.

Then you can just add the lookup service URL under the Federation tab

On success you will see this, and you will have to log in with an SSO user. So do add SSO users as your system administrators by granting them the rights. Or, if you want to log in with a local user, go to the URL https://vCDFQDN/cloud/login.jsp.

Having completed all the above, your vCloud Director environment is well prepared and you can continue with the vCAV setup!!! I wish this is helpful for you!

In the latest VMware blog HERE, VMware has just announced Horizon View 7.1, in which a new protocol, BEAT, is introduced and Just-in-Time (JIT) applications are emphasised. The term JIT refers to the Instant Clone technology, which helps provision VMs in a very short time through linked-clone techniques on both the storage and memory side. So we can expect the JIT applications in Horizon 7.1 to be instant clones of RDSH hosts; yet as it has not been GA-ed yet, let's just wait for it.

In this blog, instead, I would like to share the way to use JIT desktops, which arrived with Horizon 7.0. Some may say linked-clone or JIT desktops are just something that provisions faster than full-clone desktops. BUT, I would like to emphasise here again, this is absolutely NOT the case. With full clones, we expect user data to persist on the desktop, such that one user may be tied to one desktop. Linked-clone and JIT desktops, in contrast, aim to make desktops a shared pool. We try to provide a pool of desktops for users to consume, such that if one desktop breaks down, they can use another directly with the same user experience.

Why we need JIT desktop and App Volume???

To achieve this, we have to understand what makes up a user's "user experience" of a desktop. I think it can be layered as:

Desktop OS layer

Application layer

User Environment layer

On a physical desktop, all three layers are tied tightly together, and we generally don't care about separating them. But in VDI, if we can decouple the layers, we can recompose a "user experience" based on a policy, making desktop management easy and possible.

While JIT desktops from Horizon 7 enable us to create pools of consistent desktop OSes, App Volumes instead helps provide decoupled applications on top of the desktop. I don't cover it in this post, but VMware User Environment Manager can further handle the user environment layer perfectly on top.

So how to build JIT desktops?

Actually, that's simple. It's even simpler than using linked clones in Horizon View. With a linked-clone deployment, you need to deploy the Horizon View Composer server, right? But for JIT desktops, you don't have to use it at all. So, to begin with, you need to ensure the Horizon View Agent is set up properly for JIT deployment, which differs from the default options during the agent setup. You have to enable instant clone when you come to the following page

As the options for "View Composer" and "Instant Clone" conflict, we need to disable "View Composer" and enable "Instant Clone", just like the following.

After preparing the image, you have to take a snapshot, just as you would for linked-clone desktops. Then you can open the Horizon Admin page to set up the instant-clone desktop pools. But before that, go to "View Configuration" > "Instant Clone Domain Admins" to add an administrator for performing the instant-clone actions. As we need to create domain objects and perform domain-join tasks, do ensure the user rights are proper.

Afterwards you can go and create a new desktop pool; there is not much special about provisioning an instant-clone pool. Note, however, that the user assignment has to be floating.

If not, you won’t be able to see the “Instant Clones” option

Select the snapshot just like how you do it for linked clone desktops

Wait for the cloning tasks to complete. It is expected that you will see at least one cp-parent object per host, one cp-replica per datastore, and a cp-template VM.

You can try out the behaviour of instant clones: if you power off an instant-clone VM, a fresh one will be provisioned again

So after user assignment, you can test it out using either the View Client or a browser if you have enabled HTML Access. DO try powering it off from within the View session too; you will see the magic of the JIT desktop.

You don't actually even need to recompose or refresh the desktops as with linked clones, as a desktop's life can be as short as a power-off action. And this is how a JIT desktop is created.

So how to build App Volume?

Afterwards, we have to set up the App Volumes Manager first. This is decoupled from Horizon View, and you can even use it with Citrix VDI. The basic mechanism is that we install an agent into the VDI desktops such that when a user logs in, the agent presents the applications, including desktop icons and binaries, onto the desktop. As the application layer is delivered by mounting VMDKs and running natively on the desktop OS, you can expect it to be very fast, and the performance of the applications will not be limited by any sandbox environment.

Thus to achieve the above, we need to:

Deploy the App Volumes Manager + Database

Deploy the App volumes Agent

Capture the Application as AppStacks for presenting

So, let’s get started!

Deploy the App Volumes Manager + Database

As App Volumes Manager is a Windows-based solution, the database required is MSSQL. We need to create a database for the setup.

You can create a login dedicated for this database

You then need to grant the dbo right on this database to the new user; do grant the dbo right on the msdb database too.
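The grants can be scripted; the database, login and server names below are all hypothetical, and the T-SQL is only printed here so you can review it before feeding it to sqlcmd:

```shell
# hypothetical names; review, then run via sqlcmd against your SQL Server
sql="
CREATE DATABASE AppVolumes;
CREATE LOGIN svc_appvol WITH PASSWORD = 'Secret123';
ALTER AUTHORIZATION ON DATABASE::AppVolumes TO svc_appvol; -- dbo on the app DB
USE msdb;
CREATE USER svc_appvol FOR LOGIN svc_appvol;
ALTER ROLE db_owner ADD MEMBER svc_appvol;                 -- dbo right on msdb
"
echo "$sql"
# sqlcmd -S sqlsrv01 -Q "$sql"   # uncomment to run for real
```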

Then you can start setting up App Volumes Manager on the Windows Server. Do this by downloading the ISO, mounting it, and running the setup binary; this is the same binary for setting up both the server and the client agent.

You can select the “Install App Volumes Manager” during the setup

Do select the External DB by “Connect to an existing SQL Server Database”

Input the DB connection string detail and proceed the setup

You will see the App Volumes icon on the desktop after a successful installation

After the setup, we need to perform the basic configuration of the App Volumes Manager. Start by clicking the App Volumes icon on the desktop; most of the admin tasks are done in the web GUI

First you need to input a license or using the evaluation one

Then we have to connect the app volumes to the AD

Afterwards, we need to grant someone in the AD the App Volumes Administrators role

Add the vCenter Server under "Machine Managers". I chose "Mount on Host" to offload from the vCenter Server the task of mounting and unmounting the disks for the VDI desktops, which it performs by default.

You will need to accept the vCenter certificate

And select the storage for the AppStack to be provisioned to

Start Upload the VMDK template as the seed of the AppStack and Writable Volume

Click the “Upload” to start importing the VMDK into the Datastore Selected

Click Okay to confirm and complete the initial setup

The following is the App Volumes main page after the initial configuration; we will come back here when creating the AppStacks for our applications.

Deploy the App volumes Agent

Using the same App Volumes installation binary, you can install the agent on the desktop image. You just have to deploy it on the template we used in the JIT Desktop section above and take a snapshot. Make sure the desktop network can reach the App Volumes Manager, and do not use NAT between the networks if possible (I have hit some issues with that before). And if you have antivirus running in your desktop image, you will need to follow a VMware KB to amend a registry key about the filter driver.

To capture an application you need a VDI desktop with the App Volumes Agent, and you need to log in with a domain user to ensure it connects to the App Volumes Manager. To kick-start the application capture:

Go to the App Volumes management page and click "Create AppStack" under "Volumes" > "AppStacks"

Give the AppStack a name and choose which storage it should be provisioned on

On clicking "Create AppStack", it will create the VMDK that stores the applications to be captured in this AppStack

It will be in "Unprovisioned" mode; we then need to click the "plus" icon at the front to provision the AppStack, which actually kicks off the new application capture

But before you "Provision" and capture the application, as mentioned, do log in to a VDI template or desktop image first so that App Volumes can see this VDI machine for the application capture

After login, you can see the VDI machine in the App Volumes UI, and you can click "Start Provisioning".

You can see the status change in the App Volumes UI; we then switch to the VDI desktop to install the corresponding applications.

You will see the App Volumes Agent prompt; following its message, do NOT click OK until you have finished the application installation

Just setup the Application as usual

Click the “OK” to finish the setup and the application capture

Click “Yes” to complete the setup

Click OK to restart the VM to complete the Application Capture

You will see the following after the reboot and login. This is needed to complete the whole capture; thus, DO NOT skip this step when you log back in to the VDI image you used to capture the application.

Then you can see the status of the captured app has changed, and you can assign the app to different AD users or groups.

On assignment, you can see the application being instantly provisioned on the desktop

YES!!! You have JIT desktops and application virtualisation done!!! In an upcoming blog we will study the User Environment Management considerations and see how to achieve that layer too.

Last week was an exciting week: the very first version of NSX supporting vSphere 6.5 has been released! I did deploy a few 6.5 labs, but as I previously could not deploy NSX on top of them, honestly, I could not test them through much. But finally, NSX 6.3 is out, which officially supports vSphere 6.5 (6.5a, actually, to be precise). Besides the vSphere 6.5 support, there are a lot of enhancements and improvements in stability and performance. You can refer to the release notes HERE. Actually, what makes me even more excited is the support for the vCloud Director 8.20 advanced networking services; this is why I am waiting for the release of vCloud Director 8.20 even more after the release of NSX 6.3.

Here, though, I would like to illustrate the upgrade steps and deployment procedure for NSX 6.3. Most of the steps are pretty much alike for any NSX version, but let me still capture the expected screens during the deployment for your reference.

Begin the vSphere Upgrade

As a prerequisite for NSX 6.3, you need vSphere 6.5a, NOT 6.5, although I think most production environments would not be at version 6.5 yet. This is why I have to first upgrade my existing 6.5 environment.

My existing environment comprises the following:

vCenter 6.5 Server Appliance

Management Host ESXi 6.5 x 3

Resource Host ESXi 6.5 x 2 (ROBO vSAN Enabled)

vSAN Witness Appliance 6.5

The upgrade approach is comparatively trivial, from point 1 to point 4. Let's get started! Do remember that as we are performing an upgrade instead of a greenfield deployment, you have to download the patch from the patch download link HERE.

So you have to download the 6.5a patch for the ESXi hosts

And also the 6.5a patch for the vCenter Appliance

Upgrade the vCenter Server

You just have to upload the ISO and attach it to the vCenter Server Appliance VM after downloading the patch. Then you can upgrade the vCenter Server by going to the management URL at https://<vCenter ip or FQDN>:5480.

Hit "Update" and then "Check CDROM". This GUI-based upgrade is so convenient to me, as we had to do this through the CLI in previous versions. Then you can proceed by hitting "Install CDROM Updates".

On accepting the EULA, the upgrade will be proceeded.

Wait until the update is completed and you have to reboot the vCenter Server Appliance

And you can do it at the “Summary” tab by using the “Reboot” button

All done!!! Is it simple enough? The vCenter upgrade has improved from version to version, but I truly believe the 6.5 version is the best of all.

Upgrade the ESXi Hosts

Then we can upgrade the ESXi hosts from version 6.5 to 6.5a; this is simply achievable through Update Manager, of course. And here is the beauty of 6.5 again: Update Manager is there by default inside the vCenter Server Appliance.

So you just need to upload the patches downloaded above

Create the baseline to include the patch

Attach the base line to the Cluster

Then remediate the hosts. Remember that even for the vSAN Witness, you can just remediate it instead of having to redeploy the appliance again.

After the upgrade you are all done with the pre-requisites for deploying the NSX 6.3.

Deploy the NSX 6.3

After upgrading vSphere, deploying NSX is an even easier task. You definitely have to download the NSX OVA, but the deployment is easy: you just import it the same as any VM appliance.

Then, you can go to the http://<nsx ip or fqdn>/admin, to configure the NSX registration.

Register the NSX to vCenter 6.5a instance, so good… it works!!!

Yes, then you can see the “Networking & Security” icon in the web client

You can further proceed to the NSX detail Setup, first step deploy the NSX Controller

Deploy the NSX features and VXLAN

You can see I have an L2-over-L3 VXLAN deployment, with the Mgmt cluster using the 192.168.100.x/24 network and the Res cluster using 192.168.101.x/24

I skipped the steps of creating the Segment ID, transport zone and logical switch as those are trivial, but the end result is as follows; you can see my VXLAN is working great!

Conclusion

This is really the greatest news of the Chinese New Year, in that I can shift my focus to vSphere 6.5 testing after the release of NSX 6.3. The upgrade steps for your existing 6.5 GA environment are also trivial, as shown above. I would like to invite you to test out the new NSX 6.3 too. Let me post an update again when vCloud Director 8.20 is out; I'm really looking forward to seeing the new features and integration between the two products!

P.S.

Again, as all the labs I deploy are nested on a vCloud Director based environment, I use my domain controller as the router too. If you are using the same approach, do remember to configure the MTU on your router to enable a 1600 MTU for the VXLAN connections!
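The arithmetic behind the 1600 figure is simple, and can be sketched as follows (the `ip link` command in the comment is just an example for a Linux-based router):

```shell
# VXLAN encapsulation adds roughly 50 bytes (outer Ethernet/IP/UDP/VXLAN
# headers) on top of each standard 1500-byte guest frame
payload=1500
vxlan_overhead=50
required=$((payload + vxlan_overhead))
echo "transport network needs MTU >= ${required}; 1600 gives headroom"
# on a Linux router, e.g.: ip link set dev eth1 mtu 1600
```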

Security is a strong focus of vSphere 6.5. You can have your vMotion traffic encrypted, which is very useful when migrating a VM across sites over the internet, and could be even more useful when you later have to migrate it into the cloud. We also have Secure Boot for UEFI, which ensures the boot device is trusted; as mentioned in the Auto Deploy blog series, this is the thing that stopped me booting my nested ESXi from a PXE image. So in this blog, I would like to walk through the setup steps and the caveats you should be aware of. You can refer to the documentation HERE for the details of the VM Encryption function in vSphere 6.5. But as I would like to let you visualise the setup, let me start with the setup procedure.

VM Encryption Setup

While vCenter and ESXi are responsible for the actual encryption mechanism, we need to set up a KMS server for storing the keys used for encrypting and decrypting the VM files.

For testing purposes, I have followed (and would suggest you follow) the blog post HERE by William Lam. In the post, we use a Docker container to hold the KMS server. Of course the keys will be lost when the Docker process goes down, but this provides a really handy way for us to test VM Encryption in this post. You can definitely use any Docker host to spin up the container, but here I use a Photon OS VM to do that for me. The following are the steps I used to set up the KMS:

First you need to prepare the Docker host for running the KMS container; you can download Photon OS from the link HERE. I downloaded the OVA with the virtual hardware v11 version, which fits my vSphere 6.5 environment

So you can then deploy it as generic photon OS

As you cannot assign a static IP to Photon OS through the deployment wizard, you have to log in to the VM console and alter the configuration file under /etc/systemd/network to give the Photon OS a static IP.
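Such a systemd-networkd file looks like this (the file name, interface and addresses are examples; restart the systemd-networkd service after editing):

```
# /etc/systemd/network/10-static-eth0.network (example addresses)
[Match]
Name=eth0

[Network]
Address=192.168.100.50/24
Gateway=192.168.100.1
DNS=192.168.100.10
```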

So following the Post from William, you can run the command:

docker pull lamw/vmwkmip

This pulls the image onto the Docker host; your Photon OS thus needs internet access to pull the Docker image down. After pulling the image, you can run the following command to start the KMS container:

docker run --rm -it -p 5696:5696 lamw/vmwkmip

I have to state again: a Docker-based KMS should not be used in a production environment as it is not stateful at all; the keys will be lost if the Docker process quits or goes down accidentally.

Anyway, as we are just testing the caveats here, you can continue the work and go back to the vSphere Web Client. You need to connect the vCenter to the KMS server from the "Configure" tab of the vCenter object, and hit "Add KMS Server" with the green plus icon.

You will see the following wizard, which lets you enter the KMS server information. The mandatory items are "Server Address" and "Server port", while you can give "Cluster name" and "Server alias" any name you want

Confirm the configuration by clicking “Yes”

The KMIP cert will be prompted to be trusted Manually

On successful configuration, you could see the KMS entry as following:

So after the basic configuration of the KMS server, we can start encrypting the VM

VM Encryption Test

Far easier than you might think, you actually just need to change the VM's storage policy: edit the VM Storage Policies while the VM is powered OFF.

The wizard let you assign the VM Encryption Policy to the VM to encrypt the VM

You can find the relevant tasks and events showing "Reconfigure VM", which is actually the encryption task

After the task is done, you can see more information from the VM summary.

the VM logo is with a “Lock” beside it

Encryption Entry under “VM Hardware”

VM storage policy is compliant with Encryption Policy

Well, all DONE! Is it simple enough?? And yes, this is how you can encrypt your VM with VM Encryption, the new feature in the vSphere 6.5 environment.