What is your Virtual Function?

VDI

I first wrote about Citrix Provisioning Server (PVS) in 2012, focusing on a feature called vDisk Update Manager. Back then PVS was at version 6.0; the current release is version 7.1, and again it’s worth taking the time to look at a small change that has made a massive impact.

In this release Citrix has introduced a new vDisk cache option: Cache in device RAM with overflow on hard disk.

Before we leap into cache, let’s first wind back a little and quickly look at PVS. I have to admit that in a lot of recent work I have been using Citrix Machine Creation Services (MCS), not PVS. I’ve been neglecting one of Citrix’s imaging technologies because I’m not managing a production site, and therefore not having to deal with patching and updates or worrying about backend performance. A lot of the work I do is demonstrations and proofs of concept, so convenience is good for me. I’ve certainly found MCS useful, and I have a number of customers doing really good things with it in production at scale. However, the ace PVS has always carried is the ability to redirect the location of disk writes, and it’s a trump card as good as any you carried in the playground.

Why is that important? It’s been well documented that virtual desktops create lots of writes, in small blocks too. Although we are far from the early days of desktop virtualisation and this no longer comes as a surprise (if it did come as a surprise, I’d argue you didn’t really understand desktops before you started), PVS has come up with the perfect answer in its new write cache option.

There are now six options for the vDisk Cache type:

Cache on device hard drive

Cache on device hard drive persisted (NT 6.1 and later)

Cache in device RAM

Cache in device RAM with overflow on hard disk

Cache on server

Cache on server persisted

I have only ever used PVS with XenDesktop and XenApp, and have only ever looked at Cache in device RAM and Cache on device hard drive. I’ve always liked the idea of caching things in RAM: it’s fast and has become relatively cheap, but fill it up and you have nowhere to go. Well, you do: it crashes the server instance, and that in the end does give us somewhere to go, the exit door!

The second option, up till now, has been to move those writes to a hard disk on the host and take them away from your storage. The theory: we don’t want to keep them with our pooled desktops, so let’s write to a location that’s out of the way of the network, and cheaper. Many production environments use this and it works very well.

But now we have the ultimate hybrid: Cache in device RAM with overflow on hard disk. We can use the RAM option without the risk.

Implementing this feature is a straightforward procedure.

Make sure you have a disk to overflow to; if you are using the XenDesktop Setup Wizard in PVS this is created for you. If not, you do need to have one: it will need to be part of your VM as a local disk on the host, and it will appear as the next available drive letter in the VM when it boots (note: the streamed disk will normally be the first disk).

You are now ready to switch on the feature in the vDisk properties. Remember that to do this the disk has to have no locks. The feature is found in the properties section of the vDisk, in your Store.

Once enabled, you are asked to select a value for the Maximum RAM size in MBs. This is the amount of RAM to use as the cache; in the example below I have used 512 MB. Note that you should make sure you have an additional 512 MB of RAM assigned to your VM. You can play around with these numbers to find the best fit for your environment. I have found that with good storage and server-class hardware 512 MB works very well; however, in older environments and some labs I’ve increased this number to 1.5 GB to get good results.

Citrix Provisioning Server (PVS) is now at version 6.1. Released in version 6.0 was the vDisk Update Manager and I think it is worth some time looking more closely at this feature.

Once installed and configured, the PVS management console has a new node, the vDisk Update Manager. This has three sections: Hosts, vDisks and Tasks.

Hosts: references the hypervisor hosts in use; XenServer, ESX and Hyper-V are all supported.

vDisks: lists the PVS vDisks enabled for update management.

Tasks: holds the automated tasks that drive the Electronic Software Distribution (ESD) client software. Microsoft System Center Configuration Manager (SCCM), Microsoft Windows Server Update Services (WSUS) and custom scripts are all supported. Please note: the ESD client software must already be installed in the vDisk image.

First let’s take a look at manual updates. In previous versions you had to make a copy of the vhd file (vDisk), mount it against a new machine in PVS, boot and make the changes required, shut down the image, increment the vDisk version number and make it available to all machines. It was possible to script this process via PowerShell; however, in my experience not everyone did this, and there was still manual work to do to integrate it into your environment.

To utilise the new features you require a running environment with machines using a standard mode vDisk. This feature is only available for standard mode disks; private disks are read/write and can therefore be managed by existing ESD tools.

Step one is to add the host that you are using: right-click on the Hosts node and follow the wizard. You will require the correct host/pool credentials.

Step two is to add a vDisk to this tool: right-click on the vDisks node and follow the wizard. This will search your Store for available vDisks and ask you to enter the VM on your host that will be used as your maintenance VM. Finally, it will require an AD machine account in order to process the updates to the vDisk. If you have not created the VM on your host with the name specified in this step, do so now.

In the image below you can see the PVS Services Console with the properties of the vDisk Update Management node highlighted. It shows that the vDisk in the Store StoreW7 is on host XenServer-1 and that the VM on the host that will use this disk is called W73. The vDisk has to exist before you start step two; however, the tool does not check the host to make sure the VM is there, so the VM can be created afterwards if required. It will need to be present for this process to work correctly; as highlighted below, XenCenter is showing a VM with the name W73.

Once this is completed, from the Stores node you can select the vDisk and manage the versions: right-click on the vDisk in the Store and select Versions.

Select New; this will create a new disk in Maintenance mode. To edit this disk, boot your maintenance VM from the host. Once this is completed, if you refresh the vDisk Versions console you will see that a single device is now accessing this version of the disk and that the Access type is Maintenance. Please note: you have not had to remove any locks from the existing vDisk, log off any users or shut down machines to start this process.

You can now make any changes required to the VM, as the version you are using is in read/write mode. To achieve this PVS uses avhd files, creating a chain back to the original disk. This saves time and disk space when making changes; it also means that if we are not happy with the results we can revert quickly to the previous image. Once we are satisfied with the build we can merge all updates, which stops a long chain becoming a performance drain.

Reboot the VM to apply any changes and then shut the maintenance VM down. You can then promote the vDisk version and set its access to Test or Production. If you set it to Production and apply the changes immediately, on next reboot all users will have access to this disk.

Before the reboot the VMs are still using vDisk version 7.3

After the reboot they are now using version 7.4

As you can see, this is a great improvement on the original way of updating the vDisk. However, you will not always want to run through this process manually, especially when it comes to patching. This is where the Tasks feature of the vDisk Update Management tool comes into play. To implement one, right-click the Tasks node and follow the simple steps. By default this allows you to connect to WSUS or SCCM, or to run pre or post scripts. Like all tasks this can be scheduled, and after the update the vDisk can be placed into Maintenance, Test or straight into Production mode.

In my opinion, simplifying this process for desktops is a great win for XenDesktop. It makes it an easy step to get to grips with PVS, and by integrating with SCCM and WSUS most desktop admins will see significant advantages over their current virtual desktop update process. If you couple this with the advantages of PVS in terms of single-image disk management, read IOPS caching and control over write IOPS, we now have access to a powerful desktop management tool.

Finally, let’s not forget that all these advantages are available to XenApp too. With a number of organisations now virtualising XenApp servers, the number of server instances is on the rise; what better way to manage them than with PVS?

***Update***

Stephen (comments below) has provided some links to PVS documentation that go into more detail on additional tasks. Thanks Stephen.

VDI-in-a-Box is an interesting desktop virtualisation solution from Citrix because it utilises local storage; its aim is to simplify and reduce the cost of entry to desktop virtualisation for small and medium businesses. The main premise is that each piece of hardware acts as a single entity in a grid, as capacity is reached additional servers can be added, enabling the solution to be scaled out block-by-block.

When sizing VDI-in-a-Box there are four main considerations: disk space, disk speed, RAM and CPU. I will look at each and discuss its potential impact. All calculations come from the Citrix eDocs guide (VDI-in-a-Box > VDI-in-a-Box 5.0.x > System Requirements for VDI-in-a-Box 5.0.2) or are referenced via link.

Disk Space

Calculating disk space requirements will depend upon your hypervisor and the number of images you intend to utilise. In the calculations below I have assumed that XenServer 6.0 with thin provisioning is used and that 2 images will be required for 2 desktop types: power users and normal users.

Fixed items will be the storage required for the hypervisor and the vdiMGR appliance, plus an amount Citrix recommends reserving for swap and transient activity. The Dell and Kaviza reference architecture recommends 100 GB for this. Therefore, in this instance, fixed storage will be:

8 GB for the hypervisor + 74 GB for the vdiMGR appliance + 100 GB for swap and transient activity

= 182 GB

For the desktop images, templates and virtual desktops, we will assume we are using non-persistent desktops, that we require 2 images and that the base image size is 20 GB for both desktop types. VDI-in-a-Box uses 2 times the image size, to maintain multiple images, and the cloning technology uses 15% of the original size of the desktop image for each desktop created. Therefore, if we require 100 desktops, the desktop storage requirement per host will be:

(2 images * 20 GB * 2) + (100 desktops * 20 GB * 15%)

= 80 GB + 300 GB

= 380 GB

I recommend adding additional space for growth; in this instance I will assume 30%. Therefore, total disk capacity requirements will be:

Fixed items + desktops + growth

= (182 GB + 380 GB) + 30%

= 562 GB + 167 GB

= 729 GB
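The disk-space calculation above can be sketched in a few lines of Python. The function and parameter names are my own; the constants (the 2x image multiplier, 15% clone overhead and 30% growth) are the assumptions stated in the text.

```python
def disk_space_gb(images, image_size_gb, desktops,
                  hypervisor_gb=8, vdimgr_gb=74, swap_gb=100,
                  growth=0.30):
    """Estimate per-host disk capacity for VDI-in-a-Box (GB)."""
    # Fixed items: hypervisor + vdiMGR appliance + swap/transient reserve
    fixed = hypervisor_gb + vdimgr_gb + swap_gb
    # 2x the image size per image, plus 15% of the image size per clone
    desktops_gb = images * image_size_gb * 2 + desktops * image_size_gb * 0.15
    return (fixed + desktops_gb) * (1 + growth)

# The worked example in the text: 2 x 20 GB images, 100 desktops.
# This gives roughly 731 GB; the text rounds the 30% growth figure
# down slightly, arriving at 729 GB.
print(round(disk_space_gb(images=2, image_size_gb=20, desktops=100)))
```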

Disk Speed

Disk speed is all about IOPS, and understanding desktop IOPS is an art in itself. There are a number of main areas of consideration: boot IOPS, login IOPS, average use or normal operation IOPS, application launch IOPS and log-off IOPS for each user type (e.g. normal or power user). A good guide to IOPS considerations and measurements has been written by Jim Moyle, titled “Windows 7 IOPS for VDI a Deep Dive 1.0”.

The IOPS numbers in my example should be seen as a guide only, to assist you in determining the disk speed required for your deployment. The fastest disk is not always the most economical; once you have determined the IOPS required, it is worth looking at different options for your hardware based on capacity versus unit cost.

In this guide I have referenced the Citrix blog article “Finding a Better Way to Estimate IOPS for VDI”. This is by no means a definitive guide; however, the section I have utilised, “Calculating Workload IOPS”, has a useful sub-section on user definitions and recommended “ballpark guesstimate” figures for IOPS per workload. By using these numbers I have been able to simplify my calculations and focus on this number alone rather than the boot and login numbers. The user definitions are listed below:

Light user: ~6 IOPS per concurrent user. This user is working in a single application and is not browsing the web.

Normal user: ~10 IOPS per concurrent user. This user is probably working in a few applications with minimal web browsing.

Power user: ~25 IOPS per concurrent user. This user usually runs multiple applications concurrently and spends considerable time browsing the web.

Heavy user: ~ 50 IOPS per concurrent user. This user is busy doing tasks that have high I/O requirements like compiling code or working with images or video.

Therefore in this example if we require 100 desktops, made up of 25 power users and 75 normal users our “guesstimate number” for IOPS will be:

(# of power users * 25 IOPS) + (# of normal users * 10 IOPS)

= (25 * 25) + (75 * 10)

= 625 + 750

= 1375 IOPS
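The same “guesstimate” can be expressed as a small helper, using the per-user figures quoted above from the Citrix blog article. The names here are my own:

```python
# Ballpark IOPS per concurrent user, per the quoted user definitions
IOPS_PER_USER = {"light": 6, "normal": 10, "power": 25, "heavy": 50}

def workload_iops(user_counts):
    """Sum ballpark workload IOPS for a mix of concurrent user types."""
    return sum(IOPS_PER_USER[kind] * count
               for kind, count in user_counts.items())

# The example in the text: 25 power users and 75 normal users
print(workload_iops({"power": 25, "normal": 75}))  # 1375
```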

Please note: to gauge a true reflection of required IOPS I would suggest monitoring the current usage patterns and workloads in your environment. To bring the peak IOPS requirement down, consider the start-up and login patterns and how these can be managed, e.g. by allowing for a longer boot time and starting the boot process overnight or early in the day, or by only allowing users to log off desktops rather than reboot them, and deciding what you want to happen to a desktop when a user does log off.

RAM

RAM requirements are an easier calculation to make; however, we still have to make a number of assumptions. Total RAM required will be the RAM required for the hypervisor, plus the RAM required for the vdiMGR appliance, plus overhead, plus the RAM required for the virtual desktops.

Citrix recommends at least 1.5 GB for Windows 7 and at least 0.5 GB for XP, with 1 GB for the hypervisor, 1 GB for the vdiMGR appliance and a 10% overhead.

I have made the assumption that all my desktops are Windows 7 and that a normal user requires 1.5 GB RAM and a power user 2 GB RAM. Therefore, in this example the total RAM required is:

1 GB hypervisor + 1 GB vdiMGR + (75 * 1.5 GB) + (25 * 2 GB) + 10% overhead

= (1 GB + 1 GB + 112.5 GB + 50 GB) + 10%

= 164.5 GB + 16.45 GB

≈ 181 GB

CPU

CPU is similar to RAM in that we need capacity for the hypervisor, the vdiMGR appliance, overhead and the virtual desktops. The amount of CPU required per user type will vary; in my example I will only be assigning 1 vCPU per desktop, however I will make the assumption that I will get fewer power users per core. Definitions vary: Citrix eDocs states 10 desktops per core for task workers, 8 for knowledge workers and 6 for heavy users, and the Dell reference architecture states 5 for basic users and 6 desktops per core for standard users.

In my example I have picked an average and will assign 8 desktops per core for normal users and 6 desktops per core for power users. Therefore the total CPU requirement is:

1 core hypervisor + 1 core vdiMGR + (75 / 8) + (25 / 6) + 10% overhead

= (1 + 1 + 9.4 + 4.2) + 10%

= 15.5 + 1.6

≈ 17 cores
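The RAM and CPU estimates can be sketched together as follows. The per-user figures (1.5 GB / 2 GB RAM; 8 / 6 desktops per core), the 1 GB and 1 core allowances for the hypervisor and the vdiMGR appliance, and the 10% overhead are the assumptions stated in the text; the function names are my own.

```python
def total_ram_gb(normal, power, overhead=0.10):
    """Total host RAM in GB: hypervisor + vdiMGR + desktops + overhead."""
    desktops = normal * 1.5 + power * 2.0  # GB per desktop type
    return (1 + 1 + desktops) * (1 + overhead)

def total_cores(normal, power, overhead=0.10):
    """Total CPU cores: hypervisor + vdiMGR + desktops + overhead."""
    desktops = normal / 8 + power / 6      # desktops per core by user type
    return (1 + 1 + desktops) * (1 + overhead)

# The example in the text: 75 normal users and 25 power users
print(round(total_ram_gb(75, 25)))  # 181
print(round(total_cores(75, 25)))   # 17
```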

In my example I have 100 users in total, of which 25 are classed as power users and 75 as normal users.

Each user will be supplied with a Windows 7 virtual desktop with 1 vCPU; however, normal users will receive 1.5 GB RAM and power users 2 GB RAM.

In total I will require:

729 GB of disk space

1375 IOPS

181 GB RAM

17 CPU cores

Choosing your Hardware Configuration

Considerations

There are a number of considerations we need to take into account when selecting hardware.

The first thing we need to discuss is availability. Server-class hardware is great; however, I do not like the idea of 100 desktops not being available, therefore for my example I am going to suggest an N+1 strategy. This will always depend upon risk vs. expenditure and ultimately needs to be a business decision.

Secondly, we need to think about the type of hardware, disk speeds and RAID configuration. As all resources are on a single appliance and we need both IOPS and disk space, in most instances blade configurations will be out of the question, as we will only have two disks. Therefore we are often looking at a 2 – 4 U server.

In regard to RAID configuration, RAID 0 will give us maximum disk capacity and IOPS but may make us nervous about availability, leaving RAID 1 + 0 as the option that gives us disk space and availability without too much penalty.

Finally, we need to consider disk speed, as IOPS availability is important. Citrix’s VDI-in-a-Box sizing guide states the following disk speeds:

In my calculations I have selected a middle ground and used the following:

SSD = 6,000 IOPS

15K = 175 IOPS

10K = 125 IOPS

Hardware Options

The two figures that stand out for me in the total resources required are IOPS and RAM. See the example below for a basic server configuration (Host Specifications):

In the example above, Table 1, a server with 16 cores, 96 GB of RAM and 8 x 15K SAS spindles in a RAID 1 + 0 configuration can support 25 power users and 56 normal users, based on my previous calculations and assumptions. As you can see, it is IOPS that restricts the number of power users and RAM that restricts the number of normal users.

In Table 2 below, if we increase the RAM to 128 GB the number of power user desktops is still limited by IOPS; however, the number of normal user desktops is now also limited by IOPS. Therefore, if we had purchased a server in this configuration we would have been wasting money on RAM. To take advantage of this amount of RAM we would need either to change the disk type or to increase the number of spindles.

At no point in the two hardware examples have CPU cores or disk capacity been an issue.

Getting the hardware right is important and your number of servers will depend upon your appetite for risk plus your understanding of your environment.

Analysing your environment before you start to gauge IOPS is an important step. The numbers here will help but can only be used as a guide. Partners familiar with VDI-in-a-Box will have a good understanding of resources required and may be able to speed up this process based on their history of work.

Finally, if you have completed some analysis using list-price numbers on hardware, you’ll see that by understanding your limit points you can make sure you get the best return on your hardware investment and reduce the overall cost per desktop.

I spend a lot of my time talking to customers and many invite me into their organisation to talk about virtual desktops, in fact they often mark the subject of the meeting VDI. I’m always interested in what customers are up to and I often learn a lot from what they are doing and how they are using IT to drive their business forward. Be that a large multi-national mining company, a small not for profit care agency or somewhere in between, the conversations are all interesting and everyone has their challenges.

Moving ore and looking after the elderly present very different challenges when it comes to information management; however, all want to discuss VDI. In other words, all think that VDI will meet a need in their business. Some will argue that this conversation has been happening for some time and is nothing new. I can hear you yawning now and shouting: “It’s vendor push, it’s market hype, it can’t be done for less than a gazillion dollars, the user experience is rubbish and technically it’s too hard. Come on, people have been blogging about this for years, get with the program and write something about Big Data please!”

So why do customers keep coming back to the topic, and where is the common value? I believe the common value starts with the apps; all businesses run on applications. It is applications that allow them to organise and process data into a meaningful product, enabling the business to function. Centralising apps, so that all information can be processed in one place (i.e. all data is in one place), makes a lot of sense. So when I listen to organisations discuss their business needs, it is applications we often come round to talking about.

The next stage with any customer is to trial and test our technology. This means I get to spend time on site with my sleeves rolled up, implementing products and integrating them into a proof of concept environment. These days I just about always implement a NetScaler VPX for remote access, XenDesktop and XenApp. At the end of every trial XenApp will have met the application requirements as discussed, but it is XenDesktop that almost always has the customer most excited about their next desktop roll-out. If you get VDI right then yes, apps are as important as ever, but virtual desktops are what the customer wants to do. You should not, in my opinion, underestimate the value of desktop virtualisation: the ability to rapidly deploy, to enable a single user experience from any location and to give a user a desktop that is always available without carrying a device.

When I first started at Citrix, over 5 years ago, I had just about everything locally installed by IT on my laptop; this was then shipped to my remote office (home) and away I went. All apps were available as hosted applications, but for me Office was installed locally and everything else, e.g. SAP, was hosted on Presentation Server. I had device flexibility in that I could get to my hosted apps from any device with an ICA client, but because of the way I worked remotely this rarely happened.

Things changed as our technology changed: accelerated access to mapped drives was cool, improved performance of hosted apps was always appreciated, and the new version of Office arriving streamed made a difference, in that I started to add additional apps to my local pool.

Then, over eighteen months ago, I decided to move my full working day into a virtual desktop. I had been using XenDesktop before, but not exclusively; I would chop and change between environments depending upon the task. But it felt like the right time to move, so after a quick copy of a few files, away I went.

The experience has been good for a number of reasons. Firstly, as a remote employee working from a home office, I never worry about mapped drives, centrally stored content or the device I am working from. It’s not unusual to find me outside, having pinched my wife’s MacBook, working on a document in our garden; it’s a pleasant place to be and my laptop stays docked. As I navigate through tasks launching multiple applications, the interaction is familiar and easy, and from a user experience point of view I have few complaints.

Do I still use my local device? Sure; over the year and a half I have dropped into a local browser to run video content and a presentation or two. Outside of that it’s really the personal stuff that I use locally: I access Facebook via my local browser, and I have a local Twitter client and Skype for family calls when I am away. My work desktop is thousands of kilometres away and, to be honest, I’m keeping it that way.

Sign up for an Amazon Web Services account. I have used EC2, the Amazon Elastic Compute Cloud, for this purpose. EC2 is one of two compute services from Amazon; it is worth looking through all the services on offer, as this will help you develop an understanding of the entire Amazon range of services.

Once the account has been created, sign in and launch the EC2 console.

From the console select your region; I selected Singapore. I completed a number of tests before selecting the region I wanted to use. I’m based in Perth, Western Australia, and my ISP routes me straight there, so my experience is very good. However, if you are based in the Eastern States of Australia, the results of my testing showed that the west coast of the United States would be the best option.

From the EC2 dashboard, select Network and Security, Security Groups. Create a new security group and assign the inbound port rules.

I added rules for RDP, HTTP and ICA, including the CGP port. I have not enabled multi-stream ICA or UDP streaming in this environment; if this is required, make the necessary adjustments in the security group.
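For repeatable set-ups, the same inbound rules can be expressed programmatically. The sketch below builds the rule list in the IpPermissions format that boto3’s authorize_security_group_ingress call expects; the port numbers are the standard ones for these protocols (RDP 3389, HTTP 80, ICA 1494, CGP 2598), and the wide-open 0.0.0.0/0 source range is an assumption for illustration that you should narrow in practice.

```python
# Inbound rules for the security group, as boto3 IpPermissions entries.
# 0.0.0.0/0 is illustrative only -- restrict to your own source range.
INBOUND_RULES = [
    {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
    for port in (3389, 80, 1494, 2598)  # RDP, HTTP, ICA, CGP
]

# With boto3 this would be applied roughly as follows (not run here;
# the group ID is a placeholder):
#   import boto3
#   ec2 = boto3.client("ec2", region_name="ap-southeast-1")
#   ec2.authorize_security_group_ingress(GroupId="sg-xxxxxxxx",
#                                        IpPermissions=INBOUND_RULES)

print([rule["FromPort"] for rule in INBOUND_RULES])
```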

Navigate to images and search for an AMI to use. In this instance I used ami-f4dfa1a6 (amazon/Windows-2008R2-SP1-English-Base-v101). Once selected you will need to launch the instance.

On launch you are required to select the number of instances, the availability zone and the instance type. If you are unsure of the cost or location, Amazon provides a very good pricing guide and FAQ, which is well worth referencing.

After selecting the instance type you will be asked to create a key pair, configure the firewall (which is a matter of adding the instance to your security group) and review the information you have selected.

From the EC2 dashboard it is now possible to start the instance. The status will change from stopped, to pending, and then to running. Once it is running, right-click the instance, retrieve the Windows password and connect to the device. This will start an RDP session to the server.

After logging into the server, launch the EC2 Configuration Service and untick the Set Computer Name, Initialize Drives and Password options. Then, in Computer Management, reset the computer’s hostname. The new hostname will be used as the license server name.

I have not played with the other features; however, if I were to build more than one device and required the use of Sysprep, this might change the options I have chosen.

Following a reboot, download the XenApp 6.5 ISO and mount it on the server. I used Virtual CloneDrive for this. It is possible to convert this instance to a template for further use; I have not completed this step in this case. Please note: you will need a MyCitrix account to access the media and evaluation licenses.

Once mounted, follow the XenApp 6.5 install process, installing the edition and components required. For a single-server set-up I installed XenApp, Web Interface and licensing all on a single device.

In this environment I have not used an AWS Elastic IP address and am therefore presented with a different public IP address at each instance start. Therefore, to access published applications over the Internet, I have configured the Web Interface and Services site to use alternate address translation, and on each instance start I set the alternate address (ALTADDR) on the XenApp server to map the current public IP address.
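As a sketch of that per-boot step, the snippet below builds the XenApp ALTADDR command for a given public IP. In practice the address would be read from the EC2 instance metadata service and the command executed by a startup script; the IP address shown is purely illustrative.

```python
# On an EC2 instance, the current public IP can be read from the
# metadata service:
#   http://169.254.169.254/latest/meta-data/public-ipv4
# Here we only build the command line a startup script would run.

def altaddr_command(public_ip):
    # ALTADDR /SET registers the alternate (external) address that
    # ICA clients connecting from the Internet should use
    return f"altaddr /set {public_ip}"

print(altaddr_command("54.251.0.10"))  # altaddr /set 54.251.0.10
```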

Setting the secure access method on the XenApp Web Interface and Services sites.
