As of the writing of this article, this hotfix has not yet been added to CTX138115 (entitled "Recommended Updates for XenServer Hotfixes") or, as we like to call it "The Fastest Way to Patch A Vanilla XenServer With One or Two Reboots!" I imagine that resource will be updated to reflect XS65ESP1035 soon.

Personally and professionally, I will be installing this hotfix. Per CTX216249, here is what it addresses:

Duplicate entry for XS65ESP1021 was created when both XS65ESP1021 and XS65ESP1029 were applied.

After deleting a snapshot on a pool member that is not the pool master, a coalesce operation may not succeed. In such cases, the coalesce process can constantly retry to complete the operation, resulting in the creation of multiple RefCounts that can consume a lot of space on the pool member.

In addition, this hotfix contains the following improvement:

This fix lets users set a custom retrans value for their NFS SRs thereby giving them more fine-grained control over how they want NFS mounts to behave in their environment.

This is a storage-based hotfix, and while we can create VMs all day, we rely on the storage substrate to hold our precious VHDs, so plan accordingly and deploy it!

Applying The Patch Manually

As a disclaimer of sorts, always plan your patching during a maintenance window to prevent any production outages. For me, I am currently up-to-date and will be rebooting my XenServer host(s) in a few hours, so I manually applied this patch.

Why? If you look in XenCenter for updates, you won't see this hotfix listed (yet). If it were available in XenCenter, checks and balances would inform me that I need to suspend, migrate, or shut down VMs. For a standalone host, I really can't do that. In my pool, I can't reboot for a few hours, but I need this patch installed, so I simply do the following on my XenServer stand-alone server OR XenServer primary/master server:

Using the command line in XenCenter, I make a directory in /root/ called "ups" and then descend into that directory because I plan to use wget (Web Get) to download the patch via its link in http://support.citrix.com/article/CTX216249:

[root@colossus ~]# mkdir ups
[root@colossus ~]# cd ups

Now, using wget I specify what to download over port 80 and to save it as "hf35.zip":
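For example, the command looks like this (the URL below is a placeholder; copy the real hotfix link from CTX216249):

```shell
# Download the hotfix archive and save it locally as hf35.zip.
# The URL is a placeholder -- use the actual link from the CTX article.
wget -O hf35.zip "http://support.citrix.com/path/to/XS65ESP1035.zip"

# Expand the archive to get at the .xsupdate file inside.
unzip hf35.zip
```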

I'm a big fan of using shortcuts - especially where UUIDs are involved. Now that I have the patch ready to expand onto my XenServer master/stand-alone server, I want to create some kind of variable so I don't have to remember my host's UUID or the patch's UUID.
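The "shortcut" here is nothing more than shell variable sourcing. Below is a self-contained illustration using a mock inventory file with a made-up UUID; on a real XenServer host you would source /etc/xensource-inventory instead:

```shell
# Create a mock inventory file to demonstrate the technique.
# On a real XenServer host, the file is /etc/xensource-inventory.
cat > /tmp/xensource-inventory.demo <<'EOF'
INSTALLATION_UUID='207cd7c1-da20-479b-98bc-e84cac64d0c0'
EOF

# Sourcing the file turns each KEY='value' line into a shell variable.
source /tmp/xensource-inventory.demo
echo "Host UUID: $INSTALLATION_UUID"
```

Once sourced, $INSTALLATION_UUID can be reused in any subsequent xe command without retyping the UUID.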

For the host, I can simply source in a file that contains the XenServer primary/master server's INSTALLATION_UUID (better known as the host's UUID):
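Putting it together, the manual application looks roughly like this (a sketch; the patch file name and exact output handling are assumptions, not a transcript):

```shell
# Pull the host's UUID into the environment as $INSTALLATION_UUID.
source /etc/xensource-inventory

# Upload the patch; xe prints the patch UUID, which we capture.
PATCH_UUID=$(xe patch-upload file-name=XS65ESP1035.xsupdate)

# Apply the patch to this host using both UUIDs.
xe patch-apply uuid=$PATCH_UUID host-uuid=$INSTALLATION_UUID
```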

So, nothing really special -- just a quick way to apply patches to a XenServer primary/master server. In the same manner, you can substitute the $INSTALLATION_UUID with other host UUIDs in a pool configuration, etc.

Last year, I announced that we were working on a XenServer Administrator's Handbook, and I'm very pleased to announce that it's been published. Not only has it been published, but based on the Amazon reviews to date, we've done a pretty decent job. In part, I suspect that has a ton to do with the book being focused on what information you, XenServer administrators, need to be successful when running a XenServer environment regardless of scale or workload.

The handbook follows a simple premise: first you need to plan your deployment, and second you need to run it. With that in mind, we start with exactly what a XenServer is, define how it works, and explain what expectations it has of the infrastructure. After all, it's critical to understand how a product like XenServer interfaces with the real world, and how its virtual objects relate to each other. We even cover some of the misunderstandings those new to XenServer might have.

While it might be tempting to go deep on some of this stuff, Jesse and I both recognized that virtualization SREs have a job to do, and that's to run virtual infrastructure. As interesting as it might be to dig into how the product is implemented, that's not the role of an administrator's handbook. That's why the second half of the book provides some real-world scenarios, and how to go about solving them.

We had an almost limitless list of scenarios to choose from, and what you see in the book represents real-world situations which most SREs will face at some point. The goal of this format is a handbook which can be actively used, not something which is read once and placed on some shelf (virtual or physical). During the technical review phase, we sent copies out to actual XenServer admins, all of whom stated that we'd presented some piece of information they hadn't previously known. I for one consider that to be a fantastic compliment.

Lastly, I want to finish off by saying that like all good works, this is very much a "we" effort. Jesse did a top-notch job as co-author and brings the experience of someone whose job it is to help solve customer problems. Our technical reviewers added tremendously to the polish you'll find in the book. The O'Reilly Media team was a pleasure to work with, pushing when we needed to be pushed but understanding that day jobs and family take precedence.

So whether you're looking at XenServer out of personal interest, have been tasked with designing a XenServer installation to support Citrix workloads, clouds, or general purpose virtualization, or have a XenServer environment to call your own, there is something in here for you. On behalf of Jesse and myself, we hope that everyone who gets a copy finds it valuable. The XenServer Administrator's Handbook is available from booksellers everywhere, including:

With storage providers adding better functionality to provide features like QoS and fast snapshot & clone, and with the advent of storage-as-a-service, we are interested in the ability to utilize these features from XenServer. VMware’s VVols offering already allows integration of vendor-provided storage features into their hypervisor. Since most storage allows operations at the granularity of a LUN, the idea is to have a one-to-one mapping between a LUN on the backend and a virtual disk (VDI) on the hypervisor. In this post we are going to talk about the supplemental pack that we have developed in order to enable VDI-per-LUN.

XenServer Storage

To understand the supplemental pack, it is useful to first review how XenServer storage works. In XenServer, a storage repository (SR) is a top-level entity which acts as a pool for storing VDIs which appear to the VMs as virtual disks. XenServer provides different types of SRs (File, NFS, Local, iSCSI). In this post we will be looking at iSCSI based SRs as iSCSI is the most popular protocol for remote storage and the supplemental pack we developed is targeted towards iSCSI based SRs. An iSCSI SR uses LVM to store VDIs over logical volumes (hence the type is lvmoiscsi). For instance:
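On an lvmoiscsi SR, the backing volume group is named after the SR's UUID, so the logical volumes can be inspected from dom0 along these lines (a sketch; the UUID is the one from the example below):

```shell
# The volume group for an iSCSI SR is named VG_XenStorage-<SR-UUID>;
# each VDI appears as a VHD-* logical volume inside it.
lvs VG_XenStorage-c67132ec-0b1f-3a69-0305-6450bfccd790
```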

Here c67132ec-0b1f-3a69-0305-6450bfccd790 is the UUID of the SR. Each VDI is represented by a corresponding LV, named in the format VHD-<VDI-UUID>. Some of the LVs have a small size of 8MB; these are snapshots taken on XenServer. There is also an LV named MGT which holds metadata about the SR and the VDIs present in it. Note that all of this lives inside the SR, which is a single LUN on the backend storage.

Now XenServer can attach a LUN at the level of an SR but we want to map a LUN to a single VDI. In order to do that, we restrict an SR to contain a single VDI. Our new SR has the following LVs:

If a snapshot or clone of the LUN is taken on the backend, all the unique identifiers associated with the different entities in the LUN get cloned as well, and any attempt to attach the LUN back to XenServer will fail because of conflicting unique IDs.

Resignature and supplemental pack

In order for the cloned LUN to be re-attached, we need to resignature the unique IDs present in the LUN. The following IDs need to be resignatured:

LVM UUIDs (PV, VG, LV)

VDI UUID

SR metadata in the MGT Logical volume

We at CloudOps have developed an open-source supplemental pack which solves the resignature problem. You can find it here. The supplemental pack adds a new type of SR (relvmoiscsi), and you can use it to resignature your lvmoiscsi SRs. After installing the supplemental pack, you can resignature a clone using the following command:
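As a sketch, the invocation goes through xe sr-create with the new SR type; the parameter names below (in particular device-config:resign) and all values are assumptions for illustration, so check the pack's own documentation for exact usage:

```shell
# Resignature a cloned LUN via the relvmoiscsi SR type (sketch).
# Target address, IQN, and SCSIid are placeholders for your storage.
xe sr-create name-label=resigned-sr type=relvmoiscsi \
    device-config:target=10.0.0.10 \
    device-config:targetIQN=iqn.2010-01.com.example:target \
    device-config:SCSIid=36001405... \
    device-config:resign=true
```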

Here, instead of creating a new SR, the supplemental pack re-signatures the provided LUN and detaches it (the error is expected as we don’t actually create an SR). You can see from the error message that the SR has been re-signed successfully. Now the cloned SR can be introduced back to XenServer without any conflicts using the following commands:
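Reintroducing the resigned SR follows the standard introduce-then-plug flow; roughly (UUIDs and device-config values are placeholders):

```shell
# Introduce an SR record for the (newly resigned) UUID found on the LUN.
xe sr-introduce uuid=<new-sr-uuid> type=lvmoiscsi \
    name-label=cloned-sr content-type=user shared=true

# Create and plug a PBD so the host actually attaches the SR.
PBD=$(xe pbd-create sr-uuid=<new-sr-uuid> host-uuid=<host-uuid> \
    device-config:target=10.0.0.10 \
    device-config:targetIQN=iqn.2010-01.com.example:target \
    device-config:SCSIid=36001405...)
xe pbd-plug uuid=$PBD
```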

This supplemental pack can be used in conjunction with an external orchestrator like CloudStack or OpenStack which can manage both the storage and compute. Working with SolidFire we have implemented this functionality, available in the next release of Apache CloudStack. You can check out a preview of this feature in a screencast here.

Overview

We were interested in getting XenServer 6.5 to boot via UEFI. Leaving servers in Legacy/BIOS boot was not an option in our target environment. We still have to do the initial install with the server in Legacy BIOS mode; however, I managed to compile Xen as an EFI-bootable binary using the source and patches distributed by Citrix. With that I am able to change the server's boot mode back to UEFI and boot XenServer. Here are the steps I used to compile it.

Steps

Prepare a DDK

Prepare a build environment

Build some prerequisites

Unpack the SRPM

Compile Xen

DDK Preparation

Development will be done inside a 6.5 DDK. This is a CentOS 5.4-based Linux that has the same kernel as Dom0 and some of the required development tools.

Import the VM template per Citrix DDK developer documentation.

After importing, set the following VM options:

2 vCPUs

Increase memory to 2048MB

Resize disk image to 10GB

Add a network interface for SSH

Start the VM, set a root password, and then finalize resizing the disk by running:
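On a CentOS 5-era DDK, a typical way to grow the root filesystem after enlarging the virtual disk is the following (a sketch; the single-root-partition layout is an assumption):

```shell
# Grow the root partition to fill the enlarged disk: in fdisk, delete and
# recreate partition 1 with the same start sector, then write and reboot.
fdisk /dev/xvda        # d, n, p, 1, <defaults>, w
reboot                 # so the kernel re-reads the partition table

# Grow the ext3 filesystem to fill the enlarged partition.
resize2fs /dev/xvda1
```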

Building the Prerequisites

This page: http://xenbits.xen.org/docs/4.3-testing/misc/efi.html says that it is required to use gcc 4.5 or better and that binutils must be compiled with --enable-targets=x86_64-pep. I could not satisfy this with packages in the repositories, so I compiled and installed some requirements for gcc:
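As a sketch of the binutils piece, a build with the PE+ target enabled looks like this (the version number and install prefix are assumptions; a newer gcc is built and installed in much the same configure/make fashion):

```shell
# Build a binutils that can link the x86_64 PE+ (xen.efi) target.
tar xjf binutils-2.24.tar.bz2
cd binutils-2.24
./configure --prefix=/usr/local --enable-targets=x86_64-pep
make && make install
```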

Compile Xen

The source code and required scripts are now all under /root/rpmbuild/, so just run:

# cd ~/rpmbuild
# QA_RPATHS=$[ 0x0020 ] rpmbuild -bc SPECS/xen.spec

The -bc flag causes the process to follow the spec file and patch the source, but then stop just before running the make commands. The make commands would fail to compile with warnings about uninitialized variables being treated as errors. Fix this by changing line 45 of ~/rpmbuild/BUILD/xen-4.4.1/xen/Rules.mk to read:
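The simplest edit, consistent with the -Wno-error approach mentioned below, is to stop promoting warnings to errors on that line; a sketch (the exact original contents of line 45 are approximate):

```make
# xen/Rules.mk, line 45 -- the stock line appends -Werror to CFLAGS,
# roughly:  CFLAGS += -Werror
# Change it so uninitialized-variable warnings no longer abort the build:
CFLAGS += -Wno-error
```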

and the compile will finish successfully. I probably could have just made a quick patch to add the -Wno-error flag and allowed rpmbuild to run the full spec file, but I didn't actually need to compile xen-tools etc.; those are already compiled and installed on the XenServer installation. The only file needed is ~/rpmbuild/BUILD/xen-4.4.1/xen/xen.efi. With that in hand I created a xen.cfg file like this:
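A minimal xen.cfg following Xen's EFI configuration file format might look like this; the kernel/initrd file names, the UUID, and the use of the acpi_rsdp dom0 kernel parameter to pass the RSDP address are all assumptions for illustration:

```
[global]
default=xenserver

[xenserver]
options=console=vga dom0_mem=752M,max:752M
kernel=vmlinuz-3.10-xen root=UUID=<boot-disk-uuid> acpi_rsdp=0x<from-dmesg>
ramdisk=initrd-3.10-xen.img
```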

where the root UUID is the boot disk created during the XenServer install and the RSDP number came from running:

# dmesg | grep RSDP

I ran that in an EFI-booted live Linux environment. I found that some vendors' UEFI implementations were able to provide the RSDP during boot and some were not, so without specifying it in the xen.cfg I had trouble with things like USB peripherals.

Boot XenServer

With the xen.efi and xen.cfg in place, I was able to boot XenServer in UEFI boot mode using rEFInd. We have done extensive testing on several different servers and found no problems. I was also able to repeat the process with the source code provided by the service packs up to and including Service Pack 1. I haven't tried any further than that yet.

Editor's Note

For those of you wishing to retain Citrix commercial support status, the above procedure will convert the XenServer 6.5 host into an "unsupported configuration".

One of the tasks I was assigned was to fix the code preventing XenServer with Neutron from working properly. This configuration used to work well, but the support was broken as more and more changes were made in Neutron, and the lack of a CI environment with XenServer hid the problem. I began getting XenServer with Neutron back to a working state by following the outline in the Quantum with Grizzly blog post from a few years ago. It's important to note that with the Havana release, Quantum was renamed to Neutron, and we'll use Neutron throughout this post. During my work, I needed to debug why OpenStack instances were not obtaining IP addresses. This blog post covers the workflow I used, and I hope you'll find it helpful.

Environment

XenServer: 6.5

OpenStack: September 2015 master code

Network: ML2 plugin, OVS driver, VLAN type

Single Box installation

I had made some changes in the DevStack script to let XenServer with Neutron be installed and run properly. The following are some of the debugging processes I followed when newly launched VMs could not get an IP from the Neutron DHCP agent automatically.

Brief description of the DHCP process

When guest VMs are booting, they will try to send a DHCP request broadcast message within the same network broadcast domain and then wait for a DHCP server's reply. In OpenStack Neutron, the DHCP server, or DHCP agent, is responsible for allocating IP addresses. If VMs cannot get IP addresses, our first priority is to check whether the packets from the VMs can be received by the DHCP server.

Dump traffic in Network Node

Since I used DevStack with a single box installation, all OpenStack nodes reside in the same DomU (VM). Perform the following steps:
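The DHCP agent runs inside a network namespace, so the capture has to happen there; a sketch (the namespace and tap interface names vary per network):

```shell
# Find the DHCP agent's namespace for the network in question.
ip netns list                       # look for qdhcp-<network-uuid>

# Capture DHCP traffic on the agent's tap interface inside that namespace.
sudo ip netns exec qdhcp-<network-uuid> \
    tcpdump -i <tap-interface> -n port 67 or port 68
```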

Dump traffic in Compute Node

Meanwhile, you will definitely want to dump traffic in the OpenStack compute node, and with XenServer this is Dom0.

When a new instance is launched, a new virtual interface is created, named "vifX.Y", where X is the domain ID of the new VM and Y is the ID of the VIF defined in XAPI. Domain IDs are sequential: if the latest interface is vif20.0, the next one will most likely be vif21.0. Then you can try tcpdump -i vif21.0. Note that it may fail at first if the virtual interface hasn't been created yet, but once the virtual interface is created, you can monitor the packets. Theoretically you should see DHCP request and reply packets in Dom0, just as you see on the DHCP agent side.
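In Dom0 the capture is an ordinary tcpdump on the VM's vif, restricted to DHCP ports (interface name per the numbering above):

```shell
# Watch DHCP requests/replies on the new VM's virtual interface in Dom0.
tcpdump -i vif21.0 -n port 67 or port 68
```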

Note: If you cannot catch the packets at the instance's launch time, you can also log in to the instance via XenCenter and run ifup eth0, which will also trigger the instance to send a DHCP request.

Check DHCP request goes out of the compute node

In most cases, you should see the DHCP request packets sent out from Dom0. This means that the VM itself is OK: it has sent out its DHCP request message.

Note: Some images will keep sending DHCP requests from time to time until they get a response message. However, some images won't: they only try a few times, e.g. three times, and if they cannot get a DHCP response they won't try again. In some scenarios this causes the instance to miss its chance to send a DHCP request, which is why some people on the Internet suggest changing images when a launched instance cannot get an IP address via DHCP.

Check DHCP request arrives at the DHCP server side

When I was first testing, I didn't see any DHCP requests on the DHCP agent side. Where did the request packets go? Were they dropped? If so, what dropped them, and why?

Thinking about it a bit more, the packets must have been dropped at either L2 or L3. With this in mind, we can check one by one. For L3/L4, I don't have a firewall set up and the security group's default rule is to let all packets through, so I didn't spend much effort on this part. For L2, since we use OVS, I began by checking OVS rules. If you are not familiar with OVS, this can take some time; I certainly spent a lot of time on it before completely understanding the mechanism and the rules.

The main aim is to check all existing rules in Dom0 and DomU, and then try to find out which rule caused the packets to be dropped.

Check OVS flow rules

OVS flow rules in Network Node

To get the port information on the network bridge "br-int", execute the following in the DevStack VM:
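A sketch of the commands in the DevStack VM (DomU):

```shell
# Port numbering and state on the integration bridge.
sudo ovs-ofctl show br-int

# The flow rules currently installed on br-int.
sudo ovs-ofctl dump-flows br-int
```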

These rules in DomU look normal, so let's go on to Dom0 and try to find out more.

OVS flow rules in Compute Node

Looking at the traffic flow in picture 1, the traffic direction from VM to DHCP server is xapiX->xapiY (Dom0), then br-eth1->br-int (DomU). So maybe some rules filtered the packets at layer 2 in OVS. I suspected xapiY, though I couldn't give any specific reason why.

To determine the xapiY in your environment, execute:

xe network-list

In the results, look for the "bridge" which matches the name-label of your network. In our case, it was xapi3, so to determine the port information, execute:
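With the bridge identified, the ports and flows can be inspected in Dom0 along these lines (xapi3 per the example here):

```shell
# Port information for the bridge backing our network.
ovs-ofctl show xapi3

# The installed flow rules, including their priorities and vlan matches.
ovs-ofctl dump-flows xapi3
```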

The higher priority=4 rule is matched first. If dl_vlan=1, it rewrites the tag and then continues with normal processing, which lets the flow through.

The lower priority=2 rule is matched second, and it drops the flow. So, will the flows be dropped? If a flow doesn't have dl_vlan=1, it will definitely be dropped.

Note:

(1) dl_vlan=1 is the virtual LAN tag ID, which corresponds to the port's tag.

(2) Due to my limited understanding of OVS at the time, for a long while I didn't realize that the problem was a missing tag on the newly launched instance's port, so I didn't know to check the port's tag first. Next time we meet this problem, we can check that part first.

With this question in mind, I checked the newly launched instance's port information by running ovs-vsctl show in Dom0, which gives results like these:

Port vif16.0 really doesn't have a tag with value 1, so its traffic will be unconditionally dropped.

Note: When a new instance is launched under XenServer, it gets a virtual network interface named vifX.0, and from OVS's point of view a corresponding port is created and bound to that interface.

Check why tag is not set

The next step was to find out why the newly launched instance didn't have a tag in OVS. There were no obvious findings for a newcomer like me: just read the code over and over, make assumptions, test, and so forth. But after trying various ideas, I did find that each time I restarted neutron-openvswitch-agent (q-agt) in the compute node, the VM could get an IP if I executed the ifup eth0 command. So there must be something which is done when q-agt restarts that is not done when launching a new instance. With this information, I could focus my code inspection. Finally I found that, with XenServer, when a new instance is launched, q-agt cannot detect the newly added port, and so it never adds a tag to the corresponding port.

That left the question of why q-agt cannot detect port changes. We have a session from DomU to Dom0 to monitor port changes, which seemed not to work as expected. With this in mind, I first ran the command ovsdb-client monitor Interface name,ofport in Dom0, which produces output like this:

So the OVS monitor itself works well! There may instead be errors in the code that consumes the monitor's output. It seems I'm getting closer to the root cause :)

Finally, I found that with XenServer, our current implementation could not read the OVS monitor's output, and thus q-agt didn't know a new port had been added. Luckily, the L2 agent provides another way of detecting port changes, and we can use that instead.

Setting minimize_polling=false in the L2 agent's configuration file ensures the Agent does not rely on ovsdb-client monitor, which means that the port will be identified and the tag gets added properly!
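The setting lives in the OVS agent's configuration; a sketch (the file path is an assumption and varies by release and deployment):

```
# e.g. /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
minimize_polling = False
```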

In this case, this is all that was needed to get an IP address and everything else worked normally. I hope the process I went through in debugging this problem will be beneficial to others.

A few weeks ago, I received an invitation to participate in the first new XenServer class to be rolled out in over three years, namely CXS-300: Citrix XenServer 6.5 SP1 Administration. Those of you with good memories may recall that XenServer 6.0, on which the previous course was based, was officially released on September 30, 2011. Being an invited guest in what was to be only the third time the class had been ever held was something that just couldn’t be passed up, so I hastily agreed. After all, the evolution of the product since 6.0 has been enormous. Plus, I have been a huge fan of XenServer since first working with version 5.0 back in 2008. Shortly before the open-sourcing of XenServer in 2013, I still recall the warnings of brash naysayers that XenServer was all but dead. However, things took a very different turn in the summer of 2013 with the open-source release and subsequent major efforts to improve and augment product features. While certain elements were pulled and restored and there was a bit of confusion about changes in the licensing models, things have stabilized and all told, the power and versatility of XenServer with the 6.5 SP1 release is at a level now some thought it would never reach.

FROM 6.0 TO 6.5 – AND BEYOND

XenServer (XS for short) 6.5 SP1 made its debut on May 12, 2015. The feature set and changes are – as always – incorporated within the release notes. There are a number of changes of note that include an improved hotfix application mechanism, a whole new XenCenter layout (since 6.5), increased VM density, more guest OS support, a 64-bit kernel, the return of workload balancing (WLB) and the distributed virtual switch controller (DVSC) appliance, in-memory read caching, and many others. Significant improvements have been made to storage and network I/O performance and overall efficiency. XS 6.5 was also a release that benefited significantly from community participation in the Creedence project and the SP1 update builds upon this.

One notable point is that XenServer has been found to now host more XenDesktop/XenApp (XD/XA) instances than any other hypervisor (see this reference). And, indeed, when XenServer 6.0 was released, a lot of the associated training and testing on it was in conjunction with Provisioning Services (PVS). Some users, however, discovered XenServer long before this as a perfectly viable hypervisor capable of hosting a variety of Linux and Windows virtual machines, without having even given thought to XenDesktop or XenApp hosting. For those who first became familiar with XS in that context, the added course material covering Provisioning Services had relatively little to do with XenServer functionality as such. Some viewed PVS as an overly emphasized component of the course and exam. In this new course, I am pleased to say that XS's original role as a versatile hypervisor is where the emphasis now lies. XD/XA is of course discussed, but the many features fundamental to XS itself are what the course focuses on, and it does that well.

COURSE MATERIALS: WHAT’S INCLUDED

The new “mission” of the course from my perspective is to focus on the core product itself and not only understand its concepts, but to be able to walk away with practical working knowledge. Citrix puts it that the course should be “engaging and immersive”. To that effect, the instructor-led course CXS-300 can be taken in a physical classroom or via remote GoToMeeting (I did the latter) and incorporates a lecture presentation, a parallel eCourseware manual plus a student exercise workbook (lab guide) and access to a personal live lab during the entire course. The eCourseware manual serves multiple purposes, providing the means to follow along with the instructor and later enabling an independent review of the presented material. It adds a very nice feature of providing an in-line notepad for each individual topic (hence, there are often many of these on a page) and these can be used for note taking and can be saved and later edited. In fact, a great takeaway of this training is that you are given permanent access to your personalized eCourseware manual, including all your notes.

The course itself is well organized; there are so many components to XenServer that five days works out, in my opinion, to be about right – partly because question-and-answer sessions with the instructor often take up more time than one might guess, and partly because participants may already have some familiarity with XS or another hypervisor, making it possible to go into added depth in some areas. There will always need to be some flexibility depending on the level of students in any particular class.

A very strong point of the course is the set of diagrams and illustrations that are incorporated, some of which are animated. These complement the written material very well, and the visual reinforcement of the subject matter is very beneficial. Below is an example, illustrating a high availability (HA) scenario:

The course itself is divided into a number of chapters that cover the whole range of XS features, reinforced by some in-line Q&A examples in the eCourseware manual and by related lab exercises. Included as part of the course are not only important standard components, such as HA and XenMotion, but also some that require plugins or advanced licenses, such as workload balancing (WLB), the distributed virtual switch controller (DVSC) appliance, and in-memory read caching. The immediate hands-on lab exercises in each chapter on the just-discussed topics are a very strong point of the course, and the majority of exercises are really well designed to allow putting the material directly to practical use. For those who already have some familiarity with XS and are able to complete the assignments quickly, the lab environment itself offers a great sandbox in which to experiment. Most components can readily be re-created if need be, so one can afford to be somewhat adventurous.

The lab, while relying heavily on the XenCenter GUI for most of the operations, does make a fair amount of use of the command line interface (CLI) for some operations. This is a very good thing for several reasons. First off, one may not always have access to XenCenter and knowing some essential commands is definitely a good thing in such an event. The CLI is also necessary in a few cases where there is no equivalent available in XenCenter. Some CLI commands offer some added parameters or advanced functionality that may again not be available in the management GUI. Furthermore, many operations can benefit from being scripted and this introduction to the CLI is a good starting point. For Windows aficionados, there are even some PowerShell exercises to whet their appetites, plus connecting to an Active Directory server to provide role-based access control (RBAC) is covered.

THE INSTRUCTOR

So far, the materials and content have been the primary points of discussion. However, what truly can make or break a class is the instructor. The class happened to be quite small, with most individuals attending remotely. Attendees were in fact from four different countries in different time zones, making it a very early start for some and a very late day for others. Roughly half of those participating in the class were not native English speakers, though all had admirable skills in both English and some form of hypervisor administration. Being able to keep up a common general pace allowed the class to flow exceptionally well. I was impressed with the overall abilities and astuteness of each and every participant.

The instructor, Jesse Wilson, was first class in many ways. First off, knowing the material and being able to present it well are primary prerequisites. But above and beyond that was his ability to field questions related to the topic at hand and even to go off onto relevant tangential material and be able to juggle all of that and still make sure the class stayed on schedule. Both keeping the flow going and also entertaining enough to hold students’ attention are key to holding a successful class. When elements of a topic became more of a debatable issue, he was quick to not only tackle the material in discussion, but to try this out right away in the lab environment to resolve it. The same pertained to demonstrating some themes that could benefit from a live demo as opposed to explaining them just verbally. Another strong point was his adding his own drawings to material to further clarify certain illustrations, where additional examples and explanations were helpful.

SUMMARY

All told, I found the course well structured, very relevant to the product and the working materials to be top notch. The course is attuned to the core product itself and all of its features, so all variations of the product editions are covered.

Positive points:

Good breadth of material

High-quality eCourseware materials

Well-presented illustrations and examples in the class material

Q&A incorporated into the eCourseware book

Ability to save course notes and permanent access to them

Relevant lab exercises matching the presented material

Real-life troubleshooting (nothing ever runs perfectly!)

Excellent instructor

Desiderata:

More “bonus” lab materials for those who want to dive deeper into topics

More time spent on networking and storage

A more responsive lab environment (which was slow at times)

More coverage of complex Storage XenMotion cases in the lecture and lab

In short, this is a class that fulfills the needs of anyone from just learning about XenServer to even experienced administrators who want to dive more deeply into some of the additional features and differences that have been introduced in this latest XS 6.5 SP1 release. CXS-300: Citrix XenServer 6.5 SP1 Administration represents a makeover in every sense of the word, and I would say the end result is truly admirable.

Administering any technology can be both fun and challenging at times. For many, the fun part is designing a new deployment while for others the hardware selection process, system configuration and tuning and actual deployment can be a rewarding part of being an SRE. Then the challenging stuff hits where the design and deployment become a real part of the everyday inner workings of your company and with it come upgrades, failures, and fixes. For example, you might need to figure out how to scale beyond the original design, deal with failed hardware or find ways to update an entire data center without user downtime. No matter how long you've been working with a technology, the original paradigms often do change, and there is always an opportunity to learn how to do something more efficiently.

That's where a project JK Benedict and I have been working on with the good people of O'Reilly Media comes in. The idea is a simple one. We wanted a reference guide which would contain valuable information for anyone using XenServer - period. If you are just starting out, there would be information to help you make that first deployment a successful one. If you are looking at redesigning an existing deployment, there are valuable time-saving nuggets of info, too. If you are a longtime administrator, you would find some helpful recipes to solve real problems that you may not have tried yet. We didn't focus on long theoretical discussions, and we've made sure all content is relevant in a XenServer 6.2 or 6.5 environment. Oh, and we kept it concise because your time matters.

I am pleased to announce that attendees of OSCON will be able to get their hands on a preview edition of the upcoming XenServer Administrators Handbook. Not only will you be able to thumb through a copy of the preview book, but I'll have a signing at the O'Reilly booth on Wednesday July 22nd at 3:10 PM. I'm also told the first 25 people will get free copies, so be sure to camp out ;)

Now of course everyone always wants to know which animal gets featured on the book cover. As you can see below, we have a bird. Not just any bird, mind you, but a xenops. I didn't do anything to steer O'Reilly towards this, but I find it very cool that we have an animal whose name also evokes a very core component in XenServer: the xenopsd. For me, that's a clear indication we've created the appropriate content, and I hope you'll agree.

An important consideration when planning a deployment of VMs on XenServer is around the sizing of your storage repositories (SRs). The question above is one I often hear. Is the performance acceptable if you have more than a handful of VMs in a single SR? And will some VMs perform well while others suffer?

In the past, XenServer's SRs didn't always scale too well, so it was not always advisable to cram too many VMs into a single LUN. But all that changed in XenServer 6.2, allowing excellent scalability up to very large numbers of VMs. And the subsequent 6.5 release made things even better.

The following graph shows the total throughput enjoyed by varying numbers of VMs doing I/O to their VDIs in parallel, where all VDIs are in a single SR.

In XenServer 6.1 (blue line), a single VM would experience a modest 240 MB/s. But, counter-intuitively, adding more VMs to the same SR would cause the total to fall, reaching a low point around 20 VMs achieving a total of only 30 MB/s – an average of only 1.5 MB/s each!

On the other hand, in XenServer 6.5 (red line), a single VM achieves 600 MB/s, and it only requires three or four VMs to max out the LUN's capabilities at 820 MB/s. Crucially, adding further VMs no longer causes the total throughput to fall; it remains constant at the maximum rate.

And how well distributed was the available throughput? Even with 100 VMs, the available throughput was spread very evenly -- on XenServer 6.5 with 100 VMs in a LUN, the highest average throughput achieved by a single VM was only 2% greater than the lowest. The following graph shows how consistently the available throughput is distributed amongst the VMs in each case:

Specifics

Host: Dell R720 (2 x Xeon E5-2620 v2 @ 2.1 GHz, 64 GB RAM)

SR: Hardware HBA using FibreChannel to a single LUN on a Pure Storage 420 SAN

VMs: Debian 6.0 32-bit

I/O pattern in each VM: 4 MB sequential reads (O_DIRECT, queue-depth 1, single thread). The graph above has a similar shape for smaller block sizes and for writes.
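The per-VM workload described above could be expressed as a fio job file along these lines. This is a sketch only: the filename is a placeholder for the test VDI as seen inside the guest, and the runtime is my assumption rather than the value used in the benchmark.

```ini
; Hypothetical fio job approximating the per-VM I/O pattern above
[vm-seq-read]
filename=/dev/xvdb   ; placeholder: the VDI under test as seen in the guest
rw=read              ; sequential reads
bs=4M                ; 4 MB requests
direct=1             ; O_DIRECT: bypass the page cache
ioengine=libaio
iodepth=1            ; queue depth 1
numjobs=1            ; single thread
time_based=1
runtime=60           ; assumed measurement interval
```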

INTRODUCTION

Back in August 2014 I went to the Xen Project Developer Summit in Chicago (IL) and presented a graph that caused a few faces to go "ahn?". The graph was meant to show how well XenServer 6.5 storage throughput could scale over several guests. For that, I compared 10 fio threads running in dom0 (mimicking 10 virtual disks) with 10 guests running 1 fio thread each. The result: the aggregate throughput of the virtual machines was actually higher.

In XenServer 6.5 (used for those measurements), the storage traffic of 10 VMs corresponds to 10 tapdisk3 processes doing I/O via libaio in dom0. My measurements used the same disk areas (raw block-based virtual disks) for each fio thread or tapdisk3. So how can 10 tapdisk3 processes possibly be faster than 10 fio threads also using libaio and also running in dom0?

At the time, I hypothesised that the lack of indirect I/O support in tapdisk3 was causing requests larger than 44 KiB (the maximum supported request size in Xen's traditional blkif protocol) to be split into smaller requests. And that the storage infrastructure (a Micron P320h) was responding better to a higher number of smaller requests. In case you are wondering, I also think that people thought I was crazy.

TRADITIONAL STORAGE AND MERGES

For several years operating systems have been optimising storage I/O patterns (in software) before issuing them to the corresponding disk drivers. In Linux, this has been achieved via elevator schedulers and the block layer. Requests can be reordered, delayed, prioritised and even merged into a smaller number of larger requests.

Merging requests has been around for as long as I can remember. Everyone understands that fewer requests mean less overhead and that storage infrastructures respond better to larger requests. As a matter of fact, the graph above, which shows throughput as a function of request size, is proof of that: bigger requests mean higher throughput.

It wasn't until 2010 that a proper means to fully disable request merging came into play in the Linux kernel. Alan Brunelle showed a 0.56% throughput improvement (and less CPU utilisation) by not trying to merge requests at all. I wonder if he questioned whether splitting requests could actually be even more beneficial.

SPLITTING I/O REQUESTS

Given the results I have seen in my 2014 measurements, I would like to take this concept a step further. On top of not merging requests, let's forcibly split them.

The rationale behind this idea is that some drives today will respond better to a higher number of outstanding requests. The Micron P320h performance testing guide says that it "has been designed to operate at peak performance at a queue depth of 256" (page 11). Similar documentation from Intel uses a queue depth of 128 to indicate peak performance of its NVMe family of products.

But it is one thing to say that a drive requires a large number of outstanding requests to perform at its peak. It is a different thing to say that a batch of 8 requests of 4 KiB each will complete quicker than one 32 KiB request.

MEASUREMENTS AND RESULTS

So let's put that to the test. I wrote a little script to measure the random read throughput of two modern NVMe drives when facing workloads with varying block sizes and I/O depth. For block sizes from 512 B to 4 MiB, I am particularly interested in analysing how these disks respond to larger "single" requests in comparison to smaller "multiple" requests. In other words, what is faster: 1 outstanding request of X bytes or Y outstanding requests of X/Y bytes?
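In other words, the comparison holds the total amount of outstanding data constant while varying how it is split. The candidate workloads can be enumerated with a short Python sketch (the function name is mine; the fio invocations that would actually consume these pairs are omitted):

```python
# Enumerate (block_size, queue_depth) pairs that keep the total amount
# of outstanding data constant: 1 request of X bytes, 2 of X/2, 4 of X/4...
def split_workloads(outstanding_bytes):
    """Yield (block_size, queue_depth) pairs with a constant product."""
    depth = 1
    # stop at 512 B, the smallest block size tested
    while outstanding_bytes // depth >= 512:
        yield (outstanding_bytes // depth, depth)
        depth *= 2

# 512 KiB of outstanding data, one of the totals examined in this article
for bs, qd in split_workloads(512 * 1024):
    print(f"bs={bs} B, iodepth={qd}")
```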

My test environment consists of a Dell PowerEdge R720 (Intel E5-2643 v2 @ 3.5 GHz, 2 sockets, 6 cores/socket, HT enabled) with 64 GB of RAM, running 64-bit Debian Jessie with the Linux 4.0.4 kernel. My two disks are an Intel P3700 (400 GB) and a Micron P320h (175 GB). Fans were set to full speed and the power profiles are configured for OS Control, with a performance governor in place.

There are several ways of looking at the results. I believe it is always worth starting with a broad overview including everything that makes sense. The graphs below contain all the data points for each drive. Keep in mind that the "x" axis represents Block Size (in KiB) over Queue Depth.

While the Intel P3700 is faster overall, both drives share a common trait: for a certain amount of outstanding data, throughput can be significantly higher if that data is split over several in-flight requests (instead of a single large request). Because this workload consists of random reads, this is a characteristic not seen in spinning disks (where seek time would negatively affect the total throughput of the workload).

To make this point clearer, I have isolated the workloads involving 512 KiB of outstanding data on the P3700 drive. The graph below shows that if a workload randomly reads 512 KiB of data one request at a time (queue depth = 1), the throughput will be just under 1 GB/s. If, instead, the workload reads 8 KiB per request with 64 outstanding requests at a time, the throughput roughly doubles (to just under 2 GB/s).

CONCLUSIONS

Storage technologies are constantly evolving. At this point in time, it appears that hardware is evolving much faster than software. In this post I have discussed a paradigm of workload optimisation (request merging) that perhaps no longer applies to modern solid state drives. As a matter of fact, I am proposing that the exact opposite (request splitting) should be done in certain cases.

Traditional spinning disks have always responded better to large requests. Such workloads reduced the overhead of seek times where the head of a disk must roam around to fetch random bits of data. In contrast, solid state drives respond better to parallel requests, with virtually no overhead for random access patterns.

Virtualisation platforms and software-defined storage solutions are perfectly placed to take advantage of such paradigm shifts. By understanding the hardware infrastructure they sit on top of, as well as the workload patterns of their users (e.g. virtual desktops), requests can be easily manipulated to better exploit system resources.

Last week a vulnerability in QEMU was reported under the marketing name "VENOM", though it is more correctly known as CVE-2015-3456. Citrix has released a security bulletin covering CVE-2015-3456, which has been updated to include hotfixes for XenServer 6.5, 6.5 SP1 and XenServer 6.2 SP1.

Learning about new XenServer hotfixes

When a hotfix is released for XenServer, it will be posted to the Citrix support web site. You can receive alerts from the support site by registering at http://support.citrix.com/profile/watches and following the instructions there. You will need to create an account if you don't have one, but the account is completely free. Whenever a security hotfix is released, there will be an accompanying security advisory in the form of a CTX knowledge base article for it, and those same KB articles will be linked on xenserver.org in the download page.

Patching XenServer hosts

XenServer admins are encouraged to schedule patching of their XenServer installations at their earliest opportunity. Please note that this bulletin does impact XenServer 6.2 hosts; to apply the patch, all XenServer 6.2 hosts will first need to be patched to Service Pack 1, which can be found on the XenServer download page.

Wait, another XenServer release? Yes folks, there is no question we've been very busy improving upon XenServer over the past year, and the pace is quite fast. In case you missed it, we released XenServer 6.5 in January (formerly known as Creedence). Just a few weeks ago I announced and made available pre-release binaries for Dundee, and now we've just announced availability at Citrix Synergy of the first service pack for XenServer 6.5. Exciting times indeed.

What's in XenServer 6.5 SP1

I could bury the lede with talk about hotfixes and roll-ups (more on that later), but the real value of SP1 is in the increased capabilities. Here are the lead items for this service pack:

The Docker work we previewed in January at FOSDEM and later on xenserver.org is now available. If you've been using xscontainer in preview form, it should upgrade fine, but you should back up any VMs first. Completion of the Docker work also means that CoreOS 633.1.0 is now an officially supported operating system with SP1. Containers deployed in Ubuntu 14.04 and in RHEL, CentOS, and Oracle Enterprise Linux 7 and higher are supported.

Adoption of LTS (long-term support) guest support. Historically, users of a guest operating system had to wait for XenServer to officially support point releases in order to remain in a supported configuration. Starting with SP1, all supported operating systems can be upgraded within their major version and still retain "supported" status from Citrix support. For example, if a CentOS 6.6 VM is deployed, and the CentOS project subsequently releases CentOS 6.7, then upgrading that VM to CentOS 6.7 requires no changes to XenServer in order to remain a supported configuration.

Intel GVT-d support for GPU pass-through to Windows guests. This allows users of Xeon E3 Haswell processors to use the embedded GPU in those processors within a Windows guest using standard Intel graphics drivers.

NVIDIA GPU pass-through to Linux VMs, allowing OpenGL and CUDA support for these operating systems.

Installation of supplemental packs can now be performed through XenCenter Update. Note that since driver disks are just a special case of a supplemental pack, driver updates, or installation of drivers not required for host installation, can now also be performed using this mechanism.

Virtual machine density has been increased to 1000. What this means is that if you have a server which can reasonably be expected to run 1000 VMs of a given operating system, then using XenServer you can do so. No changes were made to the supported hardware configuration to accommodate this change.

Hotfix process

As with all XenServer service packs, XenServer 6.5 SP1 contains a roll-up of all existing hotfixes for XenServer 6.5. This means that when provisioning a new host, your first post-installation step should be to apply SP1. It's also important to call out that when a service pack is released, hotfixes for the prior service pack level will only be created for a further six months. In this case, hotfixes for XenServer 6.5 will only be created through November 12th; after that point, hotfixes will only be created for XenServer 6.5 SP1. To help the development teams streamline that transition, any defects raised for XenServer 6.5 in bugs.xenserver.org should be raised against 6.5 SP1 and not base 6.5.

INTRODUCTION

There are a number of ways to connect storage devices to XenServer hosts and pools, including local storage, HBA SAS and Fibre Channel, NFS and iSCSI. With iSCSI, there are a number of implementation variations, including support for multipathing with both active/active and active/passive configurations, plus the ability to support so-called “jumbo frames”, where the MTU is increased from 1500 to typically 9000 to optimize frame transmissions. One of the lesser-known and somewhat esoteric iSCSI options available on many modern iSCSI-based storage devices is Asymmetric Logical Unit Access (ALUA), a protocol that has been around for a decade and is all the more intriguing for its ability to be used not only with iSCSI, but also with Fibre Channel storage. The purpose of this article is to clarify and outline how ALUA can now be used more flexibly with iSCSI on XenServer 6.5.

HISTORY

ALUA support in XenServer goes back to XenServer 5.6, initially only for Fibre Channel devices. Support for iSCSI ALUA connectivity started with XenServer 6.0 and was initially limited to specific ALUA-capable devices, which included the EMC Clariion and NetApp FAS as well as the EMC VMAX and VNX series. Each device required specific multipath.conf file configurations to properly integrate with the server used to access them, XenServer being no exception. The upstream XenServer code also required customizations. The "How to Configure ALUA Multipathing on XenServer 6.x for Enterprise Arrays" article CTX132976 (March 2014, revised March 2015) currently only discusses ALUA support through XenServer 6.2 and only for specific devices, stating: “Most significant is the usability enhancement for ALUA; for EMC™ VNX™ and NetApp™ FAS™, XenServer will automatically configure for ALUA if an ALUA-capable LUN is attached”.

The XenServer 6.5 Release Notes announced that XenServer will automatically configure connections to the aforementioned documented devices, and that it now runs an updated device mapper multipath (DMMP) version, 0.4.9-72. This rekindled my interest in ALUA connectivity, and after some research and discussions with Citrix and Dell about support, it appeared this might now be possible specifically for the Dell MD3600i units we have used on XenServer pools for some time. What is not stated in the release notes is that XenServer 6.5 now has the ability to connect generically to a large number of ALUA-capable storage arrays; this is discussed in detail later. It is also of note that MPP-RDAC support is no longer available in XenServer 6.5 and DMMP is the exclusive multipath mechanism supported, in part because of support and vendor-specific issues (see, for example, the XenServer 6.5 Release Notes or this document from Dell, Inc.).

But first, how are ALUA connections even established? And perhaps of greater interest, what are the benefits of ALUA in the first place?

ALUA DEFINITIONS AND SETTINGS

As the name suggests, ALUA is intended to optimize storage traffic by making use of optimized paths. With multipathing and multiple controllers, there are a number of paths a packet can take to reach its destination. With two controllers on a storage array and two NICs dedicated to iSCSI traffic on a host, there are four possible paths to a storage Logical Unit Number (LUN). On the XenServer side, LUNs are then associated with storage repositories (SRs). ALUA recognizes that, once an initial path is established to a LUN, any multipathing activity destined for that same LUN is better served if routed through the same storage array controller, and it attempts to do so as much as possible, unless of course a failure forces the connection to take an alternative path. ALUA connections fall into five self-explanatory categories (listed along with their associated hex codes):

Active/Optimized : x0

Active/Non-Optimized : x1

Standby : x2

Unavailable : x3

Transitioning : xf
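These access states come back from an RTPG query, so they can be decoded with a minimal Python lookup. This is an illustrative sketch: the masking of the low nibble and the "Reserved" fallback for the remaining codes are my additions, following SPC-3.

```python
# ALUA asymmetric access states and their hex codes, as listed above
ALUA_STATES = {
    0x0: "Active/Optimized",
    0x1: "Active/Non-Optimized",
    0x2: "Standby",
    0x3: "Unavailable",
    0xF: "Transitioning",
}

def alua_state(code):
    """Decode the low nibble of an RTPG asymmetric access state byte."""
    return ALUA_STATES.get(code & 0xF, "Reserved")

print(alua_state(0x1))  # prints: Active/Non-Optimized
```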

For ALUA to work, an active/active storage path is required, and furthermore an asymmetrical active/active mechanism is involved. The advantage of ALUA comes from less fragmentation of packet traffic: wherever possible, both paths of the multipath connection are routed via the same storage array controller, as the extra path through a different controller is less efficient. It is very difficult to locate specific metrics on the overall gains, but hints of up to 20% can be found in on-line articles (e.g., this openBench Labs report on Nexsan), hence this is not an insignificant amount and potentially more significant than gains reached by implementing jumbo frames. It should be noted that the debate continues to this day regarding the benefits of jumbo frames and to what degree, if any, they are beneficial. Among numerous articles to be found are: The Great Jumbo Frames Debate from Michael Webster, Jumbo Frames or Not - Purdue University Research, Jumbo Frames Comparison Testing, and MTU Issues from ESNet. Each installation environment will have its idiosyncrasies, and it is best to conduct tests within one's unique configuration to evaluate such options.

The SCSI Architecture Model defines the SCSI Primary Commands (SPC-3) used to determine paths. The mechanism by which this is accomplished is target port group support (TPGS). The characteristics of a path can be read via an RTPG command or set with an STPG command. With ALUA, non-preferred controller paths are used only for fail-over purposes. This is illustrated in Figure 1, where an optimized network connection is shown in red, taking advantage of routing all the storage network traffic via Node A (e.g., storage controller module 0) to LUN A (e.g., 2).

Figure 1. ALUA connections, with the active/optimized paths to Node A shown as red lines and the active/non-optimized paths shown as dotted black lines.

Various SPC commands are provided as utilities within the sg3_utils (SCSI generic) Linux package.

There are other ways to make such queries, for example, VMware has a “esxcli nmp device list” command and NetApp appliances support “igroup” commands that will provide direct information about ALUA-related connections.

Let us first examine a generic Linux server containing ALUA support connected to an ALUA-capable device. In general, note that this will entail a specific configuration to the /etc/multipath.conf file and typical entries, especially for some older arrays or XenServer versions, will use one or more explicit configuration parameters such as:

hardware_handler "1 alua"

prio "alua"

path_checker "alua"
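Assembled into a device stanza, such a configuration might look like the sketch below. The vendor and product strings are placeholders, and the path_grouping_policy and failback settings are typical companions rather than requirements; always consult your array vendor's documentation for the exact entry:

```
device {
        vendor                  "VENDOR"     # placeholder: match your array
        product                 "PRODUCT"    # placeholder: match your array
        path_grouping_policy    group_by_prio
        hardware_handler        "1 alua"
        prio                    "alua"
        path_checker            "alua"
        failback                immediate
}
```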

Consulting the Citrix knowledge base article CTX132976, we see for example the EMC Corporation DGC Clariion device makes use of an entry configured as:

To investigate the multipath configuration in more detail, we can make use of the TPGS setting. The TPGS setting can be read using the sg_rtpg command. By using multiple “v” flags to increase verbosity and “d” to specify the decoding of the status code descriptor returned for the asymmetric access state, we might see something like the following for one of the paths:

Noting the boldfaced characters above, we see here specifically that target port ID 1 is an active/non-optimized ALUA path, both from the “target port group id” line as well as from the “status code”. We also see there are two paths identified, with target port IDs 1,1 and 1,2.

There is a slew of additional "sg" commands, such as sg_inq, often used with the flag "-p 0x83" to get the VPD (vital product data) page of interest, sg_rdac, and so on. In general, sg_inq will return TPGS > 0 for devices that support ALUA; more on that later in this article. One additional command of particular interest, because not all storage arrays in fact support target port group queries (more on this important point later as well!), is sg_vpd (SG vital product data fetcher), as it does not require TPG access. The base syntax of interest here is:

sg_vpd -p 0xc9 -hex /dev/…

where “/dev/…” should be the full path to the device in question. Looking at an example of the output of a real such device, we get:

If one reads the source code for various device handlers (see the multipath tools hardware table for an extensive list of hardware profiles, as well as the Linux SCSI device handler regarding how the data are interpreted), one can determine that the value of interest here is avte_cvp, part of the RDAC c9_inquiry structure and the sixth hex value in the output. It indicates whether the connected device is using ALUA (if the value shifted right five bits, ANDed with 0x1, is 1; known in the RDAC world as IOSHIP mode), AVT, or Automatic Volume Transfer mode (if the value shifted right seven bits, ANDed with 0x1, is 1), or otherwise defaults in general to basic RDAC (legacy) mode. In the case above we see "61" returned (indicated in boldface), so (0x61 >> 5) & 0x1 equals 1, and hence the above connection is indeed an ALUA RDAC-based connection.
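The bit tests just described can be captured in a few lines of Python. A sketch (the function name is mine; the ordering of the checks follows the description above):

```python
# Decode the avte_cvp byte (the sixth hex value in the sg_vpd -p 0xc9
# output) to determine which mode an RDAC-based array is using.
def rdac_mode(avte_cvp):
    """Return the connection mode encoded in the RDAC C9 inquiry byte."""
    if (avte_cvp >> 5) & 0x1:
        return "IOSHIP (ALUA)"
    if (avte_cvp >> 7) & 0x1:
        return "AVT"
    return "RDAC (legacy)"

print(rdac_mode(0x61))  # prints: IOSHIP (ALUA)
```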

I will revisit sg commands once again later on. Do note that the sg3_utils package is not installed on stock XenServer distributions, and as with any external package, installing it may void official Citrix support.

MULTIPATH CONFIGURATIONS AND REPORTS

In addition to all the information that various sg commands provide, there is also an abundance of information available from the standard multipath command. We saw a sample multipath.conf file earlier, and at least with many standard Linux OS versions and ALUA-capable arrays, information on the multipath status can be more readily obtained using stock multipath commands.

For example, on an ALUA-enabled connection we might see output similar to the following from a "multipath -ll" command (there will be a number of variations in output, depending on the version, verbosity and implementation of the multipath utility):

Recalling the device sde from the section above, note that it falls under a path with a lower priority of 10, indicating it is part of an active, non-optimized network connection vs. 50, which indicates being in an active, optimized group; a priority of “1” would indicate the device is in the standby group. Depending on what mechanism is used to generate the priority values, be aware that these priority values will vary considerably; the most important point is that whatever path has a higher “prio” value will be the optimized path. In some newer versions of the multipath utility, the string “hwhandler=1 alua” shows clearly that the controller is configured to allow the hardware handler to help establish the multipathing policy as well as that ALUA is established for this device. I have read that the path priority will be elevated to typically a value of between 50 and 80 for optimized ALUA-based connections (cf. mpath_prio_alua in this Suse article), but have not seen this consistently.

The multipath.conf file itself has traditionally needed tailoring to each specific device. It is particularly convenient, however, that a generic configuration is now possible for a device that makes use of the internal hardware handler, is rdac-based, and can auto-negotiate an ALUA connection. The italicized entries below represent the specific device itself, but others should now work using this generic sort of connection:

THE CURIOUS CASE OF DELL MD32XX/36XX ARRAY CONTROLLERS

The LSI controllers incorporated into Dell’s MD32xx and MD36xx series of iSCSI storage arrays represent an unusual and interesting case. As promised earlier, we will get back to looking at the sg_inq command, which queries a storage device for several pieces of information, including TPGS. Typically, an array that supports ALUA will return a value of TPGS > 0, for example:

Highlighted in boldface, we see in the case above that TPGS is reported to have a value of 1. The MD36xx has supported ALUA since RAID controller firmware 07.84.00.64 and NVSRAM N26X0-784890-904; however, even with that (or a newer) revision level, sg_inq returns the following for this particular storage array:

Various attempts to modify the multipath.conf file to try to force TPGS to appear with any value greater than zero all failed. Above all, it seemed that without access to the TPGS command, there was no way to query the device for ALUA-related information. Furthermore, the command mpath_prio_alua and similar commands appear to have been deprecated in newer versions of the device-mapper-multipath package, and so offer no help.

This proved to be a major roadblock in making any progress. Ultimately it turned out that the key to looking for ALUA connectivity in this particular case comes oddly from ignoring what TPGS reports, and rather focusing on what the MD36xx controller is doing. What is going on here is that the hardware handler is taking over control and the clue comes from the sg_vpd output shown above. To see how a LUN is mapped for these particular devices, one needs to hunt back through the /var/log/messages file for entries that appear when the LUN was first attached. To investigate this for the MD36xx array, we know it uses the internal “rdac” connection mechanism for the hardware handler, so a Linux grep command for “rdac” in the /var/log/messages file around the time the connection was established to a LUN should reveal how it was established.

Sure enough, if one looks at a case where the connection is known to not be making use of ALUA, you might see entries such as these:

In contrast, an ALUA-based connection to LUNs shown below on an MD3600i that has new enough firmware to support ALUA and using an appropriate client that also supports ALUA and has a properly configured entry in the /etc/multipath.conf file will instead show the IOSHIP connection mechanism (see p. 124 of this IBM System Storage manual for more on I/O Shipping):

The even better news is that ALUA is not only functional in XenServer 6.5 but should, in fact, now work with a large number of ALUA-capable storage arrays, both those with custom configuration needs and potentially many that may work generically. Another surprising find was that for the MD3600i arrays tested, even the "stock" version of the MD36xxi multipath configuration entry provided with XenServer 6.5 creates ALUA connections. The reason for this is that the hardware handler is being used consistently, provided no specific profile overrides are intercepted, and so the storage device is primarily doing the negotiation itself instead of being driven by the file-based configuration. This is what made the determination of ALUA connectivity more difficult, namely that the TPGS setting was never changed from zero and could consequently not be used to query for the group settings.

CONCLUSIONS

First off, it is really nice to know that many modern storage devices support ALUA and that XenServer 6.5 now provides an easier means to leverage this protocol. It is also a lesson that documentation can be hard to find and, in some cases, in need of updating to reflect the current state. Individual vendors will generally provide specific instructions regarding iSCSI connectivity, and these should of course be followed. Experimentation is best carried out on non-production servers, where a major faux pas will not have catastrophic consequences.

To me, this was also a lesson in persistence as well as an opportunity to share the curiosity and knowledge among a number of individuals who were helpful throughout this process. Above all, among many who deserve thanks, I would like to thank in particular Justin Bovee from Dell and Robert Breker of Citrix for numerous valuable conversations and information exchanges.

Having just released Creedence as XenServer 6.5, 2015 has definitely started off with a bang. In 2014 the focus for XenServer was on a platform refresh, and creating a solid platform for future work. For me, 2015 is about enabling the ecosystem to be successful with XenServer, and that's where FOSDEM comes in. For those unfamiliar with FOSDEM, it's the Free and Open Source Developers European Meeting, and many of the most influential projects will have strong representation. Many of those same projects have strong relationships with other hypervisors, but not necessarily with XenServer. For those projects, XenServer needs to demonstrate its relevance, and I hope through a set of demos within the Xen Project stand to provide exactly that.

Demo #1 - Provisioning Efficiency

XenServer is a hypervisor, and as such is first and foremost a provisioning target. That means it needs to work well with provisioning solutions and their respective template paradigms. Some of you may have seen me present at various events on the topic of hypervisor selection in various cloud provisioning tools. One of the core workflow items for all cloud solutions is the ability to take a template and provision it consistently to the desired hypervisor. In Apache CloudStack with XenServer, for example, those templates are VHD files. Unfortunately, XenServer by default exports XVA files, not native VHD, which makes the template process for CloudStack needlessly difficult.

This is where a technology like Packer comes in. Some of the XenServer engineers have been working on a Packer integration to support Vagrant. That's cool, but I'm also looking at this from the perspective of other tools and so will be showing Packer creating a CentOS 7 template which could be used anywhere. That template would then be provisioned and as part of the post-provisioning configuration management become a "something" with the addition of applications.

Demo #2 - Application Containerization

Once I have my template from Packer and have provisioned it onto a XenServer 6.5 host, the next step is application management. For this I'm going to use Ansible to personalize the VM and to add in some applications which are containerized by Docker. There has been some discussion in the marketplace about containers replacing VMs, but I really see proper use of containers as efficient use of VMs, not as a replacement for them. Proper container usage is really proper application management, and understanding when to use which technology. For me this means that a host is a failure point which contains VMs. A VM represents a security and performance wrapper for a given tenant and their applications. Within a VM, applications are provisioned, and where containerization of the applications makes sense, it should be used.

System administrators should be able to directly manage each of these three "containers" from the same pane of glass, and as part of my demo, I'll be showing just that using XenCenter. XenCenter has a simple GUI from which host and VM level management can be performed, and which is in the process of being extended to include Dockerized containers.

With this as the demo backdrop, I encourage anyone planning on attending FOSDEM to please stop by and ask about the work we've done with Creedence and also where we're thinking of going. If you're a contributor to a project and would like to talk more about how integrating with XenServer might make sense, either for your project or as something we should be thinking about, please do feel free to reach out to me. Of course if you're not planning on being at FOSDEM, but know folks who are, please do feel free to have them seek me out. We want XenServer to be a serious contender in every data center, but if we don't know about issues facing your favorite projects, we can't readily work to resolve them.

By the way, if you'd like to plan anything around FOSDEM, please either comment on this blog, or contact me on Twitter as @XenServerArmy.

Commercial Support

Commercial support is available from Citrix and many of its partners. A commercial support contract is appropriate if you're running XenServer in a production environment, particularly if minimizing downtime is a critical component of your SLA. It's important to note that commercial support is only available if the deployment follows the Citrix deployment guidelines, uses third party components from the Citrix Ready Marketplace, and is operated in accordance with the terms of the commercial EULA. Of course, since your deployment might not precisely follow these guidelines, commercial support may not be able to resolve all issues, and that's where community support comes in.

Community Support

Community support is available from the Citrix support forums. The people on the forums include both Citrix support engineers and your fellow system administrators. They are generally quite knowledgeable and enthusiastic about helping someone be successful with XenServer. It's important to note that while the product and engineering teams may monitor the support forums from time to time, engineering level support should not be expected on the community forums.

Developer Support

Developer level support is available from the xs-devel list. This is your traditional development mailing list and really isn't appropriate for general support questions. Many of the key engineers are part of this list, and do engage on topics related to performance, feature development and code level issues. It's important to remember that the XenServer software is actually built from many upstream components, so the best source of information might be an upstream developer list and not xs-devel.

Self-Support Tool

Citrix maintains a self-support tool called Citrix Insight Services (CIS), formerly known as Tools-as-a-Service (TaaS). Insight Services takes a XenServer status report and analyzes it to determine if there are any operational issues present in the deployment. A best practice is to upload a report after installing a XenServer host to determine if any issues are present which could result in latent performance or stability problems. CIS is used extensively by the Citrix support teams, but doesn't require a commercial support contract for end users.

Submitting Defects

If you believe you have encountered a defect or limitation in the XenServer software, simply using one of these support options isn't sufficient for the incident to be added to the defect queue for evaluation. Commercial support users will need to have their case triaged and potentially escalated, with the result potentially being a hotfix. All other users will need to submit an incident report via bugs.xenserver.org. Please be as detailed as possible with any defect reports so that they can be reproduced, and it doesn't hurt to include the URL of any forum discussion or the TaaS ID in your report. Also, please be aware that while the issue may be urgent for you, any potential fix may take some time to be created. If your issue is urgent, you are strongly encouraged to follow the commercial support route, as Citrix escalation engineers have the ability to prioritize customer issues.

Additionally, it's important to point out that submitting a defect or incident report doesn't guarantee it'll be fixed. Some things simply work the way they do for very important reasons, while other things may behave the way they do due to the way components interact. XenServer is tuned to provide a highly scalable virtualization platform, and if an incident would require destabilizing that platform, it's unlikely to be changed.
