get-info -class technology | write-output > /dev/web


This is a bit of a copycat post really, but I saw Mike Taulty and Phil Winstanley’s hardware lineups and thought it was a good idea. So, here it is: a summary of the technology I use pretty much every day and how I see that changing this year.

Car: Audi A4 Avant 2.0 TDI 170 S-Line

My wife and I have been Volkswagen fans for a few years now (we find them to be good, solid, reliable cars that hold their value well) so, a couple of years ago, when I heard that Volkswagen and Audi were being added to our company car scheme, I held back on replacing my previous vehicle in order to take advantage. I did consider getting a Passat but the A4 (although smaller) had a newer generation of engine and lower emissions, so it didn’t actually cost much more in tax/monthly lease costs.

After a year or so, I’m normally bored/infuriated with my company cars but I still really enjoy my A4 – so much so that I will consider purchasing this one at the end of its lease next year. My only reservations are that I would really like something larger, sometimes a little more power would be nice (although this has 170PS, which is pretty good for a 2 litre diesel) and I do sometimes think that the money I contribute to the car might be better spent on reducing the mortgage (I add some of my salary to lease a better car than my grade entitles me to).

Either way, it’s on lease until I hit 3 years or 60,000 miles, so it’s a keeper for 2011.

Verdict 9/10. Hold.

Phone: Apple iPhone 3GS 16GB

I actually have two phones (personal and work SIMs) but my personal needs are pretty basic (a feature phone with Bluetooth connectivity for hands-free operation in the car) and I recycled my iPhone 3G when I was given a 3GS to use for work.

After having owned iPhones for a few years now (this is my third), I don’t feel that the platform, which was once revolutionary, has kept pace and it now feels dated. As a result, I’m tempted by an Android or Windows Phone 7 device, but neither of these platforms is currently supported for connection to my corporate e-mail service.

The main advantages of this device for me are the apps and the Bluetooth connectivity to the car (although I needed to buy a cable for media access). I use Spotify and Runkeeper when I’m running but there are a whole host of apps to help me when I’m out and about with work (National Rail Enquiries, etc.) and, of course, it lets me triage my bulging mailbox and manage my calendar when I’m on the move. Unfortunately, the camera is awful and it’s not much use as a phone either, but it does the job.

I could get an iPhone 4 (or 5 this summer?) but I’d say it’s pretty unlikely, unless something happened to this one and I was forced to replace it.

Verdict 3/10. Not mine to sell!

Tablet: Apple iPad 3G 64GB

After several weeks (maybe months) of thinking “do I? don’t I?”, I bought an iPad last year and I use it extensively. Perhaps it’s a bit worrying that I take it to bed with me at night (I often catch up on Twitter before going to sleep, or use it as an e-book reader) but the “instant on” and long battery life make this device stand out from the competition when I’m out and about.

2011 will be an interesting year for tablets – at CES they were all over the place but I’ve been pretty vocal (both on this blog and on Twitter) about my views on Windows as a tablet operating system, and many of the Android devices are lacking something – Android 3 (Honeycomb) should change that. One possible alternative is Lenovo’s convertible notebook/tablet, which runs Windows but features a slide-out screen that functions as an Android tablet (very innovative).

I may upgrade to an iPad 2, if I can get a good resale price for my first-generation iPad, but even Apple’s puritanical anti-Adobe Flash stance (which means many websites are unavailable to me) is not enough to make me move away from this device in 2011.

Notebook PC: Fujitsu-Siemens

My personal preference for notebook PCs is a ThinkPad – I liked them when they were manufactured by IBM and Lenovo seem to have retained the overall quality associated with the brand – but, given who pays my salary, it’s no surprise that I use a Fujitsu notebook PC. Mine’s a couple of years old now, so it’s branded Fujitsu-Siemens, but it’s the same model that was sold under the Fujitsu name outside Europe. It’s a solid, well-built notebook PC and I have enough CPU, memory and disk to run Windows 7 (x64) well.

Unfortunately it’s crippled with some awful full disk encryption software (I won’t name the vendor but I’d rather be using the built-in BitLocker capabilities which I feel are better integrated and less obtrusive) and, even though the chipset supports Intel vPro/AMT (to install the Citrix XenClient hypervisor), the BIOS won’t allow me to activate the VT-d features. As a result, I have to run separate machines for some of my technical testing (I’m doing far less of that at work anyway these days) and to meet my personal (i.e. non-work) computing requirements.

My hope is that we’ll introduce a bring your own computer (BYOC) scheme at work and I can rationalise things but, if not, it’ll be another two years before I can order a replacement and this will soldier on for a while yet.

Netbook

In its day, my netbook was great. It’s small and light, it can be used on the train when the seatback tables are too small for a normal laptop, and I used mine extensively for personal computing whilst working away from home. It was a bit slow (on file transfers) but it did the job – and the small keyboard is ideal for my young children (although even they could do with a larger screen resolution).

Nowadays my netbook sits on the shelf, unloved, replaced by my iPad. It was inexpensive and, ultimately, consumable.

Verdict 2/10. Sell, or more likely use it to geek out and play with Linux.

Digital Camera: Nikon D700

After a series of Minoltas in the 1980s and 1990s, I’ve had Nikon cameras for several years now, having owned an F90x, a D70 and now a D700. I also use my wife’s D40 from time to time and we have a Canon Ixus 70 too (my son has adopted that). With a sizeable investment in Nikon lenses, etc., I can’t see myself changing brands again – although some of my glass could do with an upgrade, and I’d like an external flash unit.

The D700 gives me a lot of flexibility and has a high enough pixel count, with minimal noise and good low-light performance. It’s a professional-grade DSLR and a bit heavy for some people (I like the weight). It’s also too valuable to take on some trips (which is when I use the D40) but I always miss the flexibility and functionality that the D700 body provides. Sometimes I think video capabilities would be nice, but I won’t be changing it yet.

Notebook: Apple MacBook

It’s been three years since I bought my MacBook and, much as I’d like one of the current range of MacBook Pros, it’ll be a while before I replace it because they are so expensive! In fairness, it’s doing its job well – as soon as I bought it I upgraded the hard disk and memory and, whilst the CPU is not as fast as a modern Core i5 or i7, it’s not that slow either.

For a machine that was not exactly inexpensive, I’ve been disappointed with the build quality (it’s had two new keyboard top covers and a replacement battery) but Apple’s customer service meant that all were replaced under warranty (I wouldn’t fancy my chances at getting a new battery from many other PC OEMs).

I use this machine exclusively for photography and the Mac OS suits me well for this. It’s not “better” than Windows, just “different” and, whilst some people would consider me to be a Microsoft fanboi and an iHater, the list of kit on this page might say otherwise. I like to consider myself to have objective views that cut through the Redmond or Cupertino rhetoric!

So, back to the Mac – I may dive into Photoshop from time to time but Adobe Lightroom, Flickr Uploadr, VueScan and a few specialist utilities like Sofortbild are my main tools. I need to sweat this asset for a while longer before I can replace it.

Desktop: Apple Mac Mini

My Mac Mini was the first Intel Mac I bought (I had one of the original iMacs but that’s long gone) and it’s proved to be a great little machine. It was replaced by the MacBook but has variously been used, in both Windows and Mac OS X forms, as a home media PC. These days it’s just used for iTunes and Spotify, but I plan to buy a keyboard and have a play with GarageBand too.

It may not be the most powerful of my PCs, but it’s more than up to this kind of work and it takes up almost no space at all.

Servers and storage

As my work becomes less technical, I no longer run a full network infrastructure at home (I don’t find myself building quite so many virtual machines either), so I moved the main infrastructure roles (Active Directory, DHCP, DNS, TFTP, etc.) to a low-power server based on an Intel Atom CPU. I still have my PowerEdge 840 for the occasions when I do need to run up a test environment but it’s really just gathering dust. Storage is provided by a couple of Netgear ReadyNAS devices and it’s likely that I’ll upgrade the disks and then move one to a family member’s house, with remote syncing to provide an off-site backup solution (instead of a variety of external USB drives).

Verdict 6/10. Hold (perhaps sell the server, but more likely to leave it under the desk…).

First of all, to summarise the Citrix announcements last month: as well as announcing that the XenServer hypervisor will be free of charge, Citrix announced an extension of its 20-year collaboration with Microsoft (Project Encore), as part of which Citrix will release new management tools (Citrix Essentials – available for XenServer and for Hyper-V) and Microsoft will support XenServer in a future version of SCVMM.

The official coverage of the Citrix Essentials announcement also includes videos featuring Simon Crosby, CTO of the Virtualisation and Management Division at Citrix, and Mike Neil, the General Manager for Virtualisation at Microsoft. In one video, Crosby says that:

“You’ve known us as the guys who made the hypervisor free – that’s what Xen stood for and we’ve been partners with Microsoft with Hyper-V to make exactly the same true in the Windows world.

This is not about free hypervisors anymore – this is about free enterprise virtualised infrastructure, containing multiple servers, shared storage, live relocation – everything that you need to build, in production, enterprise class virtualised infrastructure is now free. It’s a game changer for the virtualisation industry because it completely changes the cost of adopting virtualisation.”

After saying how Citrix was setting everything free, Crosby contradicts himself by saying that it’s basically the hypervisor that’s free but that there’s a management suite (Citrix Essentials) that’s chargeable… (so the “essential” part is not free then!)

Even so, it’s a significantly lower price point than the last time I looked at VMware Virtual Infrastructure (which is the real point Citrix are trying to make), and Citrix Essentials will provide extra functionality, some of which would require the purchase of additional products from VMware:

Automated lab management – to streamline the process of building, testing, sharing and delivering throughout the application lifecycle, from development labs into the production environment.

Dynamic provisioning services – for the on-demand deployment of workloads to any combination of virtual machines or physical servers from a single image.

Workflow orchestration – for simplified scripting and automation of key management processes.

High availability – for the automatic restart and intelligent placement of virtual machines in case of failure of guest systems or physical servers.

But some of this functionality is also available in SCVMM, so how does Citrix Essentials for Hyper-V fit with the Microsoft Virtualization portfolio? That’s explained in another video, where Crosby highlights that:

“Citrix Essentials is a management pack of solutions that complement System Center VMM, adding value in areas relating to storage automation, lab automation and VM lifecycle automation that are entirely complementary to the use cases that are part of System Center VMM today”

He continues to explain that, in terms of multivendor platform management, SCVMM is forging ahead and Citrix’s objective is to complement the Microsoft products by filling in the key areas of automation that are not part of the virtualisation management role (e.g. storage, lab and stage management), to complement Hyper-V and to co-exist with SCVMM.

Mike Neil explained that the Microsoft Virtualization platform is designed to be layered with the base hypervisor functionality provided in Windows Server and the System Center products layered on top to manage the virtual and physical machines, their operating systems and applications. This infrastructure is designed to be extended by partners and Citrix has taken advantage by producing Citrix Essentials for Hyper-V.

Officially under embargo until next week (not an embargo that I’ve signed up to, though), ZDNet is reporting that Citrix is to offer XenServer for free (XenServer is a commercial product based on the open source Xen project). From my standpoint, this is great news: Citrix and Microsoft already work very closely (Citrix developed the Linux integration for Hyper-V) and Citrix will be selling new management tools which will improve the management experience for both XenServer and Hyper-V. In addition, Microsoft SCVMM will support XenServer (always expected, but never officially announced), meaning that SCVMM will further improve its position as a “manager of managers” and provide a single point of focus for managing all three of the major hypervisors.

VMware, of course, will respond and tell us that this is not simply a question of software cost (to some extent, they are correct, but many of the high-end features that they offer over the competition are just the icing on the top of the cake), that they have moved on from the hypervisor and how their cloud-centric Virtual Datacentre Operating System model will be the way forward. That may be so, but with server virtualisation now moving into mainstream computing environments and with Citrix and Microsoft already working closely together (along with Novell – and now Red Hat), this is almost certainly not good news for VMware’s market dominance.

When Windows Server 2008 shipped with only a beta version of the new “Hyper-V” virtualisation role in the box, Microsoft undertook to release a final version within 180 days. I’ve commented before that, based on my impressions of the product, I didn’t think it would take that long and, as Microsoft ran at least two virtualisation briefings in the UK this week, I figured that something was about to happen (on the other hand, I guess they could just have been squeezing the events into the 2007/8 marketing budget before year-end on 30 June).

The big news is that Microsoft has released Hyper-V to manufacturing today.

[Update: New customers and partners can download Hyper-V. Customers who have deployed Windows Server 2008 can receive Hyper-V from Windows Update starting from 8 July 2008.]

Why choose Hyper-V?

I’ve made no secret of the fact that I think Hyper-V is one of the most significant developments in Windows Server 2008 (even though the hypervisor itself is a very small piece of code), and, whilst many customers and colleagues have indicated that VMware has a competitive advantage through product maturity, Microsoft really are breaking down the barriers that, until now, have set VMware ESX apart from anything coming out of Redmond.

When I asked Byron Surace, a Senior Product Manager for Microsoft’s Windows Server Virtualization group, why he believes that customers will adopt Hyper-V in the face of more established products, like ESX, he put it down to two main factors:

Customers now see server virtualisation as a commodity feature (so they expect it to be part of the operating system).

The issue of management (which I believe is the real issue for organisations adopting a virtualisation strategy) – and this is where Microsoft System Center has a real competitive advantage with the ability to manage both the physical and virtual servers (and the running workload) within the same toolset, rather than treating the virtual machine as a “container”.

When asked to comment on Hyper-V being a version 1 product (which means it will be seen by many as immature), Surace made the distinction between a “typical” v1 product and something “special”. After all, why ship a product a month before your self-imposed deadline is up? Because customer evidence (based on over 1.3 million beta testers, 120 TAP participants and 140 RDP customers) and analyst feedback to date is positive (expect to see many head to head comparisons between ESX and Hyper-V over the coming months). Quoting Surace:

“Virtualisation is here to stay, not a fad. [… it is a] major initiative [and a] pillar in Windows Server 2008.”

I do not doubt Microsoft’s commitment to virtualisation. Research from as recently as October 2007 indicates that only 7% of servers are currently virtualised, but expects that to grow to 17% over the next 2 years. Whilst there are other products to consider (e.g. Citrix XenServer), VMware products currently account for 70% of the x86 virtualisation market (i.e. 70% of that 7%, or 4.9% of all servers) and VMware are looking to protect their dominant position. One strategy appears to be pushing out plenty of FUD – for example, highlighting an article that compares Hyper-V to VMware Server (which is ridiculous, as VMware Server is a hosted platform – more analogous to the legacy Microsoft Virtual Server product, albeit more fully-featured with SMP and 64-bit support) and commenting that live migration has been dropped (even though quick migration is still present). The simple fact is that VMware ESX and Microsoft Hyper-V are like chalk and cheese:

ESX has a monolithic hypervisor whilst Hyper-V takes the same approach as the rest of the industry (including Citrix/Xen and Sun) with its microkernelised architecture which Microsoft consider to be more secure (Hyper-V includes no third party code whilst VMware integrates device drivers into its hypervisor).

VMware use a proprietary virtual disk format whilst Microsoft’s virtual hard disk (.VHD) specification has long since been offered up as an open standard (and is used by competing products like Citrix XenServer).

Hyper-V is included within the price of most Windows Server 2008 SKUs, whilst ESX is an expensive layer of middleware.

ESX doesn’t yet support 64-bit Windows Server 2008 (although that is expected in the next update).

None of this means that ESX and the rest of VMware’s Virtual Infrastructure (VI) are not good products, but for many organisations Hyper-V offers everything that they need without the hefty ESX/VI price tag. Is the extra 10% really that important? And, when you consider management, is VMware Virtual Infrastructure as fully-featured as the Microsoft Hyper-V and System Center combination? Then consider that server virtualisation is just one part of Microsoft’s overall virtualisation strategy, which includes server, desktop, application, presentation and profile virtualisation, within an overarching management framework.

Guest operating system support

At RTM the supported guest operating systems have been expanded to include:

Windows Server 2008 32- or 64-bit (1-, 2- or 4-way SMP).

Windows Server 2003 32- or 64-bit (1- or 2-way SMP).

Windows Vista with SP1 32- or 64-bit (1- or 2-way SMP).

Windows XP with SP3 32-bit (1- or 2-way SMP), with SP2 64-bit (1- or 2-way SMP) or with SP2 32-bit (1 vCPU only).

Windows 2000 Server with SP4 (1 vCPU only).

SUSE Linux Enterprise Server 10 with SP1 or SP2, 32- or 64-bit.

Whilst this is a list of supported systems (i.e. those with integration components to make full use of Hyper-V’s synthetic device driver model), others may work (in emulation mode), although my experience of installing the Linux integration components is that it is not always straightforward. Meanwhile, for many, the main omissions from that list will be Red Hat and Debian-based Linux distributions (e.g. Ubuntu). Microsoft isn’t yet making an official statement on support for other flavours of Linux (and the Microsoft-Novell partnership makes SUSE an obvious choice) but they are pushing the concept of a virtualisation ecosystem in which customers don’t need to run one virtualisation technology for Linux/Unix operating systems and another for Windows – and it’s logical to assume that this ecosystem should also include the leading Linux distribution (I’ve seen at least one Microsoft slide listing RHEL as a supported guest operating system for Hyper-V). That said, Red Hat’s recent announcement that they will switch their allegiance from Xen to KVM could raise some questions (it seems that Red Hat has never been fully on board with the Xen hypervisor).

Performance and scalability

Microsoft are claiming that Hyper-V disk throughput is 150% that of VMware ESX Server – largely down to the synthetic device driver model (with virtualisation service clients in child partitions communicating with virtualisation service providers in the parent partition over a high-speed VMBus to access disk and network resources using native Windows drivers). The virtualisation overhead appears minimal – in Microsoft and QLogic’s testing of three workloads with two identical servers (one running Hyper-V and the other running direct on hardware) the virtualised system maintained between 88 and 97% of the number of IOPS that the native system could sustain and when switching to iSCSI there was less than a single percentage point difference (although the overall throughput was much lower). Intel’s vConsolidate testing suggests that moving from 2-core to 4-core CPUs can yield a 47% performance improvement with both disk and network IO scaling in a linear fashion.

Hardware requirements are modest too (Hyper-V requires a 64-bit processor with standard enhancements such as NX/XD and the Intel VT/AMD-V hardware virtualisation assistance) and a wide range of commodity servers are listed for Hyper-V in the Windows Server Catalog. According to Microsoft, when comparing Hyper-V with Microsoft Virtual Server (both running Windows Server 2003, with 16 single vCPU VMs on an 8-core server), disk-intensive operations saw a 178% improvement, CPU-intensive operations returned a 21% improvement and network-intensive operations saw a 107% improvement (in addition to the network improvements that the Hyper-V virtual switch presents over Virtual Server’s network hub arrangements).

Ready for action

As for whether Hyper-V is ready for production workloads, Microsoft’s experience would indicate that it is – they have moved key workloads such as Active Directory, File Services, Web Services (IIS), some line of business applications and even Exchange Server onto Hyper-V. By the end of the month (just a few days away) they aim to have 25% of their infrastructure virtualised on Hyper-V – key websites such as MSDN and TechNet have been on the new platform for several weeks now (combined, these two sites account for over 4 million hits each day).

It’s not just Microsoft that thinks Hyper-V is ready for action – around 120 customers have committed to Microsoft’s Rapid Deployment Programme (RDP) and, here in the UK, Paul Smith (the retail fashion and luxury goods designer and manufacturer) will shortly be running Active Directory, File Services, Print Services, Exchange Server, Terminal Services, Certificate Services, Web Services and Management servers on a 6-node Hyper-V cluster stretched between two data centres. A single 6-node cluster may not sound like much to many enterprises, but when 30 of your 53 servers are running on that infrastructure it’s pretty much business-critical.

Looking to the future

In the meantime, System Center Virtual Machine Manager 2008 will ship later this year, including support for managing Virtual Server, Hyper-V and VMware ESX hosts.

In addition, whilst Microsoft are keeping tight-lipped about what to expect in future Windows versions, Hyper-V is a key role for Windows Server and so the next release (expected in 2010) will almost certainly include additional functionality in support of virtualisation. I’d expect new features to include those that were demonstrated and then removed from Hyper-V earlier in its lifecycle (live migration and the ability to hot-add virtual hardware), and a file system designed for clustered disks would be a major step forward too.

Somewhat confusingly, the version 4 XenSource products include version 3.1 of the Xen hypervisor. I’d assumed that this was pretty much identical to the Xen 3.0.3 kernel that I can install from the RHEL DVD but it seems not. Roger Baskerville, XenSource’s Channel Director for EMEA, explained to me that it’s important to differentiate between OSS Xen and the XenSource commercial products: whilst both Red Hat and XenSource use a snapshot of OSS Xen 3.x.x, the XenSource snapshot is more recent than the one in RHEL, due to the time that it takes to incorporate various open source components into an operating system. Furthermore, XenExpress, XenServer and XenEnterprise are designed for bare-metal deployment with little more than the hypervisor and a minimal domain-0 (a privileged virtual machine used to control the hypervisor), whereas RHEL’s domain-0 is a full operating system.

The XenSource microkernel is based on CentOS (itself a derivative of Red Hat Enterprise Linux) with only those services that are needed for virtualisation, along with a proprietary management interface and Windows drivers. Ultimately, both the XenSource and RHEL models include a Xen hypervisor interacting directly with the processor, virtual machines (domain-U) and domain-0 for disk and network traffic. Both use native device drivers from the guest operating system, except in the case of fully virtualised VMs (i.e. Windows VMs), in which case the XenSource products use signed proprietary paravirtualised Windows drivers for disk access and network traffic (XenSource Tools).

So when it comes to installation, we have two very different methods – whereas XenSource is a bare-metal installation, installing Xen on RHEL involves a number of RPMs to create the domain 0 environment. This is how it’s done:
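As a minimal sketch, assuming RHEL 5 with the Virtualization channel available (the package names below are the stock RHEL 5 ones and may vary between releases):

```shell
# Install the Xen hypervisor, the Xen-enabled kernel and the management
# tools (pulls in xend, the control daemon, and the xm command)
yum install kernel-xen xen

# Reboot onto the Xen kernel (select it at the GRUB menu, or make it the
# default entry in /boot/grub/menu.lst) before starting the Xen daemon:
service xend start      # start the Xen control daemon
chkconfig xend on       # start it automatically at boot in future
xm list                 # Domain-0 should be shown as running
```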

At this point, it should be possible to start the Xen daemon (as long as a reboot onto the Xen kernel has been performed – either from manual selection or by changing the defaults in /boot/grub/menu.lst) using xend start. If the reboot took place after kernel installation but prior to installing all of the tools (as mine did) then chkconfig --list should confirm that xend is set to start automatically and in future it will not be necessary to start the Xen daemon manually. xm list should show that Domain-0 is up and running.

Having installed Xen on RHEL, I was unable to install any Windows guests because the CPU on my machine doesn’t have Intel-VT or AMD-V extensions. It’s also worth noting that my attempts to install Xen on my notebook PC a few months ago were thwarted as, every time I booted into the Xen kernel, I was greeted with the following error:

Finally, it’s worth noting that my RHEL installation of Xen is running on a 32-bit 1.5GHz Pentium 4 (“Willamette”) CPU whereas the XenSource products require that the CPU supports a 64-bit instruction set. The flags shown with cat /proc/cpuinfo can be a bit cryptic but Todd Allen’s CPUID makes things a little clearer (if not quite as clear as CPU-Z is for Windows users).
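For the common cases, though, no extra tools are needed – a quick filter over /proc/cpuinfo shows whether the CPU offers hardware virtualisation assistance and a 64-bit instruction set (vmx = Intel VT, svm = AMD-V, lm = x86-64 long mode):

```shell
# List only the flags of interest, de-duplicated across all CPU cores
# (-w matches whole words so e.g. "lahf_lm" is not mistaken for "lm")
grep -woE 'vmx|svm|lm' /proc/cpuinfo | sort -u
```

No vmx or svm in the output means no Windows guests under Xen; no lm means the 64-bit-only XenSource products won’t install at all.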

It’s sometimes difficult to understand how open source (i.e. community-driven) software and commercial operations can co-exist. Yesterday’s XenSource presentation gave me a great example of how the model works:

XenSource (the commercial company) takes the stable and tested elements of the solution and combines these with proprietary elements to produce a commercial product. It also contributes code to the open source project along with bug fixes.

XenSource has the resources to provide enterprise-level quality assurance and testing, including manual and automated regression testing, optimisations and beta test programmes. These contribute further fixes for inclusion in the product(s).

The result is a set of commercial products (in this case, three) which promote open source software development at the same time as providing a revenue stream for ongoing product development.

My last question is: “what about the community developers who devoted their time to the project?”. It would be interesting to hear how those who contribute code that then makes a profit for faceless shareholders feel, but I suspect they derive their benefits in a far more altruistic manner:

A feeling of community and pride in having contributed to a widely-deployed software product.

Access to source code in order to develop and extend the community versions of the product.

In the case of the project founders and leaders, financial recognition through their involvement in the commercial company.


As I write this, I’m on the train to attend a Microsoft event about creating and managing a virtual environment on the Microsoft platform (that’s something that I’m doing right now to support some of my business unit’s internal systems). I’m also on the Windows Server Virtualization TAP program (most of the information I get from that is under NDA – I’m saving it all up to blog when it becomes public!) and I have a good working knowledge of VMware’s product set, including some of the (non-technical) issues that a virtualisation project can face. With that in mind, I thought I’d take the time to attend one of XenSource‘s Unify Your Virtual World events yesterday to look at how this commercial spinoff from the open source Xen project fits into the picture.

From my point of view, the day didn’t start well: the location was a hotel next to London Heathrow airport with tiny parking spaces at an extortionate price (at least XenSource picked up the bill for that); there was poor signage to find the XenSource event; and the breakfast pastries were stale. However, I was pleased to see that, low-key as the event was, the presenters were accessible (indeed John Glendinning, XenSource VP for Worldwide Sales, was actively floor-walking). And once the presentation got started, things really picked up, with practical demonstrations supplemented by PowerPoint slides (not OpenOffice Impress, as I might expect from an open source advocate) used only to set the scene and provide value, rather than the typical “death by PowerPoint” product pitch with only a few short demonstrations.

XenSource was founded in 2005 by the creators and leaders of the Xen hypervisor open source project and in that short time it has grown to the point where it is now a credible contender in the x86 virtualisation space – so much so that they are currently in the process of being acquired by Citrix Systems. Rather than trying to dominate the entire market, XenSource’s goal is clear – they provide a core virtualisation engine, with partners providing the surrounding products for storage, backup, migration, etc., ensuring that there are multiple choices for enterprises that deploy the XenSource virtualisation products. The XenSource “engine” is a next-generation hypervisor which delivers high-performance computing through its use of paravirtualisation and hardware-assist technologies. They also try to cast off the view of “it’s Linux so it must be difficult” with their “10 minutes to Xen” model – no base operating system or RPMs to install – demonstrating the installation of a Xen server on bare-metal hardware in around 10 minutes from a PXE boot (other deployment options are available).

From an architectural standpoint, the Xen hypervisor is very similar to Microsoft’s forthcoming Windows Server Virtualization model, providing an environment known as Domain 0. Memory and CPU access is facilitated by the hypervisor, providing direct access to hardware in most cases although for Windows VMs to make use of this the hardware must support Intel-VT or AMD-V (virtualisation hardware assistance). Storage and network access use a high performance memory bus to access the Domain 0 environment which itself makes use of standard Linux device drivers, ensuring broad hardware support.

One of the problems with running multiple virtual machines on a single physical server is the control of access to hardware. In a virtualisation environment that makes use of emulated drivers (e.g. VMware Server, Microsoft Virtual Server) the guest operating system is not aware that it is running in a virtual environment and any hardware calls are trapped by the virtual machine management layer which manages interaction with the hardware. The paravirtualised model used for Linux VMs allows the guest operating system to become aware that it is virtualised (known as enlightenment) and therefore to make a hypercall (i.e. a call to the hypervisor) that can interact directly with hardware. For non-paravirtualised operating systems that use the high performance memory bus (e.g. current versions of Windows), full virtualisation is invoked whereby the virtual machine believes it owns the hardware but in reality the hardware call is trapped by the virtualisation assist technology in the processor and passed to the hypervisor for action. For this reason, Intel VT or AMD-V capabilities are essential for Windows virtualisation with Xen.
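The difference between the two I/O paths described above can be sketched in a few lines of entirely illustrative Python – this is not Xen code, and every class and method name is invented; it only aims to show the distinction between a trapped “hardware” call and an explicit, co-operative hypercall:

```python
class Hypervisor:
    """Toy hypervisor that services privileged operations from guests."""
    def __init__(self):
        self.log = []

    def handle(self, operation, source):
        # Record how the request arrived, then service it (in Xen, via Domain 0)
        self.log.append((source, operation))
        return f"completed {operation}"


class FullyVirtualisedGuest:
    """Unmodified guest (e.g. current Windows): believes it owns the hardware.
    The privileged instruction is trapped by the processor's virtualisation
    assist (Intel VT / AMD-V) and redirected, without the guest's knowledge."""
    def __init__(self, hv):
        self.hv = hv

    def write_to_disk(self, data):
        return self.hv.handle(f"disk-write({data})", source="trap")


class ParavirtualisedGuest:
    """Enlightened guest (e.g. a modified Linux kernel): knows it is
    virtualised and makes an explicit hypercall instead of being trapped."""
    def __init__(self, hv):
        self.hv = hv

    def write_to_disk(self, data):
        return self.hv.handle(f"disk-write({data})", source="hypercall")


hv = Hypervisor()
FullyVirtualisedGuest(hv).write_to_disk("pagefile")
ParavirtualisedGuest(hv).write_to_disk("journal")
print(hv.log)
```

Both requests end up in the same place; the point is that the paravirtualised guest gets there directly, which is why it avoids the overhead of trapping and why Intel VT/AMD-V is only essential for the unmodified (Windows) case.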

XenSource view the VMware ESX Server model of hypervisor-based virtualisation as “first generation” – effectively using a mini-operating system kernel that includes custom device drivers and requires binary patching at runtime, with a resulting performance overhead. In contrast, the “second generation” hypervisor model allows for co-operation between guests and the hypervisor, providing improved resource management and input/output performance. Furthermore, because the device drivers are outside the hypervisor, it has a small footprint (and consequently a small attack surface from a security standpoint) whilst supporting a broad range of hardware and providing significant performance gains.

XenSource claim that paravirtualised Linux on Xen carries only a 0.5-2% performance overhead (i.e. near-native performance) and even fully virtualised Windows on Xen only a 2-6% overhead (which is comparable with competing virtualisation products).

There are three XenSource products:

XenExpress – a production-ready, entry-level system for a standalone server (free of charge).

XenServer – a mid-range, multi-server virtualisation platform.

XenEnterprise – high-capacity dynamic virtualisation for the enterprise.

Because the three products share the same codebase (unlike Microsoft Virtual PC/Virtual Server or VMware Workstation/Server/ESX Server), upgrading is as simple as supplying a licence key to unlock new functionality. For XenServer and XenEnterprise, there are both perpetual and annual licensing options (licensed per pair of physical CPU sockets) at a significantly reduced cost when compared with VMware Virtual Infrastructure 3 (VI3).
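As a quick illustration of the per-socket-pair licensing arithmetic (the prices below are invented placeholders, not XenSource’s actual list prices):

```python
import math

def licences_required(sockets: int) -> int:
    # Licensed per pair of physical CPU sockets, so round up odd counts
    return math.ceil(sockets / 2)

def annual_cost(hosts, price_per_licence):
    # hosts: list of physical CPU socket counts, one entry per server
    return sum(licences_required(s) * price_per_licence for s in hosts)

# e.g. three dual-socket hosts and one quad-socket host at a notional
# price of 750 per socket-pair licence: 1 + 1 + 1 + 2 = 5 licences
print(annual_cost([2, 2, 2, 4], price_per_licence=750))  # → 3750
```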

All three products are built from the same core components:

Xen64 – a true 64-bit hypervisor providing scalability and support for enterprise applications in either a 32- or 64-bit environment, with quality of service controls on resources and dynamic guest configuration, supporting up to:

XenCenter – a graphical virtualisation management interface, with guided wizards and guest templates for host and resource pool configuration across multiple servers, storage and networking configuration and management, VM lifecycle management, and import/export (cf. VMware VirtualCenter). Whilst CLI commands are also available, XenCenter is a Microsoft .NET application for Windows operating systems which makes use of the latest Windows user interface standards. Because XenCenter makes use of a distributed configuration database, there is no dependency on a single SQL Server and management can fail over between virtual host servers.

XenAPI – a secure and remotable programming interface for third-party and customer integration with existing products and processes, including the xe commands for system control.

One example of the XenSource approach to providing additional functionality through partnerships is the agreement with Symantec whereby Symantec (formerly Veritas) Storage Foundation will be embedded into XenEnterprise (providing dynamic fibre-channel multipathing for redundancy, load balancing, resilience and speed); a new product called XenEnterprise High Availability will be developed for virtual machine failover; and Veritas NetBackup will be offered for data protection and backup of critical applications running on XenEnterprise virtual machines (via the NetBackup Agent, also supporting snapshots when used with Symantec Storage Foundation). Rather than re-certify systems for virtualisation, XenSource will accept Symantec’s certified plugins for common OEM architectures and, because Symantec Storage Foundation is already widely deployed, existing investments can be maintained.

In terms of demonstrations, I was impressed by what I saw. XenSource demonstrated a bare metal installation in around 10 minutes and were able to show all the standard virtualisation demonstrations (e.g. running a ping, copying files, or watching a video whilst performing a live migration with no noticeable break in service). The XenCenter console can be switched between VNC and RDP communications, and Xen makes use of its own .xva Xen virtual appliance format with Microsoft .vhd virtual hard disks. Conversion from VMware .vmdk files is possible using the supplied migration tools (there are Linux P2V tools included with the XenSource products but, for Windows migrations, it’s necessary to use products from partners such as PlateSpin and LeoStream), and templated installations can also be performed, with simple conversion between running VMs and templates. When cloning virtual machines, there are options for “fat clones”, whereby the whole disk is copied, or thin provisioning, using the same image and a differencing drive. Virtual machines can use emulated drivers, or XenSource Tools can be installed for greater control from the console. Storage can be local, NFS or iSCSI based, with fibre channel storage and logical volume management expected in the next release.
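The difference between a “fat clone” and thin provisioning with a differencing drive can be sketched in a few lines of illustrative Python (class names invented; real implementations work at the block-device level, e.g. with .vhd differencing disks):

```python
class FatClone:
    """Full copy: consumes as much storage as the parent immediately."""
    def __init__(self, parent_blocks):
        self.blocks = dict(parent_blocks)  # every block duplicated

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def read(self, block_no):
        return self.blocks[block_no]


class ThinClone:
    """Copy-on-write: stores only changed blocks in a differencing store
    and falls through to the shared, read-only parent image otherwise."""
    def __init__(self, parent_blocks):
        self.parent = parent_blocks  # shared template image
        self.delta = {}              # the "differencing drive"

    def write(self, block_no, data):
        self.delta[block_no] = data  # parent is never modified

    def read(self, block_no):
        return self.delta.get(block_no, self.parent[block_no])


template = {0: "bootloader", 1: "kernel", 2: "empty"}

fat = FatClone(template)
thin = ThinClone(template)
fat.write(2, "application data")
thin.write(2, "application data")

print(thin.read(0), "/", thin.read(2))  # block 0 shared, block 2 private
print(len(fat.blocks), "vs", len(thin.delta))  # fat stores 3 blocks, thin only 1
```

The trade-off is the usual one: fat clones are fully independent of the template, while thin clones save storage but depend on the parent image remaining available.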

It’s clear that XenSource see VMware as their main competitor in the enterprise space and it looks to me as if they have a good product which provides most of the functionality in VMware VI3 Enterprise Edition (and all of the functionality in VMware VI3 Standard Edition) at a significantly lower price point. The Citrix acquisition will provide the brand ownership that many sceptics will want to see before they buy an open source product, the partnership model should yield results in terms of operational flexibility, and it’s clear that the development pace is rapid. With XenSource going from strength to strength and Microsoft Windows Server Virtualization due to arrive around the middle of next year, VMware need to come up with something good if they want to retain their dominance of the x86 virtualisation market.
