
Xen installation and configuration

Courtesy of my co-worker Henry Wong, here's a guide on installing and configuring Xen on an RHEL4 machine.

Introduction

Xen is a set of kernel extensions that allows for paravirtualization of operating systems, giving guest operating systems near-native performance. A paravirtualized guest requires a Xen-aware kernel in order to run on the underlying Xen host, and the host kernel itself must also be modified in order to host these guests. More information can be found at the Xen website.

Sometime in the future, XenSource will release a stable version that supports the installation of unmodified guest machines on top of the Xen host. This requires that the host machine have some sort of virtualization technology integrated into the processor. Both Intel and AMD have their own versions of virtualization technology, VT for short, to meet this new requirement. To distinguish between the two competing technologies, we will refer to Intel's VT by its codename, Vanderpool, and AMD's by its codename, Pacifica.

Installation

Before starting, it is highly recommended that you visit the Xen Documentation site. This has a more general overview of what is involved with the setup, as well as some other additional information.

Terminology

domain 0 (dom0): In terms of Xen, this is the host domain that hosts all of the guest machines. It allows for the creation and destruction of virtual machines through the use of Python-based configuration files that have information on how each machine is to be constructed. It also allows for the management of any resources that are taken up by the guest domains, e.g. networking, memory, physical space, etc.

domain U (domU): In terms of Xen, this is the guest domain, or the unprivileged domain. The guest domain has resources assigned to it from the host domain, along with any limits that are set by the host domain. None of the physical hardware is available directly to the guest domain; instead, the guest domain must go through the host interface to access the hardware.

hypervisor: Xen itself is a hypervisor, or in other words, something that is capable of running multiple operating systems. A more general definition is available here.

Xen Hypervisor Installation Procedure

Extract the tarball to a directory with sufficient space and follow the installation instructions that are provided by XenSource. For RHEL4, it is recommended that you force the upgrade of the glibc and xen-kernel RPMs. This will be explained in more detail later in this page.

Append the following to the grub.conf/menu.lst configuration file for the GRUB bootloader:
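The boot entry itself did not survive here; a typical GRUB stanza for a Xen 3.x install on RHEL4 might look like the following (the partition, paths, and dom0 memory size are assumptions, and the paths assume a separate /boot partition):

```
title Xen 3.0 / RHEL4
    root (hd0,0)
    kernel /xen.gz dom0_mem=262144
    module /vmlinuz-2.6-xen ro root=/dev/VolGroup00/LogVol00
    module /initrd-2.6-xen.img
```

Note that the dom0 kernel and its initrd are loaded as `module` lines, since `kernel` is the hypervisor itself.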

This might change depending on the version that is installed, but for the most part, using just the major versions should work. Details about the parameters will be explained later in the page.

Reboot the machine with the new kernel.

The machine should now be running the Xen kernel.

Guest Domain Storage Creation Procedure

LVM Backed Storage

By default, RHEL4 (and basically any new Linux distribution that uses a 2.6 kernel by default) uses the LVM (Logical Volume Manager) in order to keep track of system partitions in a logical fashion. There are two important things about LVM, the logical volume and the volume group. The volume group consists of several physical disks that are grouped together during creation, with each volume group having a unique identifier. Logical volumes are then created on these volume groups, and can be given a unique name. These logical volumes are able to grab a pool of available space on the volume group, with any specified size, properties, etc. If you wish to learn more about LVM, a visit to the LVM HOWTO on the Linux Documentation Project site is recommended.
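As a concrete sketch, creating an LVM-backed store for a guest might look like the following (the device, volume group, and logical volume names are assumptions; these commands require root and a spare disk or partition):

```shell
# Register /dev/sdb1 as a physical volume and group it into a new
# volume group named "xenvg".
pvcreate /dev/sdb1
vgcreate xenvg /dev/sdb1
# Carve out a 4 GB logical volume for the guest and put a filesystem on it.
lvcreate -n guest1 -L 4G xenvg
mkfs.ext3 /dev/xenvg/guest1
```

The resulting device, /dev/xenvg/guest1, can later be exported to the guest in its configuration file.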

Physical Partition Backed Storage

Far easier to create than an LVM volume, but a little less flexible, physical-partition-backed storage for a guest machine simply uses a system partition to store the virtual machine's data. If you are using the paravirtualization approach for domain creation, this partition needs to be formatted with a filesystem that is supported by the host.

File-Backed Storage

By far the easiest way to get a guest domain up and running, a file-backed store lets you put the backing file anywhere there is space, so you don't have to give up any partitions in order to create the virtual machine. The trade-off is a performance penalty.
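A minimal sketch of creating such a file-backed store (the path and size are assumptions):

```shell
# Create a sparse 4 GB image file: seeking past the end instead of
# writing zeros means the file takes up no disk space until used.
IMG="$(mktemp -d)/guest1.img"   # in practice, pick a permanent location
dd if=/dev/zero of="$IMG" bs=1M count=0 seek=4096
# Put a filesystem on the image (-F allows mkfs to run on a regular file).
mkfs.ext3 -F "$IMG"
```

The image file can then be exported to the guest as a block device from its configuration file.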

Guest Domain Installation Procedure

Create an image tarball from the preexisting Linux installation for the guest. Use tar along these lines:
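The exact command did not survive here; a sketch follows, using a toy directory tree standing in for the real root filesystem (in practice you would run the tar command from / on the existing installation, and the archive name and exclusion list are assumptions):

```shell
# Stand-in for the installation root; on a real system, cd / instead.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/etc" "$ROOT/proc"
echo guest1 > "$ROOT/etc/hostname"
cd "$ROOT"
# The --exclude options come before the short flags: -f is positional
# and must be followed immediately by the archive name.
tar --exclude=./proc --exclude=./sys --exclude=./tmp \
    -czpf /tmp/guest-image.tar.gz .
```

The -p flag preserves file permissions, which matters when the archive is later unpacked as the guest's root filesystem.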

Note that the excludes come before rather than after the short flags. This is because the -f short option is positional, and thus it needs the archive name immediately after it.

Move the tarball over to the Xen hypervisor machine.

Mount the desired location of the guest storage on the hypervisor.

Unpack the tarball into the guest storage partition.

Copy the modules for the Xen kernel into the guest's /lib/modules directory. You can use the following command to copy the modules directory, replacing /mnt/guest (an example name) with the guest storage mount point:

$ cp -r /lib/modules/`uname -r`/ /mnt/guest/lib/modules/

Move the /lib/tls directory to /lib/tls.disabled for the guest. This operation is specific to Red Hat-based systems: due to the way that glibc is compiled, the guest operating system will incur a performance penalty if this is not done. Skip this step for any non-Red Hat systems.

Initial setup of the guest is completed.

Running With Xen

Creating and starting a guest domain

Create a guest configuration file under /etc/xen. Use the following example as a guideline:

kernel = "/boot/vmlinuz-2.6-xen" # The kernel to be used to boot the domU
ramdisk = "/boot/initrd-2.6.16-xenU.img" # Need the initrd, since most of these systems run udev
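Only the kernel and ramdisk lines of the original example survive here. Since these configuration files are plain Python, a fuller sketch might look like the following (the domain name, memory size, volume names, and device names are all assumptions):

```python
name    = "guest1"                          # domain name shown by 'xm list'
memory  = 256                               # guest RAM in MB
kernel  = "/boot/vmlinuz-2.6-xen"           # domU kernel on the dom0 filesystem
ramdisk = "/boot/initrd-2.6.16-xenU.img"    # initrd, since most systems run udev
disk    = ['phy:xenvg/guest1,sda1,w']       # storage exported to the guest as sda1
vif     = ['']                              # one network interface, default settings
root    = "/dev/sda1 ro"                    # root= argument passed to the guest kernel
```

The disk line uses the phy: prefix for LVM- or partition-backed storage; a file-backed store would use the file: prefix instead.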

Mount the guest storage partition and edit the /etc/fstab for the guest to reflect any changes made to the configuration file. Remove any extraneous mount points that won't be recognized by the guest when the system is started, otherwise the guest machine will not boot.
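For a guest whose storage is exported to it as /dev/sda1 (the device name is an assumption), the guest's /etc/fstab might be reduced to something like:

```
/dev/sda1   /       ext3   defaults   1 1
none        /proc   proc   defaults   0 0
```

Any mount points copied over from the original installation that refer to devices the guest cannot see should be removed.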

Start the machine using the following command, where <config-name> is the name of the configuration file created under /etc/xen:

$ xm create -c <config-name>

This will create the machine and attach it to a virtual console. You can detach from the console using CTRL-].

Further setup is still required, but it is OS-specific; in particular, the network interfaces will need to be set up for the guest machine.


Comments

Anonymous said…

The network interface setup could be made OS-independent if you use DHCP. With Xen you can force the MAC address of a domU, so you can configure dhcpd to hand out IPs to your VMs easily.

From my experience, using a single file as opposed to LVM was conceptually much clearer.

Also, using a pre-built image is easier.

The hardest part for me was getting the network working, i.e. talking to the outside world. openSUSE had some bad settings in the xend scripts that I had to change. Plus, I had to define the IP both in the domU configuration file and inside the domU itself.
