Introduction to the Xen Virtual Machine

Everyone's talking about Xen, but the code is complex. Here's a starting point.

The Xend Daemon

First, what is the Xend daemon? It is the Xen controller daemon: it
handles creating new domains, destroying existing domains, migration and
many other domain management tasks. A large
part of its activity is based on running an HTTP server. The default
port of the HTTP socket is 8000, which can be configured. Domains are
controlled by sending HTTP requests to this server for
domain creation, domain shutdown, domain save and restore, live migration
and more. A large part of the Xend code is written in Python, and it
also calls into C methods from within the Python scripts.

We start the Xend daemon by running xend start from the command line
after booting into Xen. What exactly does this command involve? First,
Xend requires Python 2.3 to support its logging functions.

The work of the Xend daemon is based on interaction with an XCS server,
the Control Switch. So, when we start the Xend daemon, we check to see if
the XCS is up and running. If it is not, we try to start
XCS. This step is discussed more fully later in this article.

The SrvDaemon is, in fact, the Xend main program; starting
the Xend daemon creates an instance of the SrvDaemon
class (tools/python/xen/xend/server/SrvDaemon.py).
Two log files are created here: /var/log/xend.log and /var/log/xend-debug.log.

We next create a Channel Factory in the createFactories() method. The Channel
Factory has a notifier object embedded inside. Much of the work
of the Xend daemon is based on messages received by this
notifier. This factory creates a thread that reads the notifier in an
endless loop. The notifier delegates the read request to the XCS server;
see xu_notifier_read() in xen/lowlevel/xu.c. This method sends the
read request to the XCS server by calling xcs_data_read().
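To make this loop concrete, here is a minimal, hypothetical Python sketch of the pattern described above: a factory spawns a thread that polls a notifier in an endless loop and dispatches whatever arrives. The Notifier and ChannelFactory names follow the article; the queue standing in for the XCS data connection and all method bodies are invented for illustration.

```python
import queue
import threading
import time

class Notifier:
    """Toy stand-in for the notifier; real reads go through xcs_data_read()."""
    def __init__(self):
        self._incoming = queue.Queue()  # models the XCS data connection

    def read(self, timeout=0.1):
        # The real xu_notifier_read() delegates to the XCS server; here we
        # just pop from a queue, returning None when nothing is pending.
        try:
            return self._incoming.get(timeout=timeout)
        except queue.Empty:
            return None

class ChannelFactory:
    """Toy factory that reads the notifier in an endless loop on a thread."""
    def __init__(self):
        self.notifier = Notifier()
        self.handled = []
        self._stop = threading.Event()
        self.thread = threading.Thread(target=self.main, daemon=True)
        self.thread.start()

    def main(self):
        while not self._stop.is_set():
            msg = self.notifier.read()
            if msg is not None:
                self.handled.append(msg)  # real code dispatches to a channel

    def stop(self):
        self._stop.set()
        self.thread.join()

factory = ChannelFactory()
factory.notifier._incoming.put("console message")
factory.notifier._incoming.put("blkif message")
time.sleep(0.5)  # give the reader thread time to drain the queue
factory.stop()
```

The point of the pattern is that the daemon never blocks its main logic on XCS: one dedicated thread absorbs incoming messages as they arrive.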

Creating a Domain

The creation of a domain is accomplished by using a hypercall
(DOM0_CREATEDOMAIN). What is a hypercall? In the Linux kernel, there is
a system call mechanism with which user space can invoke a method in the
kernel; this is done by an interrupt (int 0x80). In Xen, the analogous
call is a hypervisor call, or hypercall, through which domain 0 invokes a
method in the hypervisor. This, too, is accomplished by an interrupt
(int 0x82). The hypervisor accesses each domain through its virtual CPU,
struct vcpu in include/xen/sched.h.
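In both cases the trap carries a call number that selects a handler from a dispatch table. The following toy Python sketch models only that dispatch step; the numeric value of DOM0_CREATEDOMAIN and the handler body are invented for illustration, and in real Xen the transfer happens via the int 0x82 trap, not a function call.

```python
# Hypothetical call number for illustration only; not Xen's actual value.
DOM0_CREATEDOMAIN = 8

domains = {}

def do_createdomain(domid):
    # Real Xen allocates a struct domain and its vcpus here; we fake it.
    domains[domid] = {"vcpus": 1}
    return 0

# The hypervisor keeps a table mapping call numbers to handlers.
hypercall_table = {DOM0_CREATEDOMAIN: do_createdomain}

def hypercall(nr, *args):
    # Stands in for the int 0x82 trap: domain 0 supplies a call number
    # and arguments, and the hypervisor dispatches through its table.
    return hypercall_table[nr](*args)

rc = hypercall(DOM0_CREATEDOMAIN, 1)
```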

The XendDomain class and the XendDomainInfo class play a significant
part in creating and destroying domains. The domain_create() method in
XendDomain class is called when we create a new domain; it starts the
process of creating a domain.
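A minimal Python sketch of this call chain, assuming invented method bodies (only the class and method names come from the article), might look like this:

```python
class XendDomainInfo:
    """Toy stand-in: responsible for the actual construction of a domain."""
    @classmethod
    def create(cls, config):
        info = cls()
        info.config = config
        # The real construction also issues the DOM0_CREATEDOMAIN hypercall
        # and sets up the domain's virtual devices.
        info.devices = ["blkif"]  # a blkif is always created, disk or not
        return info

class XendDomain:
    """Toy stand-in: tracks domains and starts their creation."""
    def __init__(self):
        self.domains = {}

    def domain_create(self, config):
        # Delegates the actual construction to XendDomainInfo.
        dominfo = XendDomainInfo.create(config)
        self.domains[config["name"]] = dominfo
        return dominfo

xd = XendDomain()
dom = xd.domain_create({"name": "guest1", "memory": 128})
```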

The XendDomainInfo class and its methods are responsible for the actual
construction of a domain. The construction process includes setting up the
devices in the new domain. This involves a lot of messaging between the
front end device drivers in the domain and the back end device drivers in
the back end domain. We talk about the back end and front end device
drivers later.

The XCS Server

The XCS server opens two TCP sockets, the control connection and the data
connection. The difference between the two is that the control connection
is synchronous while the data connection is asynchronous. The notifier
object mentioned earlier, for example, is a client of the XCS server.

A connection to the XCS server is represented by an object of type
connection_t. After a connection is bound, it is added to a list of
connections, connection_list, which is iterated every five seconds to
see whether new control or data messages have arrived. Messages, which
can be either control or data messages, are handled by handle_control_message()
or by handle_data_message(), respectively.
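A hypothetical Python sketch of that dispatch loop (the connection_t name follows the article; the fields, message format and handler bodies are invented) could look like this:

```python
CONTROL, DATA = "control", "data"

class Connection:
    """Toy model of connection_t: holds pending (kind, payload) messages."""
    def __init__(self):
        self.pending = []

handled = []

def handle_control_message(conn, payload):
    handled.append(("control", payload))

def handle_data_message(conn, payload):
    handled.append(("data", payload))

def poll(connection_list):
    # The real server iterates connection_list every five seconds;
    # we model a single pass over it here.
    for conn in connection_list:
        while conn.pending:
            kind, payload = conn.pending.pop(0)
            if kind == CONTROL:
                handle_control_message(conn, payload)
            else:
                handle_data_message(conn, payload)

c = Connection()
c.pending = [(CONTROL, "shutdown"), (DATA, "console bytes")]
poll([c])
```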

Creating Virtual Devices When Creating a Domain

The create() method in XendDomainInfo starts a chain of actions to create
a domain. The virtual devices of the domain first are created. The create()
method calls create_blkif() to create a block device interface (blkif);
this is a must even if the VM doesn't use a disk. The other virtual devices
are created by create_configured_devices(), which eventually calls the
createDevice() method of DevController class (see controller.py). This
method calls the newDevice() method of the corresponding class. All the
device classes inherit from Dev, which is an abstract class representing
a device attached to a device controller. Its attach() abstract (empty)
method is implemented in each subclass of the Dev class; this method
attaches the device to its front end and back end. Figure 2 shows the
device hierarchy, and Figure 3 shows the device controller hierarchy.

Figure 2

Figure 3
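The shape of these two hierarchies can be sketched in Python as follows. The Dev, DevController, createDevice() and newDevice() names come from the article; the concrete subclasses and their attach() bodies are invented for illustration.

```python
class Dev:
    """Abstract device attached to a device controller."""
    def __init__(self, controller):
        self.controller = controller

    def attach(self):
        # Abstract (empty) in the real code: each subclass attaches the
        # device to its front end and back end.
        raise NotImplementedError

class NetDev(Dev):
    def attach(self):
        return "netif attached"

class BlkDev(Dev):
    def attach(self):
        return "blkif attached"

class DevController:
    """Base controller: createDevice() asks newDevice() for the right class."""
    deviceClass = Dev

    def newDevice(self):
        return self.deviceClass(self)

    def createDevice(self):
        dev = self.newDevice()
        return dev.attach()

class NetifController(DevController):
    deviceClass = NetDev

class BlkifController(DevController):
    deviceClass = BlkDev

result = NetifController().createDevice()
```

The design keeps device-specific logic in the Dev subclasses while the controllers share one generic creation path.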

Domain 0 runs the back end drivers, and the newly created domain runs the
front end drivers. A lot of messages pass between the back end
and front end drivers. The front end driver is a virtual driver in the
sense that it does not use specific hardware details; the code resides
in drivers/xen, in the sparse tree.

Event channels and shared-memory rings are the means of communication
among domains. For example, in the case of the netfront device (netfront.c),
which is the network card front end interface, np->tx and
np->rx are the shared memory pages, one for the transmit ring and one
for the receive ring. In send_interface_connect(), we tell the back end
to bring up the interface. The connect message travels through the event
channel to the netif_connect() method of the back end, interface.c. The
netif_connect() method calls get_vm_area(2*PAGE_SIZE, VM_IOREMAP).
The get_vm_area() method searches the kernel virtual mapping area
for an area whose size equals two pages.

In the blkif case, which is the block device front end interface,
blkif_connect() also calls get_vm_area(). In this case, however, it uses only
one page of memory.
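The essence of such a shared ring is a page of slots plus free-running producer and consumer indices. This toy Python Ring models only that index arithmetic; real Xen rings live in a shared page sized accordingly and use an event channel to notify the peer, which this sketch omits.

```python
class Ring:
    """Toy single-producer, single-consumer ring, modelled on Xen's rings."""
    def __init__(self, size=8):
        self.size = size              # real rings are sized to fit one page
        self.slots = [None] * size
        self.prod = 0                 # producer index (writer side)
        self.cons = 0                 # consumer index (reader side)

    def put(self, req):
        if self.prod - self.cons == self.size:
            raise BufferError("ring full")
        self.slots[self.prod % self.size] = req
        self.prod += 1                # real code then signals the peer
                                      # over the event channel

    def get(self):
        if self.cons == self.prod:
            return None               # ring empty
        req = self.slots[self.cons % self.size]
        self.cons += 1
        return req

tx = Ring()                           # like np->tx: front end -> back end
tx.put("packet-1")
tx.put("packet-2")
received = [tx.get(), tx.get()]
```

Because the indices only ever increase and are compared modulo the ring size, the two sides need no lock: each writes only its own index.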

The interrupts associated with virtual devices are virtual interrupts.
When you run cat /proc/interrupts from domainU,
look at the interrupts with numbers higher than 256; they are labeled "Dynamic-irq".

How are IRQs redirected to the guest OS? The do_IRQ() method was changed
to support IRQs for the guest OS. This method calls __do_IRQ_guest()
if the IRQ is destined for the guest OS (xen/arch/x86/irq.c). __do_IRQ_guest()
uses the event channel mechanism to send the interrupt to the guest OS;
see the send_guest_pirq() method in event_channel.c.
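A hypothetical Python sketch of that routing decision (all structures here are simplified stand-ins, not Xen's actual data layout):

```python
class Domain:
    """Toy guest domain with a set of pending event-channel ports."""
    def __init__(self):
        self.evtchn_pending = set()

# Maps a physical IRQ to the guest domain and event-channel port bound to it.
irq_bindings = {}

def bind_pirq(irq, domain, port):
    irq_bindings[irq] = (domain, port)

def do_IRQ(irq):
    if irq in irq_bindings:
        # The __do_IRQ_guest() path: instead of running a host handler,
        # mark the bound event channel pending (like send_guest_pirq()).
        domain, port = irq_bindings[irq]
        domain.evtchn_pending.add(port)
        return "guest"
    return "host"  # normal host interrupt handling path

guest = Domain()
bind_pirq(9, guest, port=4)
route = do_IRQ(9)
```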

I installed Xen on two of our servers. The kernel and Xen are recent versions (the platform information can be found here). I have done some tests on this platform; the result is that the CPU performance is nearly 100% while the memory performance is only 90% compared to the physical machine. Details can also be found here. I am satisfied with the CPU performance, but maybe the 10% memory overhead is a bit large. I am wondering whether there is some mistake in my configuration or how to improve it. -- Eric

It is a little confusing on how you have described the interaction between XenStore and a domain. How exactly does a Domain interact with Xenstore i.e. TCP ports, sockets, etc...? Since XenStore resides in ring 3 how does it access the hypervisor itself? Thanks.

That's something I guess can't happen with qemu/bochs etc. In other words: you trade that for speed.

And at first guess the enhanced CPU architecture will have just tags at descriptors and more complicated descriptor access rules to enable more page tables separated and being loaded simultaneously switched/selected on demand and privileges. But then how can it provide applications/OSes existing in different page tables with similar amount of cpu time to run? Maybe someone can summarize the tech a bit and publish it?

one clarification that i need is if the 3% overhead of xen includes the overhead of running multiple identical guest kernels. yes, xen adds 3% overhead, but is there also some duplication when running 3 linux kernels, whether in memory or in processing?

i recently investigated viritualization for the purposes of consolidating, yet keeping partitioned, a linux server & desktop. as there is very little difference between my current linux server & desktop kernels, i would prefer not to duplicate the linux kernel, but merely have different userlands. i am currently testing linux vserver as it allows me to run a single linux kernel, but maintain multiple userland "instances", each "instance" with its own ip address and other features.

granted vserver, chroot, etc does not help when a user wants to run different operating systems (linux & windows), and if full separation between userland images, even down to the kernel level (kernel-level exploits, user-visible features like nfsd, etc), is desired, then xen is the proper tool for that job. heck, give the xen livecd a test drive and marvel at xen's accomplishment.

just wanted to share my holiday weekend's research to help save someone else some time.

we tested it recently. yup, it involves 3% overhead on simple operations, but the overhead is more than 20-30% on disk I/O, network etc.
And sure, the memory pressure/requirements you mentioned are rather big.

I would recommend you take a look at the OpenVZ project as well. It is more mature than vserver. We successfully run 30-50 VPSs on 1GB of RAM with it.

I guess this may still be an issue with xen compared to qemu/bochs. It's not that straightforward, but have a look at the access privileges model behind the rings. Once you gain ring 1 privileges, the userland of the host OS is toast.

I am curious: after VT and Pacifica get in and you can then run windows on xen directly, could you run games, graphics, etc...
I guess it depends what kind of drivers xen would provide or allow access to. Anyone?
I.e. work on linux and windows in tandem.
For example, applications that can't or have not yet been ported to linux would work on windows (such as games, proprietary...) and the rest would be linux.

I've been reading VMware press releases for the last few weeks with zero substance except how they were going to "open" something up. I went to Intel's Developer Forum, spoke with numerous developers from IBM, HP and Intel, and asked them straight up what the deal was. I asked, "Where is the open code?" They all kind of (quietly) said the same thing: VMware is getting freaked out by Xen and wanted some press. In reality, they may document a few more APIs, but this is just a load...

In this article, I wrote about the advantages and disadvantages of the
Xen and VMware virtualization solutions.
One of the Xen advantages I pointed out was that it is
a free and open-source project.

I felt it would be unfair not to mention that VMware
started its Community Source program at the beginning
of this August.

In the article I wrote about this Community Source program: "..it will be providing its partners with access to VMware ESX Server source code"; also, VMware's news release (to which I gave a link) talks about giving source to ***partners***.

I think your comment should be read considering this and in this light.

Why would IBM waste resources on POWER5 support? They already have a rock-solid micropartitioning and virtualization environment on the POWER5 that supports Linux, and one that appears to provide even greater protection across partitions than xen does with domains. I'm running my own distribution on one as I write this, and I'm sold. I'd rather manage a SAN-backed POWER5 installation over a blade server any day.

I can see a big advantage for the PPC970, though, given that you can get JS20 blades for their blade center alongside the HS20 already.