This document explains how to use Xen 4.2 in Arch. It uses the new oxenstored / xl toolstack (which replaces the xend / xm toolstack deprecated in Xen 4.1).

==What is Xen?==

According to the Xen development team:

:"''The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems.''"

The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader of the computer it is installed on, and allows multiple operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "Dom0" (short for "domain 0"), or privileged domain, which in our case runs a Linux kernel (other possible Dom0 operating systems are NetBSD and OpenSolaris). The physical hardware must, of course, be supported by this kernel to run Xen. Once the Dom0 has started, one or more "DomUs" (short for user domains, sometimes called VMs) can be started and controlled from Dom0.

Xen.org provides a [http://wiki.xen.org/wiki/Xen_Overview full overview].

==Types of Virtualization Available with Xen==

===Paravirtual (PV)===

Paravirtualized guests require a kernel with support for Xen built in. This is the default for all recent Linux kernels and some other Unix-like systems. Paravirtualized domUs usually run faster than HVM domains as they do not have to run on emulated hardware.

===Hardware Virtual (HVM)===

For OSes that do not natively support Xen (e.g. Windows), HVM offers full hardware virtualization. To use HVM in Xen, the host system hardware must include either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command on the host system:

 grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run Xen HVM guests. It is also possible that the host CPU supports one of these features, but that the functionality is disabled by default in the system BIOS. To verify this, access the host system's BIOS configuration menu during the boot process and look for an option related to virtualization support. If such an option exists and is disabled, then enable it, boot the system and repeat the above command.

== Obtaining Xen ==

Xen is available from the AUR. The recommended current stable version is [https://aur.archlinux.org/packages.php?ID=14640 Xen 4.2], and the bleeding edge unstable package can be found [https://aur.archlinux.org/packages/xen-hg-unstable/ here]. Both packages provide the Xen hypervisor, the current xl interface and all configuration and support files, including systemd services.

Xen, unlike certain other virtualization systems, relies on a full install of the base operating system. Before attempting to install Xen, your host machine should have a fully operational and up-to-date install of Arch Linux. If you are building a new host from scratch, see the [[Installation_Guide|Installation Guide]] for instructions on installing Arch Linux.

Like all AUR packages, the Xen binaries are built from source. Note that it is possible (but not necessary) to build the package on a separate machine and transfer the xz package over, assuming that the machines share the same architecture (e.g. x86_64). For Xen, an internet connection is needed during its compilation because further source files are downloaded during the process. Xen.org recommends that a host be 64-bit. This requires the 'multilib' repository to be enabled in ''/etc/pacman.conf''.

To build the package you will need the following packages:

 base-devel zlib lzo2 python2 ncurses openssl libx11 yajl
 libaio glib2 bridge-utils iproute gettext
 dev86 bin86 iasl markdown git wget

Optional packages: ocaml ocaml-findlib
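With the appropriate repositories enabled (see the note on 'extra' below), all of the above can be installed in one step; the command simply mirrors the package list and is easily adjusted:

 # pacman -S --needed base-devel zlib lzo2 python2 ncurses openssl libx11 yajl \
       libaio glib2 bridge-utils iproute gettext dev86 bin86 iasl markdown git wget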

You will need to enable the 'extra' repository to get bin86. A tool such as [https://aur.archlinux.org/packages/yaourt/ yaourt] or [https://aur.archlinux.org/packages/packer/ packer] can aid in downloading, compiling and installing dependencies for AUR packages.
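For example, assuming the stable package is simply named ''xen'' in the AUR, yaourt can download, build and install it (and its AUR dependencies) in one step:

 $ yaourt -S xen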

== Configuring Xen ==

The following configuration steps are required once the Xen package is installed.

'''The dom0 host requires:'''

* an entry in the bootloader configuration file
* systemd services to be started at boot time
* a xenfs filesystem mount point
* bridged networking configuration

In addition to these required steps, the current xen.org wiki has a section regarding [http://wiki.xen.org/wiki/Xen_Best_Practices best practices for running Xen]. It includes information on allocating a fixed amount of memory to dom0 and how to dedicate (pin) a CPU core for dom0 use.

=== Bootloader Configuration ===

Xen requires that you boot a special xen kernel (xen.gz) which in turn boots your system's normal kernel, so a new menuentry in grub.cfg is needed. The Xen package provides a grub2 generator file: ''/etc/grub.d/09_xen''. This file can be edited to customize the Xen boot commands, and will add a menuentry to your grub.cfg when the following command is run:
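 # grub-mkconfig -o /boot/grub/grub.cfg

This assumes the standard grub2 setup on Arch, with the generated configuration written to ''/boot/grub/grub.cfg''.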

=== Systemd Services ===

Issue the following commands as root so that the services are started at bootup:

 # systemctl enable xenstored.service
 # systemctl enable xenconsoled.service
 # systemctl enable xendomains.service
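The services can also be started immediately, once the system has been booted under the Xen kernel:

 # systemctl start xenstored.service xenconsoled.service xendomains.service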

=== Xenfs Mountpoint ===

Include the following line in your ''/etc/fstab'':

 none /proc/xen xenfs defaults 0 0
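When running under the Xen kernel, the new entry can be mounted straight away without a reboot:

 # mount /proc/xen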

=== Bridged Networking ===

Previous versions of Xen provided a bridge connection, whereas Xen 4.2 requires that network communications between the guest, the host (and beyond) be set up separately. Both DHCP and static addressing are possible, and the choice should be determined by your network topology. With basic bridged networking, a virtual switch is created in dom0 that every domU is attached to. More complex setups are possible; see the [http://wiki.xen.org/wiki/Xen_Networking Networking] article on the Xen wiki for details.

Netcfg greatly simplifies network configuration and is now included as standard in the ''base'' package. Example configuration files are provided in ''/etc/network.d/examples'' and Xen 4.2 provides scripts for various networking configurations in ''/etc/xen/scripts''.

By default, Xen expects a bridge to exist named xenbr0. To set this up with netcfg, do the following:

 # cd /etc/network.d
 # cp examples/bridge xenbridge-dhcp

Make the following changes to the new xenbridge-dhcp profile:

 INTERFACE="xenbr0"

 BRIDGE_INTERFACE="eth0" # Use the name of the external interface found with the 'ip link' command
 DESCRIPTION="Xen bridge connection"
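Then start the profile and enable it at boot. The commands below assume netcfg's bundled systemd unit is used; adjust them if you start profiles another way:

 # netcfg xenbridge-dhcp
 # systemctl enable netcfg@xenbridge-dhcp.service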

When the bridge profile is up, the output of ''brctl show'' should include the new bridge:

 xenbr0          8000.001a9206c0c0       no              eth0

=== Final Steps ===

Reboot your dom0 host and ensure that the Xen kernel boots correctly and that all settings survive a reboot. A properly set up dom0 should show the following when you run xl list (as root):

 # xl list
 Name                                        ID   Mem VCPUs      State   Time(s)
 Domain-0                                     0   511     2     r-----   41652.9

Of course, the Mem, VCPUs and Time columns will be different depending on machine configuration and uptime. The important thing is that dom0 is listed.

== Using Xen ==

Once the dom0 is fully operational, domUs may be created / imported. Each OS has a slightly different method of installation; see the [http://wiki.xen.org/wiki/Category:Guest_Install Guest Install] page of the Xen wiki for links to instructions.

=== Creating a Paravirtualized (PV) Arch domU ===

This is how to install Arch as a user domain (or VM) on an already-running Xen host. To install Arch ''as'' the Xen host (dom0), see the previous section.

To begin, download the latest install ISO from the nearest mirror: [https://www.archlinux.org/download/ Download page]. Place the ISO file on the dom0 host (it is recommended that its checksum be verified, too).
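For example, assuming the ISO and the ''sha1sums.txt'' file from the same mirror directory have been downloaded side by side, the checksum can be verified with:

 $ sha1sum -c sha1sums.txt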

Create the hard disks for the new domU. This can be done with [[LVM]], raw hard disk partitions or image files. To create a 10GiB blank hard disk file, the following command can be used:

 truncate -s 10G sda.img

This creates a sparse file, which grows (to a maximum of 10GiB) only when data is added to the image. If file IO speed is of greater importance than domain portability, using a [[LVM|Logical Volume]] or [[Partitioning|raw partition]] may be a better choice.
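For instance, an equivalent 10GiB logical volume could be created instead (the volume group name ''vg0'' is only an example):

 # lvcreate -L 10G -n archdomu vg0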

Next, loop-mount the installation ISO. To do this, ensure the directory /mnt exists and is empty, then run the following command (being sure to fill in the correct ISO path):

 # mount -o loop /path/to/iso /mnt

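Now create a configuration file for the installer domain, for example ''/etc/xen/archdomu.cfg''. The sketch below is only a starting point: the kernel and ramdisk paths assume the layout of the loop-mounted install ISO, and the label, disk paths and MAC address must be adjusted as described next:

 # /etc/xen/archdomu.cfg -- example only; adjust paths, label and sizes
 name = "archdomu"
 memory = 512
 # kernel and initramfs taken from the loop-mounted install ISO
 kernel = "/mnt/arch/boot/x86_64/vmlinuz"
 ramdisk = "/mnt/arch/boot/x86_64/archiso.img"
 extra = "archisolabel=ARCH_201301"
 # first disk: the installation target; second disk: the install ISO
 disk = [ "phy:/path/to/partition,sda1,w", "file:/path/to/iso,sdb1,r" ]
 # 00:16:3e is the MAC block reserved for Xen domains
 vif = [ "mac=00:16:3e:xx:xx:xx,bridge=xenbr0" ]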

This file needs to be tweaked for your specific use. Most importantly, the {{ic|1=archisolabel=ARCH_201301}} line must be edited to use the release year/month of the ISO being used. If you want to install 32-bit Arch, change the kernel and ramdisk paths from /x86_64/ to /i686/. The {{ic|"phy:/path/to/partition,sda1,w"}} line must be edited to point to the partition created for the domU. If an image file is being used, the {{ic|phy:}} needs to be changed to {{ic|file:}}. Finally, a MAC address must be assigned. The 00:16:3e MAC block is reserved for Xen domains, so the remaining three octets may be randomly filled in (hex values 0-9 and a-f only). See the xl.cfg man page for more information on what the .cfg file lines do. The AUR package [https://aur.archlinux.org/packages/xen-docs/ xen-docs] will need to be installed to access the man pages.

Create the new domU:

 # xl create -c /etc/xen/archdomu.cfg

The -c option will enter the new domain's console when it is successfully created. At this point, Arch should be installed as usual. The [[Installation Guide]] should be followed. There will be a few deviations, however. The block devices listed in the disks line of the cfg file will show up as {{ic|/dev/xvd*}}. Use these devices when partitioning the domU. After installation and before the domU is rebooted, the following modules must be added to {{ic|/etc/mkinitcpio.conf}}:

 MODULES="xen-blkfront xen-fbfront xen-netfront xen-kbdfront"

Without these modules, the domU will not boot correctly. After saving the edit, rebuild the initramfs with the following command:

 mkinitcpio -p linux

For booting, it is not necessary to install Grub. Xen has a Python-based grub emulator, so all that is needed to boot is a grub.cfg file. (It may be necessary to create the /boot/grub directory first.)
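A minimal ''/boot/grub/grub.cfg'' along the following lines should be enough for pygrub (the exact kernel paths and parameters here are assumptions to adjust):

 menuentry 'Arch Linux' {
     search --no-floppy --fs-uuid --set=root __UUID__
     linux /boot/vmlinuz-linux root=UUID=__UUID__ ro
     initrd /boot/initramfs-linux.img
 }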

This file must be edited to match the UUID of the root partition. From within the domU, run the following command:

 # blkid

Replace all instances of __UUID__ with the real UUID of the root partition (the one that mounts as "/").

Shut down the domU with the {{ic|poweroff}} command. The console will be returned to the hypervisor when the domain is fully shut down, and the domain will no longer appear in the xl domains list. Now the ISO file may be unmounted:

 # umount /mnt

The domU cfg file should now be edited. Delete the "kernel = ", "ramdisk = ", and "extra = " lines and replace them with the following line:

 bootloader = "pygrub"

Also remove the ISO disk from the "disk = " line.
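After these edits, the relevant lines of the cfg file from the earlier sketch would read:

 bootloader = "pygrub"
 disk = [ "phy:/path/to/partition,sda1,w" ]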

The Arch domU is now set up. It may be started with the same line as before:

 # xl create -c /etc/xen/archdomu.cfg

If the domU should be started on boot, create a symlink to the cfg file in /etc/xen/auto and ensure the xendomains service is set up correctly.
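For example (the auto directory may not exist yet):

 # mkdir -p /etc/xen/auto
 # ln -s /etc/xen/archdomu.cfg /etc/xen/auto/archdomu.cfg
 # systemctl enable xendomains.service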

== Common Errors ==

* 'xl list' complains about libxl
- Either you have not booted into the Xen system, or the xen modules listed in the ''xencommons'' script are not installed
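A quick way to confirm that the system was booted under Xen is to inspect ''/proc/xen''; on a correctly booted dom0 the capabilities file reports control_d:

 # cat /proc/xen/capabilities
 control_d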

* ''xl create'' fails
- Check that the guest's kernel is located correctly, and check the pv-xxx.cfg file for spelling mistakes (like using ''initrd'' instead of ''ramdisk'')

* Arch Linux guest hangs with a ctrl-d message
- Press ctrl-d until you get back to a prompt, then rebuild its initramfs as described above

