Xen supports para-virtualized guests as well as fully virtualized guests with para-virtualized drivers. Para-virtualization is faster than full virtualization, but it does not work with non-Linux operating systems or with Linux operating systems that lack the Xen kernel extensions. Xen fully virtualized guests are slower than KVM fully virtualized guests.

* At least 256MB of RAM per guest, plus 256MB for the base OS. At least 756MB is recommended for each guest running a modern operating system. A good rule of thumb is to consider how much memory the operating system normally requires and allocate that much to the virtualized guest.


* Xen host (Domain-0) support requires Fedora 8. Support will return once [[Features/XenPvopsDom0|paravirt_ops]] features are implemented in the upstream kernel.
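As a worked example of the memory rule of thumb above, a quick host-memory budget can be computed as follows (the guest count and per-guest sizes are illustrative assumptions, not from this document):

```shell
# Rough memory budget for a virtualization host, following the rule of
# thumb above: allocate per-guest what the OS normally needs, plus RAM
# reserved for the base OS. All numbers here are illustrative.
guests=3
per_guest_mb=756   # recommended minimum for a modern guest OS
base_mb=256        # reserved for the host (base) OS
total_mb=$((guests * per_guest_mb + base_mb))
echo "${total_mb} MB required"
```
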


==== Additional requirements for para-virtualized guests ====


* Xen. KVM does not support para-virtualization at this time. The <code>kernel-xen</code> package is required with versions of Fedora older than 10.


* Any x86-64 or Intel Itanium CPU, or any x86 CPU with the PAE extensions. Many older laptops (particularly those based on Pentium Mobile / Centrino) do not have PAE support. To determine whether a CPU has PAE extensions, check for the <code>pae</code> flag in <code>/proc/cpuinfo</code>.
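The PAE check amounts to looking for the <code>pae</code> flag in <code>/proc/cpuinfo</code>; a minimal sketch (the sample flags line below is illustrative so the snippet is self-contained; on a real host read the file directly):

```shell
# On a real system: grep pae /proc/cpuinfo
# Here we match against a sample "flags" line instead.
flags="fpu vme de pse tsc msr pae mce cx8 apic"
case " $flags " in
  *" pae "*) echo "PAE supported" ;;
  *)         echo "PAE not supported" ;;
esac
```
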

On some Intel-based systems (usually laptops) the Intel VT extensions are disabled in the BIOS. Enter the BIOS and enable Intel-VT or Vanderpool Technology, which is usually located in the CPU options or Chipset menus.

You can use QEMU software emulation for full virtualization, but software emulation is far slower than virtualization using the Intel VT or AMD-V extensions. QEMU can also emulate other processor architectures such as ARM or PowerPC.


=== Installing the virtualization packages ===


When installing Fedora, the virtualization packages can be installed by selecting '''Virtualization''' in the Base Group in the installer.


For existing Fedora installations, QEMU, KVM, and other virtualization tools can be installed by running the following command:

<pre>
su -c "yum groupinstall 'Virtualization'"
</pre>

Then start the <code>libvirtd</code> service:

<pre>
su -c "service libvirtd start"
</pre>

This will install <code>qemu-kvm</code>, <code>python-virtinst</code>, <code>qemu</code>, <code>virt-manager</code>, <code>virt-viewer</code>, and all needed dependencies. Optional packages in this group are <code>gnome-applet-vm</code> and <code>virt-top</code>.

When using KVM, to display all domains on the local system the command is <code>virsh -c qemu:///system list</code>.


When using Xen, the same command is <code>virsh -c xen:///system list</code>.


Be aware of this subtle variation.


To verify that virtualization is enabled on the system, run the following command, where <URI> is a valid URI that <code>libvirt</code> can recognize (for more details on URIs, see http://libvirt.org/uri.html):

<pre>
$ su -c "virsh -c <URI> list"
 Name       ID   Mem(MiB)   VCPUs   State    Time(s)
 Domain-0   0    610        1       r-----   12492.1
</pre>

On a KVM host, you can also confirm that the KVM kernel modules are loaded:

<pre>
$ lsmod | grep kvm
kvm
kvm_intel
</pre>

The above output indicates that there is an active hypervisor. If virtualization is not enabled, an error appears instead; in that case, ensure the URI is properly specified (see http://libvirt.org/uri.html for details).


{{Admon/note | Note that for the default setup, networking for the guest OS (DomU) is bridged. This means that DomU gets an IP address on the same network as Dom0. If a DHCP server provides addresses, it needs to be configured to give addresses to the guests. Another networking type can be selected by editing <code>/etc/xen/xend-config.sxp</code>}}
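For reference, the bridged default corresponds to lines like the following in <code>/etc/xen/xend-config.sxp</code> (a sketch based on the standard xend configuration format; verify against your installed file):

<pre>
(network-script network-bridge)
(vif-script vif-bridge)
</pre>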


=== Creating a Fedora guest ===


The installation of Fedora guests using anaconda is supported. The installation can be started on the command line with the <code>virt-install</code> program or in the GUI with <code>virt-manager</code>. You will be prompted for the type of virtualization (that is, KVM or Xen, and para-virtualization or full virtualization) during the guest creation process.

# What is the name of your virtual machine? This is the label that will identify the guest OS. This label is used with <code>virsh</code> commands and <code>virt-manager</code> (Virtual Machine Manager).

# How much RAM should be allocated (in megabytes)? This is the amount of RAM to allocate to the guest instance in megabytes (e.g. 256). Installation with less than 256 megabytes is not recommended.

# What would you like to use as the disk (path)? The local path and file name of the file to serve as the disk image for the guest (e.g. /home/joe/xenbox1). This will be exported as a full disk to your guest.

# How large would you like the disk to be (in gigabytes)? The size of the virtual disk for the guest (only asked if the file specified above does not already exist). 4.0 gigabytes is a reasonable size for a "default" install.
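The answers to the questions above map onto <code>virt-install</code> options. The following sketch only assembles and prints a hypothetical invocation (the guest name, sizes, and path are illustrative assumptions; check option spellings against <code>virt-install --help</code> for your version):

```shell
# Build a hypothetical virt-install command line from the four answers
# above. The command is printed, not executed.
name="fedora-guest"                                   # question 1: VM name
ram_mb=512                                            # question 2: RAM in MB
disk="/var/lib/libvirt/images/fedora-guest.img"       # question 3: disk path
size_gb=4                                             # question 4: disk size
cmd="virt-install --name $name --ram $ram_mb --file $disk --file-size $size_gb"
echo "$cmd"
```
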

To create a guest with <code>virt-manager</code>:

# After a connection is opened, click the new icon next to the hypervisor, or right-click on the active hypervisor and select "New" (note: the new icon is going to be improved to make it easier to see).


# A wizard will present the same questions as appear with the <code>virt-install</code> command-line utility (see descriptions above). The wizard assumes that a graphical installation is desired and does not prompt for this option.


# On the last page of the wizard there is a "Finish" button. When this is clicked, the guest OS is provisioned. After a few moments a VNC window should appear. Proceed with the installation as normal.

Guests can be managed on the command line with the <code>virsh</code> utility. The <code>virsh</code> utility is built around the libvirt management API and has a number of advantages over the traditional Xen <code>xm</code> tool.

For a complete list of commands available for use with <code>virsh</code>:

<pre>
su -c "virsh help"
</pre>

Or consult the manual page: <code>man 1 virsh</code>

To save the state of a virtual machine to a file:

<pre>
su -c "virsh save <virtual machine (name | id | uuid)> <filename>"
</pre>

To restore a previously saved state:

<pre>
su -c "virsh restore <filename>"
</pre>

Bugs in the <code>virsh</code> tool should be reported in [http://bugzilla.redhat.com BugZilla] against the 'libvirt' component.


==== Managing guests with qemu-kvm ====


KVM virtual machines can also be managed in the command line using the 'qemu-kvm' command. See <code>man qemu</code> for more details.
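As a sketch, a direct <code>qemu-kvm</code> invocation typically combines a memory size, a disk image, and an install ISO (the paths below are illustrative assumptions; the snippet only prints the command rather than starting a guest):

```shell
# Assemble a hypothetical qemu-kvm command line: 512 MB of RAM, a disk
# image, an installer ISO, booting from the CD-ROM first ("-boot d").
# Printed for inspection, not executed.
cmd="qemu-kvm -m 512 -hda /var/lib/libvirt/images/guest.img -cdrom Fedora.iso -boot d"
echo "$cmd"
```
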


== Troubleshooting virtualization ==


=== SELinux ===


The SELinux policy in Fedora has the necessary rules to allow the use of virtualization. The main caveat to be aware of is that any file-backed disk images need to be in the directory {{filename|/var/lib/libvirt/images}}. This applies both to regular disk images and to ISO images. Block-device-backed disks are already labelled correctly to allow them to pass SELinux checks.


Beginning with [[Releases/11|Fedora 11]], virtual machines under SELinux are isolated from each other with [[Features/SVirt_Mandatory_Access_Control|sVirt]].


=== Log files ===


Virtual machines are created and managed with the graphical <code>virt-manager</code> interface or with the <code>virt-install</code> tool; <code>virt-install</code> logs to {{filename|$HOME/.virtinst/virt-install.log}}.


Logging from <code>virt-manager</code> and <code>virt-install</code> may be increased by setting the environment variable <code>LIBVIRT_DEBUG=1</code>.


See http://libvirt.org/logging.html for further details.


All QEMU command lines executed by <code>libvirt</code> are logged to {{filename|/var/log/libvirt/qemu/$DOMAIN.log}} where <code>$DOMAIN</code> is the name of the guest.


The <code>libvirtd</code> daemon is responsible for handling connections from tools such as <code>virsh</code> and <code>virt-manager</code>. The level and type of logging produced by <code>libvirtd</code> may be modified in {{filename|/etc/libvirt/libvirtd.conf}}.
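For example, debug-level logging to a file can be enabled in {{filename|/etc/libvirt/libvirtd.conf}} with settings along these lines (the values are illustrative; the comments in the file itself document the full syntax):

<pre>
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
</pre>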


There are two log files stored on the host system to assist with debugging Xen related problems. The file {{filename|/var/log/xen/xend.log}} holds the same information reported with the '<code>xm log</code>' command.


The second file, {{filename|/var/log/xen/xend-debug.log}}, usually contains much more detailed information.


When reporting errors, always include the output from both {{filename|/var/log/xen/xend.log}} and {{filename|/var/log/xen/xend-debug.log}}.


If starting fully-virtualized domains (i.e. unmodified guest OSes), there are also logs in {{filename|/var/log/xen/qemu-dm*.log}} which can contain useful information.


Xen hypervisor logs can be seen by running the '<code>xm dmesg</code>' command.


=== Serial console access for troubleshooting and management ===


Serial console access is useful for debugging kernel crashes, and it can be very helpful for remote management. Accessing the serial consoles of Xen kernels or virtualized guests is slightly different from the normal procedure.


==== Host serial console access ====


If the Xen kernel itself has died and the hypervisor has generated an error, there is no way to record the error persistently on the local host. Serial console lets you capture it on a remote host.


The Xen host must be set up for serial console output, and a remote host must exist to capture it. For the console output, set the appropriate options in /etc/grub.conf for a 38400-bps serial console on com1 (i.e. /dev/ttyS0 on Linux). The "sync_console" option works around a problem that can cause hangs with asynchronous hypervisor console output, and "pnpacpi=off" works around a problem that breaks input on the serial console. "console=ttyS0 console=tty" means that kernel errors get logged both on the normal VGA console and on the serial console. Once that is done, install and set up <code>ttywatch</code> to capture the information on a remote host connected by a standard null-modem cable. For example, on the remote host:

<pre>
su -c "ttywatch --name myhost --port /dev/ttyS0"
</pre>

This logs the output from /dev/ttyS0 into the file /var/log/ttywatch/myhost.log.
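Putting the options described above together, the relevant {{filename|/etc/grub.conf}} entries on the Xen host might look like the following sketch (the kernel file names, version strings, and root device are illustrative assumptions):

<pre>
title Fedora Xen (serial console)
        root (hd0,0)
        kernel /xen.gz com1=38400,8n1 console=com1,vga sync_console
        module /vmlinuz-2.6-xen ro root=/dev/VolGroup00/LogVol00 console=ttyS0 console=tty pnpacpi=off
        module /initrd-2.6-xen.img
</pre>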

Para-virtualized guest OSes automatically have a serial console configured and plumbed through to the Domain-0 OS. It can be accessed from the command line using:

<pre>
su -c "virsh console &lt;domain name&gt;"
</pre>

Alternatively, the graphical <code>virt-manager</code> program can display the serial console. Simply display the 'console' or 'details' window for the guest and select 'View -> Serial console' from the menu bar.


==== Fully virtualized guest serial console access ====


Fully-virtualized guest OSes automatically have a serial console configured, but the guest kernel is not configured to use it out of the box. To enable the guest console in a fully virtualized Linux guest, edit /etc/grub.conf in the guest and add 'console=ttyS0 console=tty0'. This ensures that all kernel messages are sent to both the serial console and the regular graphical console. The serial console can then be accessed in the same way as for para-virtualized guests:


<pre>
su -c "virsh console &lt;domain name&gt;"
</pre>


Alternatively, the graphical <code>virt-manager</code> program can display the serial console. Simply display the 'console' or 'details' window for the guest and select 'View -> Serial console' from the menu bar.
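For the grub.conf edit described above, the guest's kernel line might end up looking like this (the version string and root device are illustrative assumptions):

<pre>
kernel /vmlinuz-2.6.27.5-117.fc10.i686 ro root=/dev/VolGroup00/LogVol00 console=ttyS0 console=tty0
</pre>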


=== Accessing data on guest disk images ===


There are two tools which can help greatly in accessing data within a guest disk image: ''lomount'' and ''kpartx''.


{{Admon/caution | Remember never to do this while the guest is up and running, as it could corrupt the filesystem}}


{{admon/note|libguestfs|You can also try the experimental [http://et.redhat.com/~rjones/libguestfs/ libguestfs tools].}}

lomount only works with small disk images and cannot deal with LVM volumes, so for more complex cases, kpartx (from the ''device-mapper-multipath'' RPM) is preferred:


* '''kpartx'''


<pre>
su -c "yum install device-mapper-multipath"
su -c "kpartx -av /dev/xen/guest1"
add map guest1p1 : 0 208782 linear /dev/xen/guest1 63
add map guest1p2 : 0 16563015 linear /dev/xen/guest1 208845
</pre>


Note that this only works for block devices, not for images installed on regular files. To use file images, set up a loopback device for the file first:


<pre>
su -c "losetup -f"
/dev/loop0
su -c "losetup /dev/loop0 /xen/images/fc5-file.img"
su -c "kpartx -av /dev/loop0"
add map loop0p1 : 0 208782 linear /dev/loop0 63
add map loop0p2 : 0 12370050 linear /dev/loop0 208845
</pre>


In this case we have added an image formatted as a default Fedora install, so it has two partitions: one /boot, and one LVM volume containing everything else. They are accessible under /dev/mapper:


<pre>
su -c "ls -l /dev/mapper/ | grep guest1"
brw-rw---- 1 root disk 253,  6 Jun  6 10:32 xen-guest1
brw-rw---- 1 root disk 253, 14 Jun  6 11:13 guest1p1
brw-rw---- 1 root disk 253, 15 Jun  6 11:13 guest1p2
su -c "mount /dev/mapper/guest1p1 /mnt/boot/"
</pre>


To access LVM volumes on the second partition, rescan LVM with <code>vgscan</code> and activate the volume group on that partition (named "VolGroup00" by default) with <code>vgchange -ay</code>:


<pre>
su -c "kpartx -a /dev/xen/guest1"
su -c "vgscan"
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
su -c "vgchange -ay VolGroup00"
  2 logical volume(s) in volume group "VolGroup00" now active
su -c "lvs"
  LV        VG         Attr   LSize   Origin Snap%  Move Log Copy%
  LogVol00  VolGroup00 -wi-a-   5.06G
  LogVol01  VolGroup00 -wi-a- 800.00M
su -c "mount /dev/VolGroup00/LogVol00 /mnt/"
...
su -c "umount /mnt"
su -c "vgchange -an VolGroup00"
su -c "kpartx -d /dev/xen/guest1"
</pre>


{{Admon/caution | Note: '''Always''' deactivate the logical volumes with "vgchange -an", remove the partitions with "kpartx -d", and (if appropriate) delete the loop device with "losetup -d" after performing the above steps. Because the default volume group name for a Fedora install is always the same, it is important to avoid activating two volume groups of the same name at the same time. LVM will cope as best it can, but it is not possible to distinguish between the two groups on the command line. In addition, if the volume group is active on the host and the guest at the same time, it can cause filesystem corruption.}}

If the Troubleshooting section above does not help you to solve your problem, check the list of existing [[Virtualization bugs|virtualization bugs]], and search the archives of the mailing lists in the resources section. If you believe your problem is a previously undiscovered bug, please [[How to debug Virtualization problems|report it]] to Bugzilla.


==== Resources ====


* General virtualization discussion including [http://www.linux-kvm.org/page/Main_Page KVM] and [http://www.nongnu.org/qemu/ QEMU]