{{Out of date|{{Pkg|qemu-kvm}} no longer exists as all of its features have been merged into {{Pkg|qemu}}. Whole page needs update.}}

{{Merge|QEMU|For the same reason as above, I suggest to merge as much as possible into [[QEMU]]. It's not clear if things in this article are KVM-specific or can be applied to plain QEMU too.}}

{{Article summary start}}
{{Article summary text|This article covers checking for KVM support and some KVM-specific notes, features etc. It does not cover features common to multiple emulators using KVM as a backend. You should see related articles for such information.}}
{{Article summary heading|Related}}
{{Article summary wiki|QEMU}}
{{Article summary wiki|Libvirt}}
{{Article summary wiki|VirtualBox}}
{{Article summary wiki|Xen}}
{{Article summary wiki|VMware}}
{{Article summary end}}

'''KVM''', Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel. It is similar to [[Xen]] in purpose but much simpler to get running. Unlike native [[QEMU]], which uses emulation, KVM is a special operating mode of QEMU that uses CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization via a kernel module. KVM originally supported x86 and x86_64 architectures and has been ported to S/390, PowerPC, IA-64, and ARM (since Linux 3.9).

Using KVM, one can run multiple virtual machines running unmodified GNU/Linux, Windows, or any other operating system. (See [http://www.linux-kvm.org/page/Guest_Support_Status Guest Support Status] for more information.) Each virtual machine has private virtualized hardware: a network card, disk, graphics card, etc.

Differences among KVM, Xen, VMware, and QEMU can be found at the [http://www.linux-kvm.org/page/FAQ#General_KVM_information KVM FAQ].

KVM is used alongside [[Libvirt]] for easier management via the {{ic|virsh}} command set; see [[Libvirt]] for more details.
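
For example, once a guest has been defined through libvirt, it can be listed and started with commands such as the following (the domain name ''myguest'' is only a placeholder):

 $ virsh list --all
 $ virsh start myguest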

== Checking support for KVM ==

=== Hardware support ===

KVM requires that the virtual machine host's processor has virtualization support (named VT-x for Intel processors and AMD-V for AMD processors). You can check whether your processor supports hardware virtualization with the following command:

$ lscpu

Your processor supports virtualization only if there is a line telling you so.
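
For example, on a host with Intel VT-x the {{ic|lscpu}} output contains a line similar to the following (the exact wording may differ between versions):

 $ lscpu | grep -i virtualization
 Virtualization:        VT-x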

You can also run:

$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If nothing is displayed after running that command, then your processor does '''not''' support hardware virtualization, and you will '''not''' be able to use KVM.

=== Kernel support ===

You can check if necessary modules ({{ic|kvm}} and one of {{ic|kvm_amd}}, {{ic|kvm_intel}}) are available in your kernel with the following command (assuming your kernel is built with {{ic|CONFIG_IKCONFIG_PROC}}):

$ zgrep KVM /proc/config.gz
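
On the stock kernel the relevant options are typically built as modules, so the output should contain lines similar to the following (the exact set of lines depends on the kernel configuration):

 CONFIG_KVM=m
 CONFIG_KVM_INTEL=m
 CONFIG_KVM_AMD=m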

=== Loading kernel modules ===

You need to load the {{ic|kvm}} module and one of {{ic|kvm_amd}} or {{ic|kvm_intel}}, depending on the manufacturer of the VM host's CPU. See [[Kernel modules#Loading]] and [[Kernel modules#Manual module handling]] for information about loading kernel modules.
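
For example, to load the modules manually on a host with an Intel CPU (use {{ic|kvm_amd}} instead on an AMD host):

 # modprobe kvm
 # modprobe kvm_intel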

If modprobing {{Ic|kvm_intel}} or {{Ic|kvm_amd}} fails but modprobing {{Ic|kvm}} succeeds (and {{ic|lscpu}} claims that hardware acceleration is supported), check your BIOS settings. Some vendors (especially laptop vendors) disable these processor extensions by default. To determine whether there is no hardware support or whether the extensions are merely disabled in the BIOS, check the output of {{Ic|dmesg}} after the failed modprobe.
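
For example, when the extensions are disabled in the BIOS of an Intel host, the kernel log typically contains a message along these lines:

 $ dmesg | grep -i kvm
 kvm: disabled by bios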

{{Note|Newer versions of [[udev]] should load these modules automatically, so manual intervention is not required.}}

== How to use KVM ==

As the {{ic|qemu-kvm}} package has been merged into {{Pkg|qemu}}, see the main article [[QEMU]], and especially the section [[QEMU#Enabling KVM|Enabling KVM]].
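
In short, KVM acceleration is requested by passing the {{ic|-enable-kvm}} option to QEMU, for example (the image name and memory size are only placeholders):

 $ qemu-system-x86_64 -enable-kvm -m 1024 disk_image.img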

== Tips and tricks ==

{{Note|See [[QEMU#Tips and tricks]] and [[QEMU#Troubleshooting]] for general tips and tricks.}}

=== Nested virtualization ===

{{Expansion|Is it possible also with {{ic|kvm_amd}}?}}

On the host, enable the nested feature for {{ic|kvm_intel}}:

 # modprobe -r kvm_intel
 # modprobe kvm_intel nested=1

To make it permanent (see [[Kernel modules#Setting module options]]):

{{hc|/etc/modprobe.d/modprobe.conf|<nowiki>
options kvm_intel nested=1
</nowiki>}}
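
To verify that the option took effect, you can inspect the module parameter; it should read {{ic|Y}} (or {{ic|1}} on older kernels):

 $ cat /sys/module/kvm_intel/parameters/nested
 Y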

Run the guest VM with the following command:

$ qemu-system-x86_64 -enable-kvm -cpu host

Boot the VM and check that the {{ic|vmx}} flag is present:

$ grep vmx /proc/cpuinfo

=== Enabling KSM ===

Kernel Samepage Merging (KSM) is a feature of the Linux kernel introduced in the 2.6.32 kernel. KSM allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. For KVM, the KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.

There should be a {{ic|/sys/kernel/mm/ksm/}} directory containing several files. You can turn KSM on or off by echoing a {{ic|1}} or {{ic|0}} (respectively) to {{ic|/sys/kernel/mm/ksm/run}}:

 # echo 1 > /sys/kernel/mm/ksm/run

Or set it up by creating the file {{ic|/etc/tmpfiles.d/ksm.conf}}:

 w /sys/kernel/mm/ksm/run - - - - 1

If KSM is running, and there are pages to be merged (i.e. more than one similar VM is running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. From the kernel documentation in {{ic|Documentation/vm/ksm.txt}}:

 The effectiveness of KSM and MADV_MERGEABLE is shown in /sys/kernel/mm/ksm/:

 pages_shared - how many shared unswappable kernel pages KSM is using
 pages_sharing - how many more sites are sharing them i.e. how much saved
 pages_unshared - how many pages unique but repeatedly checked for merging
 pages_volatile - how many pages changing too fast to be placed in a tree
 full_scans - how many times all mergeable areas have been scanned

 A high ratio of pages_sharing to pages_shared indicates good sharing, but
 a high ratio of pages_unshared to pages_sharing indicates wasted effort.
 pages_volatile embraces several different kinds of activity, but a high
 proportion there would also indicate poor use of madvise MADV_MERGEABLE.

An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory:

{{hc|# grep . /sys/kernel/mm/ksm/*|
/sys/kernel/mm/ksm/full_scans:151
/sys/kernel/mm/ksm/max_kernel_pages:246793
/sys/kernel/mm/ksm/pages_shared:92112
/sys/kernel/mm/ksm/pages_sharing:131355
/sys/kernel/mm/ksm/pages_to_scan:100
/sys/kernel/mm/ksm/pages_unshared:123942
/sys/kernel/mm/ksm/pages_volatile:1182
/sys/kernel/mm/ksm/run:1
/sys/kernel/mm/ksm/sleep_millisecs:20
}}

=== Bridged networking ===

Bridged networking is used when you want your VM to be on the same network as your host machine. This will allow it to get a static or DHCP IP address on your network, and then you can access it using that IP address from anywhere on your LAN. The preferred method for setting up bridged networking for KVM is to use the {{Pkg|netcfg}} package. You will also need to install {{Pkg|bridge-utils}}.
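
Once a bridge (here assumed to be named {{ic|br0}}) has been configured, a guest can be attached to it with QEMU's bridge helper, for example (a minimal sketch; the image name is a placeholder, and the bridge typically has to be whitelisted in {{ic|/etc/qemu/bridge.conf}}):

 $ qemu-system-x86_64 -enable-kvm -m 1024 -net nic,model=virtio -net bridge,br=br0 disk_image.img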

For more information, see: [[Netcfg Tips#Configuring a bridge for use with virtual machines (VMs)]]

You can follow this page to configure the bridge: [[Libvirt#Bridged Networking]]

==== Additional notes ====

Other information can be found here: [[QEMU#Tap Networking with QEMU]] and [[QEMU#Networking with VDE2]].

If you are using {{Pkg|iptables}}, it is recommended for performance and security reasons to disable the firewall on the bridge:

 # cat >> /etc/sysctl.conf <<EOF
 net.bridge.bridge-nf-call-ip6tables = 0
 net.bridge.bridge-nf-call-iptables = 0
 net.bridge.bridge-nf-call-arptables = 0
 EOF
 # sysctl -p /etc/sysctl.conf

See the [http://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl reports errors about non-existing files during boot, make the {{ic|bridge}} module load at boot. See [[Kernel_modules#Loading]].

Alternatively, you can configure {{Pkg|iptables}} to allow all traffic to be forwarded across the bridge by adding a rule like this:

 -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

=== Live snapshots ===

A feature called external snapshotting allows one to take a live snapshot of a virtual machine without turning it off. Currently it only works with qcow2 and raw file based images.

Once a snapshot is created, KVM attaches that new snapshotted image to the virtual machine, which is used as its new block device, storing any new data directly to it while the original disk image is taken offline so that you can easily copy or back it up. After that you can merge the snapshotted image to the original image, again without shutting down your virtual machine.
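
For example, an external, disk-only snapshot of the domain used in the example below could be created with something like the following ({{ic|virsh}} is provided by {{Pkg|libvirt}}; exact options may vary between versions):

 # virsh snapshot-create-as --domain archey --name snapshot1 --disk-only --atomic --diskspec vda,snapshot=external,file=/vms/archey.snapshot1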

At this point, you can go ahead and copy the original image with {{ic|1=cp --sparse=always}} or {{ic|rsync -S}}. Then you can merge the original image back into the snapshot:

 # virsh blockpull --domain archey --path /vms/archey.snapshot1

Now that you have pulled the blocks out of the original image, the file {{ic|/vms/archey.snapshot1}} becomes the new disk image. Check its disk size to see what it means. After that is done, the original image {{ic|/vms/archey.img}} and the snapshot metadata can be deleted safely. The {{ic|virsh blockcommit}} command would work in the opposite direction to {{ic|blockpull}}, but it seems to be currently under development in qemu-kvm 1.3 (including the snapshot-revert feature), scheduled to be released sometime next year.

This new feature of KVM will certainly come in handy for people who like to take frequent live backups without risking corruption of the file system.

=== Poor Man's Networking ===

{{Merge|QEMU|This section is not KVM-specific, it's generally applicable to all QEMU VMs.}}

Setting up bridged networking can be a bit of a hassle sometimes. If the sole purpose of the VM is experimentation, one strategy to connect the host and the guests is to use SSH tunneling.

In this example a tunnel is created to the SSH server of the VM, and an arbitrary port of the host is pulled into the VM.
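
A minimal sketch of the idea, assuming user-mode networking and a running SSH server in the guest (all names and port numbers are placeholders): forward a host port to the guest's SSH port when starting the VM, then use that connection to pull a host port into the guest:

 $ qemu-system-x86_64 -enable-kvm -m 1024 -net nic,model=virtio -net user,hostfwd=tcp::2222-:22 disk_image.img
 $ ssh -p 2222 -R 8000:localhost:8000 user@localhost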

This is a quite basic strategy to do networking with VMs. However, it is very robust and should be quite sufficient most of the time.

{{Accuracy|Isn't this option enough? I think it should have the same effect: {{ic|-redir tcp:2222:10.0.2.15:22}} (it redirects port 2222 from host to 10.0.2.15:22, where 10.0.2.15 is guest's IP address.}}

=== Enabling huge pages ===

{{Accuracy|With systemd, {{ic|hugetlbfs}} is mounted on {{ic|/dev/hugepages}} by default, but with mode 0755 and root's uid and gid.}}

{{Merge|QEMU|{{Pkg|qemu-kvm}} no longer exists as all of its features have been merged into {{Pkg|qemu}}. After the above issue is cleared, I suggest merging this section into [[QEMU]].}}

You may also want to enable hugepages to improve the performance of your virtual machine.

With an up-to-date Arch Linux and a running KVM you probably already have everything you need. Check if you have the directory {{ic|/dev/hugepages}}. If not, create it.

Now we need the right permissions to use this directory. Check that the group {{ic|kvm}} exists and that you are a member of it. This should be the case if you already have a running virtual machine.

{{hc|$ getent group kvm|
kvm:x:78:USERNAMES
}}

Add to your {{ic|/etc/fstab}}:

 hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=78 0 0

Of course the gid must match that of the {{ic|kvm}} group. The mode of {{ic|1770}} allows anyone in the group to create files but not unlink or rename each other's files. Make sure {{ic|/dev/hugepages}} is mounted properly:
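
A quick way to verify the mount (the options shown in the output may differ on your system):

{{hc|$ mount <nowiki>|</nowiki> grep huge|
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,mode=1770,gid=78)
}}

If it is not mounted yet, {{ic|# mount /dev/hugepages}} will mount it according to the new fstab entry.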

Now you can calculate how many hugepages you need. Check how large your hugepages are:

 $ cat /proc/meminfo | grep Hugepagesize

Normally that should be 2048 kB ≙ 2 MB. Let's say you want to run your virtual machine with 1024 MB. 1024 / 2 = 512. Add a few extra so we can round this up to 550. Now tell your machine how many hugepages you want:

 # echo 550 > /proc/sys/vm/nr_hugepages

If you had enough free memory you should see:

{{hc|$ cat /proc/meminfo <nowiki>|</nowiki> grep HugePages_Total|
HugePages_Total: 550
}}

If the number is smaller, close some applications or start your virtual machine with less memory (number_of_pages x 2):
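
For example, to start a guest with 1024 MB of memory backed by the hugepages mount (a sketch; the image name is a placeholder and any other options are up to you):

 $ qemu-system-x86_64 -enable-kvm -m 1024 -mem-path /dev/hugepages disk_image.img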
