Introduction

KVM is a full virtualization solution for Linux on x86 (64-bit included) hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko.

Installation

It is possible to install only QEMU and KVM for a very minimal setup, but most users will also want libvirt-daemon or even virt-manager for a GUI. For Debian/stretch, jessie-backports, and newer:

# apt install qemu-kvm libvirt-clients libvirt-daemon-system

For jessie and older:

# apt-get install qemu-kvm libvirt-bin

The libvirt-bin daemon will start automatically at boot time and load the appropriate KVM modules, kvm-amd or kvm-intel, which are shipped with the Linux kernel Debian package. If you intend to create VMs from the command line, install virtinst.

In order to be able to manage virtual machines as a regular user, that user needs to be added to some groups. For Debian/stretch, jessie-backports, and newer:

# adduser <youruser> libvirt
# adduser <youruser> libvirt-qemu

For jessie and older:

# adduser <youruser> kvm
# adduser <youruser> libvirt

You should then be able to list your domains:

# virsh list --all

libvirt defaults to qemu:///session for non-root users, so from <youruser> you'll need to run:

$ virsh --connect qemu:///system list --all

You can use LIBVIRT_DEFAULT_URI to change this.
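For example, to make qemu:///system the default connection for your shell session (adding the line to ~/.profile makes it permanent):

```shell
# Make virsh and other libvirt clients talk to the system-wide
# libvirtd instance by default, instead of qemu:///session.
export LIBVIRT_DEFAULT_URI='qemu:///system'

# A plain "virsh list --all" now targets qemu:///system.
```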

Creating a new guest

The easiest way to create and manage a VM guest is with the Virtual Machine Manager GUI application, virt-manager.

Alternatively, you can create a VM guest from the command line. Below is an example that creates a Squeeze guest named squeeze-amd64:
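A minimal sketch using virt-install (from the virtinst package). The memory size, disk path and size, installer URL, and os-variant value here are assumptions to adapt; in particular, Squeeze installer trees have since moved to archive.debian.org:

```
# virt-install --virt-type kvm --name squeeze-amd64 \
    --ram 512 \
    --disk path=/var/lib/libvirt/images/squeeze-amd64.qcow2,size=8 \
    --location http://deb.debian.org/debian/dists/squeeze/main/installer-amd64/ \
    --os-variant debiansqueeze \
    --network network=default \
    --graphics none --console pty,target_type=serial \
    --extra-args "console=ttyS0"
```

With --graphics none and the console arguments, the Debian installer runs on the serial console in your terminal instead of requiring a VNC viewer.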

Setting up bridged networking

Between VM guests

By default, QEMU uses macvtap in VEPA mode to provide NAT internet access or bridged access to other guests. Unfortunately, this setup does not allow the host to communicate with any guest.

Between the VM host and guests

To allow communication between the VM host and VM guests, you can set up a macvlan bridge on top of a dummy interface, similar to the example below. After the configuration, you can select the interface dummy0 (macvtap) in bridged mode as the network source in the VM guests' configuration.
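A sketch of such a setup in /etc/network/interfaces; the interface names and the address range are assumptions to adapt:

```
# Dummy interface to anchor macvtap guests, plus a macvlan
# interface in bridge mode so the host itself can reach them.
auto dummy0
iface dummy0 inet manual
    pre-up ip link add dummy0 type dummy || true

auto macvlan0
iface macvlan0 inet static
    address 192.168.100.1/24
    pre-up ip link add link dummy0 name macvlan0 type macvlan mode bridge
    post-down ip link del macvlan0
```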

Between the VM host, guests, and the world

In order to allow communication between the host, guests, and the outside world, you can set up a bridge as described on the QEMU page.

For example, you can modify the network configuration file /etc/network/interfaces to attach the Ethernet interface eth0 to a bridge interface br0, similar to the example below. After the configuration, you can select the bridge interface br0 as the network connection in the VM guests' configuration.
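A sketch of such an /etc/network/interfaces configuration, assuming the bridge-utils package is installed and the host gets its address via DHCP:

```
# eth0 carries no address of its own; br0 takes over the
# host's IP configuration and enslaves eth0.
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_waitport 0
```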

Managing VMs from the command-line

You can then use the virsh(1) command to start and stop virtual machines. VMs can be generated using virtinst. For more details see the libvirt page. Virtual machines can also be controlled using the kvm command in a similar fashion to QEMU. Below are some frequently used commands:

Start a configured VM guest "VMGUEST":

# virsh start VMGUEST

Tell the VM guest "VMGUEST" to shut down gracefully:

# virsh shutdown VMGUEST

Force the VM guest "VMGUEST" off in case it hangs, i.e. a graceful shutdown does not work:

# virsh destroy VMGUEST

Managing VM guests with a GUI

On the other hand, if you want to use a graphical UI to manage the VMs, you can use the Virtual Machine Manager virt-manager.

Automatic guest management on host shutdown/startup

Guest behavior on host shutdown/startup is configured in /etc/default/libvirt-guests.

This file specifies whether guests should be shutdown or suspended, if they should be restarted on host startup, etc.
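A pinning stanza of the kind discussed below, placed inside the guest's <domain> element via virsh edit (the vcpu/cpuset values are illustrative, for a 4-core/8-thread host):

```
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='4'/>
</cputune>
```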

where vcpu is the virtual CPU core ID and cpuset is the allocated physical CPU core ID. Adjust the number of vcpupin lines to match the vCPU count, and the cpuset values to reflect the actual physical CPU core allocation. In general, the upper half of the physical CPU core IDs are the hyperthreading siblings, which cannot provide full core performance but have the benefit of increasing the memory cache hit rate. A general rule of thumb for setting cpuset is:

For the first vcpu, assign a cpuset number from the lower half. For example, if the system has 4 cores and 8 threads, the valid cpuset values are 0 to 7, so the lower half is 0 to 3.

For the second vcpu, and every even-numbered vcpu, assign the higher-half sibling of the preceding vcpu's cpuset. For example, if you assigned the first cpuset to 0, the second cpuset should be 4.

For the third vcpu and above, you may need to determine which physical CPU cores share the most memory cache with the first vcpu, as described here, and assign those cpuset numbers to increase the memory cache hit rate.
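The rule of thumb above can be sketched as a small helper. It assumes a common topology where the hyperthread sibling of core i is core i + cores (true on the hypothetical 4-core/8-thread host above; real sibling pairs should be checked in /sys/devices/system/cpu/cpu*/topology/thread_siblings_list), and it ignores the cache-sharing refinement for the third vCPU onward:

```python
def pin_map(vcpus, cores=4):
    """Map each vCPU to a physical CPU id following the rule of thumb:
    even vCPUs walk the lower-half cores, and each odd vCPU gets the
    hyperthread sibling (lower-half id + cores) of the one before it."""
    mapping = {}
    for vcpu in range(vcpus):
        core = (vcpu // 2) % cores       # lower-half core for this pair
        sibling = vcpu % 2 * cores       # 0 for even vCPUs, +cores for odd
        mapping[vcpu] = core + sibling
    return mapping

print(pin_map(4))  # {0: 0, 1: 4, 2: 1, 3: 5}
```

Each returned pair maps directly onto one vcpupin line, e.g. vcpu='1' cpuset='4'.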

Disk I/O

Disk I/O is usually the performance bottleneck due to its characteristics. Unlike CPU and RAM, a VM host may not allocate dedicated storage hardware to a VM; worse, disk is the slowest component of them all. There are two types of disk bottleneck: throughput and access time. A modern hard disk can sustain around 100 MB/s of throughput, which is sufficient for most systems, but it can only provide around 60 transactions per second (tps).

For the VM host, you can benchmark different disk I/O parameters to get the best tps for your disk. Below is an example of disk tuning and benchmarking using fio:
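A sketch of a mixed random read/write benchmark with fio; the file path, size, and job parameters are assumptions to adapt. direct=1 bypasses the page cache so results reflect the disk itself:

```
# fio --name=benchmark --filename=/var/tmp/fio.test --size=256M \
      --rw=randrw --bs=4k --ioengine=libaio --iodepth=16 \
      --direct=1 --runtime=60 --time_based --group_reporting
```

Re-run the job while varying parameters such as the I/O scheduler or iodepth and compare the reported IOPS to find the best combination for your disk.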

For Windows VM guests, you may wish to switch between the slow but cross-platform Windows built-in IDE driver and the fast but KVM-specific VirtIO driver. Consequently, the installation method for Windows VM guests provided below is a little complicated, but it provides a way to install both drivers and use whichever suits your needs. Under virt-manager:

Install the VirtIO driver from the IDE CD-ROM when Windows prompts for a new hardware driver

Shut down the VM guest

Reconfigure the VM guest with the configuration below:

Remove the IDE storage for the Windows OS; DO NOT delete WINDOWS.qcow2

Remove the VirtIO storage for the dummy storage; you can delete DUMMY.qcow2

Remove the IDE storage for the CD-ROM

Add a new VirtIO / VirtIO SCSI storage device and attach WINDOWS.qcow2 to it

Restart the VM guest

Native driver for Linux VM guests

Select VirtIO / VirtIO SCSI storage for the storage containers

Restart the VM guest

VirtIO / VirtIO SCSI storage

VirtIO SCSI storage provides richer features than VirtIO storage when the VM guest is attached to multiple storage devices. The performance is the same if the VM guest has only a single storage device attached.

Disk Cache

Select "None" for disk cache mode

Block dataplane

Edit the VM guest configuration, assuming the VM guest name is "VMGUEST":
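A sketch of what the edit (via virsh edit VMGUEST) might look like: allocate an I/O thread in the domain and bind the VirtIO disk's driver to it. The disk element shown is illustrative; only the <iothreads> element and the iothread attribute are the point here:

```
<domain type='kvm'>
  ...
  <iothreads>1</iothreads>
  ...
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none' iothread='1'/>
    ...
  </disk>
</domain>
```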

If you have configured a bridge network on the CentOS host, please refer to this wiki article on how to make it work on Debian.

Troubleshooting

No network bridge available

virt-manager uses a virtual network for its guests; by default this is routed to 192.168.122.0/24, and you should see it by typing ip route as root.

If this route is not present in the kernel routing table, the guests will fail to connect and you will not be able to complete the creation of a guest.

Fixing this is simple: open virt-manager and go to the "Edit" -> "Host details" -> "Virtual networks" tab. From there you can create a virtual network of your own or attempt to fix the default one. Usually the problem is that the default network has not been started.

Some Windows guests using high-end N-way CPUs may frequently hang or BSOD; this is a known kernel bug that is unfortunately not fixed in Jessie (TBC in Stretch). The workaround below can be applied by adding a <hyperv>...</hyperv> section to the guest configuration via the command virsh edit GUESTNAME:
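A sketch of such a workaround, placed inside the guest's <features> element; these Hyper-V enlightenments (relaxed timing, virtual APIC, paravirtual spinlocks) are the commonly cited mitigation for this class of hang, though the exact set needed may vary:

```
<features>
  ...
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
  </hyperv>
</features>
```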