KVM

From ParabolaWiki
Summary
This article covers how to check for KVM support and some KVM-specific notes and features. It does not cover features common to the various emulators that use KVM as a backend; see the related articles for that information.
Related
QEMU
Libvirt
Xen

KVM, Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel. It is similar in purpose to Xen but much simpler to get running. Unlike QEMU, which uses software emulation, KVM relies on the CPU's hardware virtualization extensions (HVM). KVM originally supported the x86 and x86_64 architectures; it has since been ported to S/390, PowerPC and IA-64, and since Linux kernel 3.9 it also supports ARM.

Using KVM, one can run multiple virtual machines running unmodified GNU/Linux or any other operating system (see Guest Support Status for more information). Each virtual machine has its own private virtualized hardware: a network card, disk, graphics card, etc.

Differences among KVM, Xen, VMware, and QEMU can be found at the KVM FAQ.

1 Checking support for KVM

1.1 Hardware support

KVM requires that the virtual machine host's processor has virtualization support (named VT-x for Intel processors and AMD-V for AMD processors). You can check whether your processor supports hardware virtualization with the following command:

$ lscpu

Your processor supports hardware virtualization only if the output contains a Virtualization line naming VT-x or AMD-V.
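
For example, on an Intel CPU with VT-x the relevant line of lscpu output looks something like this (illustrative; on AMD hardware it would read AMD-V):

Virtualization:        VT-x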

You can also run:

$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If nothing is displayed after running that command, then your processor does not support hardware virtualization, and you will not be able to use KVM.

1.2 Kernel support

You can check if necessary modules (kvm and one of kvm_amd, kvm_intel) are available in your kernel with the following command (assuming your kernel is built with CONFIG_IKCONFIG_PROC):

$ zgrep KVM /proc/config.gz
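
The modules are available if the output contains lines like the following (illustrative; =y or =m both indicate support, and only one of the vendor-specific options needs to match your CPU):

CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m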
Note: Parabola kernels provide the appropriate kernel modules to support KVM.

1.3 User access to /dev/kvm

You need to add your user account into the kvm group to use the /dev/kvm device.

# gpasswd -a <login_name> kvm
Note: If you use systemd and are a local user, this is not necessary, as access is now granted by systemd/udev.
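
You can also verify that the device exists and check its owner, group and permissions (illustrative output; the ACL marker and timestamp will differ):

$ ls -l /dev/kvm
crw-rw----+ 1 root kvm 10, 232 Jan  1 00:00 /dev/kvm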

1.4 Loading kernel modules

You need to load the kvm module and one of kvm_amd or kvm_intel, depending on the manufacturer of the VM host's CPU. See Kernel modules#Loading and Kernel modules#Manual module handling for information about loading kernel modules.
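
For example, on a host with an Intel CPU (substitute kvm_amd on AMD hardware):

# modprobe kvm
# modprobe kvm_intel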

If loading kvm_intel or kvm_amd fails while loading kvm succeeds (and lscpu claims that hardware virtualization is supported), check your BIOS settings. Some vendors, especially laptop vendors, disable these processor extensions by default. To determine whether the extensions are missing entirely or merely disabled in the BIOS, check the output of dmesg after the failed modprobe.
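
For example (the exact wording varies between kernel versions), a message such as "kvm: disabled by bios" in the output of the following command indicates that the extensions are present but disabled in the firmware:

$ dmesg | grep -i kvm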

Note: Newer versions of udev should load these modules automatically, so manual intervention is not required.

2 How to use KVM

Note: The userspace tool qemu-kvm has been fully merged with upstream qemu starting with version 1.3.0, so the qemu-kvm package is gone.

See the main article: QEMU.

3 Tips and tricks

Note: See QEMU#Tips and tricks and QEMU#Troubleshooting for general tips and tricks.

3.1 Nested virtualization


On the host, enable the nested feature for kvm_intel:

# modprobe -r kvm_intel
# modprobe kvm_intel nested=1

To make it permanent (see Kernel modules#Setting module options):

/etc/modprobe.d/modprobe.conf
options kvm_intel nested=1

Verify that the feature is activated:

$ systool -m kvm_intel -v | grep nested
    nested              = "Y"
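
Alternatively, the module parameter can be read directly from sysfs (the reported value may be Y or 1 depending on the kernel version):

$ cat /sys/module/kvm_intel/parameters/nested
Y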

Run the guest VM with the following command:

$ qemu-system-x86_64 -enable-kvm -cpu host

Boot the VM and check that the vmx flag is present:

$ grep vmx /proc/cpuinfo

3.2 Live snapshots


A feature called external snapshotting allows one to take a live snapshot of a virtual machine without turning it off. Currently it only works with qcow2 and raw file based images.

Once a snapshot is created, KVM attaches the new snapshot image to the virtual machine as its new block device, storing any new data directly in it, while the original disk image is taken offline so you can easily copy or back it up. Afterwards you can merge the snapshot image into the original image, again without shutting down your virtual machine.

Here's how it works.

List the currently running VMs:

# virsh list --all
Id    Name                           State
----------------------------------------------------
3     archey                            running

List all of its current disk images:

# virsh domblklist archey 
Target     Source
------------------------------------------------
vda        /vms/archey.img

Note the image file properties:

# qemu-img info /vms/archey.img
image: /vms/archey.img
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 2.1G
cluster_size: 65536

Create a disk-only snapshot. The --atomic switch makes sure that the VM is not modified if snapshot creation fails.

# virsh snapshot-create-as archey snapshot1 --disk-only --atomic

List the snapshots if you want to see them:

# virsh snapshot-list archey
Name                 Creation Time             State
------------------------------------------------------------
snapshot1           2012-10-21 17:12:57 -0700 disk-snapshot

Notice the new snapshot image created by virsh and its image properties: it is only a few MiB in size and is linked to the original image as its backing file (the backing chain).

# qemu-img info /vms/archey.snapshot1
image: /vms/archey.snapshot1
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 18M
cluster_size: 65536
backing file: /vms/archey.img

At this point, you can go ahead and copy the original image with cp --sparse=true or rsync -S. Then you can merge the original image back into the snapshot:

# virsh blockpull --domain archey --path /vms/archey.snapshot1

Now that you have pulled the blocks out of the original image, the file /vms/archey.snapshot1 becomes the new disk image; check its disk size to see that it has grown. After that is done, the original image /vms/archey.img and the snapshot metadata can be deleted safely. virsh blockcommit works in the opposite direction of blockpull, but at the time of writing it was still under development in qemu-kvm 1.3 (along with the snapshot-revert feature) and scheduled for a later release.
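
For example, the leftover metadata of the snapshot created above could be removed with a command like the following (a sketch; list the snapshots with virsh snapshot-list first, and note that --metadata leaves the image files themselves untouched):

# virsh snapshot-delete archey snapshot1 --metadata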

This new KVM feature will certainly come in handy for people who like to take frequent live backups without risking corruption of the file system.

3.3 Poor Man's Networking


Setting up bridged networking can be a bit of a hassle sometimes. If the sole purpose of the VM is experimentation, one strategy to connect the host and the guests is to use SSH tunneling.

The basic steps are as follows:

  • Set up an SSH server in the host OS
  • (optional) Create a designated user for the tunneling (e.g. tunneluser)
  • Install SSH in the VM
  • Set up authentication

See SSH for setting up SSH, especially SSH#Forwarding_Other_Ports.

When using the default user network stack, the host is reachable at address 10.0.2.2.


If everything works and you can SSH into the host, simply add something like the following to your /etc/rc.local:

# Reverse tunnel: make the VM's SSH server (port 22) reachable on host port 2213
echo "Starting SSH tunnel"
sudo -u vmuser ssh tunneluser@10.0.2.2 -N -R 2213:127.0.0.1:22 -f
# Local forward: pull host port 2345 (e.g. from another VM) into this VM
echo "Starting random tunnel"
sudo -u vmuser ssh tunneluser@10.0.2.2 -N -L 2345:127.0.0.1:2345 -f

In this example a tunnel is created to the SSH server of the VM and an arbitrary port of the host is pulled into the VM.
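
Once the reverse tunnel is up, you can log into the VM from the host with something like the following (port 2213 comes from the snippet above; vmuser stands for whatever account exists inside the VM):

$ ssh -p 2213 vmuser@localhost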

This is quite a basic way of doing networking with VMs, but it is very robust and should be sufficient most of the time.


3.4 Enabling huge pages


You may also want to enable huge pages to improve the performance of your virtual machine. With an up-to-date Parabola and a running KVM you probably already have everything you need. Check that the directory /dev/hugepages exists; if not, create it. Now we need the right permissions to use this directory. Check whether the group kvm exists and whether you are a member of it; this should be the case if you already have a running virtual machine.

$ getent group kvm
kvm:x:78:USERNAMES

Add to your /etc/fstab:

hugetlbfs       /dev/hugepages  hugetlbfs       mode=1770,gid=78        0 0

Of course the gid must match that of the kvm group. The mode of 1770 allows anyone in the group to create files but not unlink or rename each other's files. Make sure /dev/hugepages is mounted properly:

# umount /dev/hugepages
# mount /dev/hugepages
$ mount | grep huge
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,mode=1770,gid=78)

Now you can calculate how many hugepages you need. Check how large your hugepages are:

$ cat /proc/meminfo | grep Hugepagesize
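
On typical x86_64 systems the output looks like the following (illustrative; the value depends on the architecture and kernel configuration):

Hugepagesize:       2048 kB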

Normally that is 2048 kB, i.e. 2 MB. Say you want to run your virtual machine with 1024 MB of RAM: 1024 / 2 = 512 pages. Add a few extra so we can round this up to 550. Now tell your machine how many hugepages you want:

# echo 550 > /proc/sys/vm/nr_hugepages

If you had enough free memory you should see:

$ cat /proc/meminfo | grep HugePages_Total
HugePages_Total:     550

If the number is smaller, close some applications or start your virtual machine with less memory (number_of_pages x 2):

$ qemu-system-x86_64 -enable-kvm -m 1024 -mem-path /dev/hugepages -hda <disk_image> [...]

Note the -mem-path parameter. This will make use of the hugepages.

Now you can check, while your virtual machine is running, how many pages are used:

$ cat /proc/meminfo | grep HugePages
HugePages_Total:     550
HugePages_Free:       48
HugePages_Rsvd:        6
HugePages_Surp:        0

Now that everything seems to work, you can make the huge page allocation permanent if you like. Add to your /etc/sysctl.conf:

vm.nr_hugepages = 550
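
To apply the value immediately without rebooting (using the same value as above), you can also run:

# sysctl -w vm.nr_hugepages=550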


4 See also

5 Acknowledgement

This wiki article is based on ArchWiki. We may have removed non-FSDG bits from it.