Linux virtualization with QEMU/KVM and libvirt

The Linux kernel features the Kernel-based Virtual Machine (KVM) – a hardware-assisted virtualization infrastructure that effectively turns the kernel into a type-1 hypervisor. QEMU is an emulator/virtualizer that, when used together with KVM, allows you to run virtual machines at near-native speed. Combine these with the libvirt library – which, apart from its programming API, comes with management tools such as virsh (the libvirt shell) and the virt-manager GUI – and virtual machine management becomes a breeze. This stack is used by many successful open-source and commercial projects like OpenStack or CC1.

In this post I’ll describe how to get all of these going, showing you how easy it is to create and run virtual machines that are almost as fast as the host hardware itself, using professional-grade tools – QEMU/KVM and libvirt.

Requirements

Linux KVM requires a CPU with hardware virtualization extensions. For a standard x86 PC, you need a CPU that has either the Intel VT-x or AMD-V extensions. You can check whether your CPU supports these by looking at the /proc/cpuinfo file or by issuing the ‘lscpu’ command. Grepping /proc/cpuinfo for the “vmx” or “svm” flags will tell you if your CPU features hardware-assisted virtualization:
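
(vmx is the Intel flag, svm the AMD one; no output at all means the extensions are missing or disabled by the firmware)

    $ grep -E -c '(vmx|svm)' /proc/cpuinfo
    4

A non-zero count – one hit per logical CPU – means the flag is present.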

The lscpu command will give you more concise output:
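
(on an Intel box it prints something like the line below; AMD CPUs report AMD-V instead)

    $ lscpu | grep Virtualization
    Virtualization:        VT-x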

Look for the “Virtualization” entry with a “VT-x” or “AMD-V” value.

However, be advised that sometimes these features won’t show up, as they can be disabled in the BIOS setup and must be explicitly enabled first. Unfortunately, I have seen a few PCs with hardware virtualization permanently disabled by the firmware, without any option to re-enable it, even though the CPU itself supports it.

Fortunately, in this case, both commands showed that my i5-2520M CPU has hardware-assisted virtualization enabled.

The next thing to check is whether KVM support is enabled in your kernel. Among all of the KVM-related configuration options, the most important for you are CONFIG_KVM and either CONFIG_KVM_INTEL or CONFIG_KVM_AMD, depending on your processor. Most modern Linux distributions already have these enabled, so you only have to load the kvm and kvm_intel (or kvm_amd) modules into the kernel. If the modules are not present, you have to recompile your kernel with these options enabled.
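
A minimal check and module load could look like this (use kvm_amd instead of kvm_intel on AMD hardware):

    $ lsmod | grep kvm              # see whether the modules are already loaded
    $ sudo modprobe kvm_intel       # pulls in the kvm module as a dependency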

Now, if you’re sure your system meets the requirements listed above, then you’re good to go with your ultra-fast virtualization :). If your system does not support hardware-assisted virtualization, don’t worry! For newly created machines (domains) QEMU will fall back to emulation if KVM is not available. However, emulated virtual machines will be very, very slow. Nonetheless, you can still follow this tutorial – just pay attention to the messages from libvirt and QEMU.

If you have installed libvirt already (see the Software components section), you may want to run the “virt-host-validate” command to check your host’s virtualization setup:
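
(the exact list of checks depends on your libvirt version)

    $ virt-host-validate
      QEMU: Checking for hardware virtualization     : PASS
      QEMU: Checking for device /dev/kvm             : PASS
      QEMU: Checking for device /dev/vhost-net       : PASS
       LXC: Checking for Linux >= 2.6.26             : PASS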

Software components

We can now install the required virtualization software. We’re gonna need:

  • QEMU – a processor emulator and virtualizer
  • libvirt – an API library, a daemon and a few userspace tools for controlling virtualization services, like virsh
  • virt-manager – Virtual Machine Manager GUI application for virtual machine management.

These packages should pull in some more software as dependencies, like virtual Ethernet support, a virtual terminal emulator, open-source BIOS images, graphical console clients, etc.

The following packages are not essential for running virtual machines, but they are very handy, so I really recommend installing them according to your needs:

  • ebtables – for NAT networking for your virtual machines
  • dnsmasq – for the DHCP service
  • bridge-utils – for the bridged networking feature
  • samba – for file sharing, especially for MS Windows guests

Now that the required software is installed, let’s get down to business and set up our first virtual machine using the QEMU/KVM hypervisor together with Virtual Machine Manager and virsh – the libvirt shell. As a side note, libvirt can connect to other hypervisors as well, like Xen, VMware, VirtualBox and even Microsoft Hyper-V, among others.

Libvirt concepts

We have to explain the concepts used by libvirt first. According to libvirt, a hypervisor is a layer of software that virtualizes the host machine – a node – and allows for creating and running virtual machines with a different configuration than the node itself. A virtual machine together with the operating system it runs (or the subsystem, in the case of the container backend) constitutes a domain. Libvirt defines storage- and network-related concepts as well. A storage pool defines a place where storage volumes – for example disk images – can be created and stored.

Of course, libvirt manages networking as well. Various bridging, routing, NAT and QoS parameters can be set; even a physical host interface can be handed to a guest with PCI passthrough. Each libvirt installation sets up a NAT-based network called “default”. This is an excellent choice for laptops: the domains will use NAT and will reach the external network regardless of how the laptop itself is connected. This network also provides a DHCP server for the sake of convenience. Of course, you can add and/or modify networks at any time.
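
You can inspect the definition of the default network with virsh; on a typical installation it looks something like this (UUID and MAC address omitted here):

    $ virsh net-dumpxml default
    <network>
      <name>default</name>
      <forward mode='nat'/>
      <bridge name='virbr0' stp='on' delay='0'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254'/>
        </dhcp>
      </ip>
    </network>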

Libvirt uses XML files for configuration and many virsh commands need either an XML file or a set of parameters.

The XML format documentation is readily available on the libvirt site. You will get a sense of the enormous number of options libvirt supports just by scrolling through it :). And if you need some specific configuration option for your setup – this is the place to go.

The shell way – using virsh and friends for deploying and managing virtual domains

As always, I’ll be using an Arch Linux host on a Dell E6320 laptop. In this example, we’re going to install Debian 8 (Jessie) on our virtual machine.

A little side note should go here: libvirt and QEMU/KVM combined provide a huge number of configuration options, really. For the purpose of this article, I will limit the options used to the minimum needed to deploy and use hardware-assisted virtualization domains.

Deploying virtual machines

Let’s start the libvirtd daemon first, if you (or your system’s install scripts ;)) haven’t done so already:
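
(on a systemd-based distribution such as Arch)

    $ sudo systemctl start libvirtd
    $ sudo systemctl enable libvirtd    # start it automatically on every boot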

We have it enabled as well, so it will start automatically on every boot. Now, let’s bring up virsh and begin creating our first virtual machine. You need administrator privileges to do that (this can be changed, of course – see both the libvirt and your system manuals). Virsh has internal help for each of its 222 commands, as well as bash-like TAB completion, although the completion is limited to commands and switches only. Most of the configuration commands accept XML files as well as a list of parameters; we’ll use the parameter-list versions in this tutorial.
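
This is the virsh prompt (the exact banner may differ slightly between libvirt versions):

    $ sudo virsh
    Welcome to virsh, the virtualization interactive terminal.

    Type:  'help' for help with commands
           'quit' to quit

    virsh #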

Let’s see if we are already connected to a correct hypervisor:
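
(the uri command prints the URI of the current connection)

    virsh # uri
    qemu:///system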

As you can see, libvirt has connected to QEMU/KVM by default. You can connect (or reconnect) to a hypervisor with the “connect” command and the correct URI parameter, as seen above.

Now, let’s see if there are any storage pools defined already:
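
(output formatting may differ slightly between libvirt versions)

    virsh # pool-list --all
     Name                 State      Autostart
    -------------------------------------------
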

Of course, this being a fresh installation, no storage pools are defined, so let’s define some. We’re going to use directory-based pools, but this is not the only pool type available. For example, if you happen to have an LVM setup on your host, you may as well use a volume group as a storage pool – a good and often-used configuration. See the libvirt documentation for more information on pool types.

I want my virtual machine images to reside in the ~/virtual_machines directory, and I have some operating system ISO images in the ~/isos directory as well. Please note that the installation ISO image will be set upon machine deployment – it doesn’t need to be stored in a defined pool; that pool is made just as an example. Let’s now create these pools with the “pool-define-as” command and activate them with “pool-start”:
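
Assuming my home directory is /home/cristos and calling the ISO pool simply “isos”, the commands look like this:

    virsh # pool-define-as machines dir --target /home/cristos/virtual_machines
    Pool machines defined

    virsh # pool-define-as isos dir --target /home/cristos/isos
    Pool isos defined

    virsh # pool-start machines
    Pool machines started

    virsh # pool-start isos
    Pool isos started

    virsh # pool-autostart machines
    Pool machines marked as autostarted

    virsh # pool-list --all
     Name                 State      Autostart
    -------------------------------------------
     isos                 active     no
     machines             active     yes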

The pool-list command lists only active pools, so to list every defined pool in the system, the ‘--all‘ switch must be used. We have also marked the “machines” pool as autostart. This will activate the pool on each libvirtd start (e.g. on each host restart); as we’re going to create volumes there, that is quite a desirable thing.

With the pools created, we now have to create a volume for our new virtual machine. This step may be omitted, as the volume may be created during domain installation later.
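
Creating the 10 GiB qcow2 volume described below could look like this:

    virsh # vol-create-as machines sinclair 10G --format qcow2
    Vol sinclair created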

We have created a 10 GiB volume named “sinclair” in the “machines” storage pool. The name reflects the hostname of the machine we are creating. qcow2 is the QEMU Copy-On-Write version 2 format which, as the name suggests, optimizes disk space allocation (space is allocated only when data is actually written – right after creation the image is about 200 kB). Moreover, it supports multiple snapshots, full image encryption and zlib compression.

That is all the setup needed to install a new, virtualized operating system. We now leave virsh and use another handy tool – virt-install – to create the virtual machine. As libvirt and virt-install handle a huge number of options, I won’t describe them all and will try to use as few options as possible to create a virtual machine that runs at near-native speed. The minimum required options for virt-install are the name of the machine, the amount of memory at boot, a storage option and an install option, but we need to do some parameter tweaking to speed the machine up.

So far, after reading the virt-install man pages, I came up with the following set of options:

  • --connect=qemu:///system – connect to the QEMU hypervisor
  • --virt-type=kvm – use KVM
  • --name=sinclair – set the virtual machine (libvirt domain) name
  • --memory=1024 – use 1024 MiB of memory
  • --vcpus=4 – set the number of virtualized CPUs (the maximum can be set with the maxvcpus sub-option)
  • --cpu=host – expose the full CPU feature set to the virtual machine
  • --disk=virtual_machines/sinclair,bus=virtio – set the virtual machine storage
  • --cdrom=/home/cristos/isos/debian-8.0.0-amd64-netinst.iso – use a CD-ROM with the installation medium ISO image
  • --network=default,model=virtio – connect to the “default” network provided by libvirt

Apart from the obvious ones, some of the options require a bit of explanation. The cpu=host option exposes the full CPU feature set to the guest. This maximizes performance, but will cause issues when migrating to a host with a different CPU feature set. That’s OK for me, as I don’t plan to migrate this domain anywhere.

The virtio drivers are used for the disk and network subsystems. Virtio is a paravirtualized driver pack for guest operating systems. A virtualized system that runs atop the QEMU/KVM hypervisor would otherwise use fully emulated disk and network devices, which are slow. To achieve greater I/O throughput for both, the paravirtualized drivers need to be installed on the guest. These drivers communicate directly with the hypervisor (and, eventually, with the host operating system kernel) to achieve higher throughput than the fully emulated devices. The drivers are available for Linux guests (integrated into the mainline kernel since v2.6.25), various *BSDs (OpenBSD, FreeBSD, NetBSD, DragonFly and others) and even Microsoft Windows guests.

Now, let’s launch the virtual machine installer:
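
Putting the options from the list above together (run it from a regular shell in your home directory, so that the relative disk path resolves to ~/virtual_machines/sinclair):

    $ virt-install --connect=qemu:///system \
        --virt-type=kvm \
        --name=sinclair \
        --memory=1024 \
        --vcpus=4 \
        --cpu=host \
        --disk=virtual_machines/sinclair,bus=virtio \
        --cdrom=/home/cristos/isos/debian-8.0.0-amd64-netinst.iso \
        --network=default,model=virtio

virt-install starts the domain and begins the installation; since virt-viewer is not installed on this host, it will only warn that it cannot launch a graphical console – which is what the next paragraph is about.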

I don’t have virt-viewer installed, so the graphical console won’t show up automatically. However, there is a SPICE server running for that machine, so you can connect to it with a SPICE client. First, find the address the SPICE server listens on:
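
(one way, assuming the default graphics setup created by virt-install, is the domdisplay command – the port number may differ on your host)

    virsh # domdisplay sinclair
    spice://127.0.0.1:5900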

Now, run the SPICE client:
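
(I’m using the classic spicec client here with the address found above; option names differ slightly between SPICE clients, so check your client’s manual)

    $ spicec --host 127.0.0.1 --port 5900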

You should now have an application window open with the display of the virtual machine. If you click inside it, the mouse and keyboard get grabbed – press Shift-F12 to release them. You can also toggle fullscreen operation with Shift-F11. These shortcuts can be redefined with the SPICE client’s --hotkey option. See the spicec manual for details.

OK, the machine is running and we have a graphical console for it, so now we can proceed with installing the operating system (Debian 8 in this particular case). Devices that use virtio drivers may be named differently (e.g. disks show up as vda instead of sda) but, apart from that, everything else, including the installation process, is the same as on a physical machine.

That’s it! We have just deployed a brand-new domain that is virtualized with hardware-assisted virtualization extensions and uses paravirtualized drivers! We have barely scratched the surface, though – the plethora of configuration options needs some time to dive into. Happily, the domain configuration can be adjusted at any time and will be applied after a domain restart. So read on, as there’s a management section coming 🙂

Managing virtual machines

Let me show you some basic domain management commands. This list is by no means exhaustive – it is only meant to give you a smooth start with libvirt-based virtual machine management.

As you may have seen so far, each virsh command can be executed from within the virsh shell or straight from the bash prompt – whatever floats your boat. Please look up the commands and their optional switches in the virsh help, as this list is far from complete.

To start the machine after installation, type:
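
(using the name of the domain we created earlier)

    virsh # start sinclair
    Domain sinclair started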

You can now connect to it using a SPICE client, as presented above. If you configured your machine with a console, add the ‘--console‘ option to get automatically connected to the started domain’s console.

To gracefully stop the machine, just type:
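
(again, sinclair is our example domain)

    virsh # shutdown sinclair
    Domain sinclair is being shutdown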

This is equivalent to an ACPI shutdown triggered by pressing the power button (by default), so the guest needs to understand ACPI events, i.e. it needs to have ACPI support running. You can specify the shutdown mode with the ‘--mode‘ switch.

If your machine becomes unresponsive, you can shut it down forcibly (the equivalent of ripping the power cord out 😉 ) with:
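
(this does not touch the domain’s configuration or storage – it only kills the running instance)

    virsh # destroy sinclair
    Domain sinclair destroyed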

Libvirt with QEMU/KVM lets you create snapshots of a domain. Three types of snapshots are possible:

  • memory state (or VM state) – only the memory and other resource states are saved. If the disk contents change after the snapshot is taken, restoring it is VERY likely to cause data corruption. This is done by the ‘save‘ and ‘restore‘ commands (see below).
  • disk snapshot – the contents of one (or more) disks are saved and can be restored later. This kind of snapshot can be internal – where a single, original qcow2 file holds both the original and the delta data – or external – where deltas are kept in a separate qcow2 image. Internal snapshots are a handy way of keeping disk data in one file, so moving the snapshot to another machine is easy. External snapshots, on the other hand, are useful for incremental backups – for example before an update – so you can easily revert in case of failure.
  • system snapshot – a combination of the two above. This behaves similarly to hibernation: all network connections will be reset (timed out), obviously, but all other data is preserved.

Snapshots are made using snapshot-create (if you have the snapshot configuration in an XML file) or snapshot-create-as, which lets you enter all parameters on the command line. By default, this makes a system snapshot – both the disk and the VM state. To make a disk-only snapshot, pass the ‘--disk-only‘ option.

Let’s make a VM state snapshot now. With the ‘save domainname filename‘ command, the state of the machine domainname is saved to a file named filename and the machine is then stopped. You can restore the machine state with the ‘restore filename‘ command:
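
(the state file name and location below are just an example)

    virsh # save sinclair /var/tmp/sinclair.state
    Domain sinclair saved to /var/tmp/sinclair.state

    virsh # restore /var/tmp/sinclair.state
    Domain restored from /var/tmp/sinclair.state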

There is also a ‘managedsave’ command. It works quite similarly to ‘save’, yet if a managedsave state is available, the ‘start’ command will restore the domain state instead of performing a typical cold boot. This is useful for “pausing” domains when the host is to be restarted. In most situations the host would be configured to autostart the domains on boot, so in such a case the domains will continue from the moment they were managedsave’d.
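
A minimal example, again with our sinclair domain:

    virsh # managedsave sinclair
    virsh # start sinclair          # resumes from the managed save instead of cold-booting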

Let’s now make a full system snapshot. It’s as easy as typing in the following command:
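
(the snapshot name and description are arbitrary)

    virsh # snapshot-create-as sinclair clean-install "fresh Debian installation"
    Domain snapshot clean-install created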

As you can see, the snapshot has been created and the domain is still up and running. Now, for the sake of testing ;), make some changes in the file system using the graphical console. After you’ve done so, you can see what happens when you revert the machine to the previously made snapshot:
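
Listing the snapshots first and then reverting (listing output trimmed):

    virsh # snapshot-list sinclair
     Name                 Creation Time             State
    ------------------------------------------------------------
     clean-install        ...                       running

    virsh # snapshot-revert sinclair clean-install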

See? Everything went back to the state saved by the snapshot – the domain is still running as if nothing ever happened :).

There is far more to the snapshot topic than the couple of examples I gave you here. If you need more information, please refer to the libvirt manual (and the virsh help).

Let’s now quickly skim through the other useful management options.

The edit command starts the text editor defined by the $EDITOR environment variable (defaulting to vi) and lets you edit the domain’s XML configuration. When you save and quit, the libvirt shell checks the file for errors and, if none are found, stores the new configuration. That doesn’t mean the changes take effect immediately, though – to apply them, the domain needs to be restarted (so that the new configuration is reloaded). The ‘dumpxml‘ command prints the current configuration to the screen.
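
For example:

    virsh # edit sinclair        # opens the domain XML in $EDITOR
    virsh # dumpxml sinclair     # prints the current configuration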

Another useful command is ‘screenshot‘. It does what it says – it saves a screenshot (an uncompressed portable bitmap) of the selected domain’s display. The ‘change-media‘ command lets you change the medium in the CD-ROM or floppy drive.
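
For example (hda is an assumption here – use whatever target device your domain’s CD-ROM drive has):

    virsh # screenshot sinclair /tmp/sinclair.ppm
    virsh # change-media sinclair hda /home/cristos/isos/debian-8.0.0-amd64-netinst.iso --insert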

So there we are. We now know how to deploy a virtual domain and how to manage it using command line tools. As with nearly all CLI tools, these can be scripted, so you can automate the deployment and management according to your needs.

Is there a GUI software for all that jazz?

Yes! There is a pretty good graphical application for setting up and managing virtual domains. It’s called virt-manager. You can do almost anything we’ve done here with it. Virt-manager has a SPICE client built in, so you can easily configure the graphical console behaviour – the size, fullscreen/windowed operation, input grabbing, etc. You can easily clone or migrate your domains with it as well. Here are some screenshots for you, clickers, to whet your appetite a bit ;).

[virt-manager screenshots]

However, I very strongly (and I can’t emphasize it enough) recommend that you learn the console tools first. You’ll get the big picture of how things work – how libvirt manages its resources, how domains are configured and so on. The graphical manager does the same things using the libvirt API, so if you play with the console way of doing virtualization first, you won’t get puzzled by the terminology or by seemingly unintended behaviour.

Wrap-up

The topic of virtualization with QEMU (with or without KVM) and libvirt is HUGE – I have barely scratched the surface here. However, with this basic knowledge you can get rid of inferior virtualization solutions and use pro-grade, open-source tools for your everyday computing or your professional work. Should you need more elaborate setups, as always, consult the manuals for KVM, libvirt and QEMU.

Thank you for visiting my blog, and, as always, stay tuned for more, hopefully interesting posts.
I really encourage you to comment on this article (and other ones as well). Feel free to share your thoughts, spread the free knowledge and keep using open-source!

Cheers!

Cristos.

11 thoughts on “Linux virtualization with QEMU/KVM and libvirt”

  • 22nd May 2015 at 20:35

    Great article. You just forgot to mention that you also need VT-d. 😉

    • 22nd May 2015 at 21:38

      Hi Mario,
      Yes, you are right. Well, almost right ;).

      VT-x and VT-d are really two different beasts.

      VT-x (or its AMD counterpart – AMD-V) is a set of CPU extensions for hardware-accelerated virtualization – and these are REQUIRED in order to use KVM.

      On the other hand, VT-d (AMD-Vi) is an Input/Output Memory Management Unit (IOMMU). It is used when you need to assign a hardware device to the guest. VT-d then translates I/O requests from/to the virtualized guest directly to/from the assigned physical device on the host machine. This is sometimes called PCI passthrough (you assign the real PCI IDs of the devices that you want to be passed directly to the guest). As you can see, the mode of operation is somewhat similar to the memory management unit inside our CPUs, where each process lives in a different address space and each memory access is translated by the MMU to a hardware memory address.

      That being said, you don’t need VT-d to use hardware-based virtualization. Moreover, you don’t need VT-d to use virtio drivers either – those use different, kernel-based mechanisms to speed up transfers.

      However, you do need VT-d if you want to pass hardware devices directly to the guest, e.g. network controllers, SCSI controllers or even graphics cards. But there is a catch: not only does your CPU need to support VT-d, your motherboard and firmware must support it as well. And, of course, your kernel has to support it too.

      I hope I cleared things up for you.

      Thanks for visiting my blog – stay well!
      Cheers!
      Cristos.

  • 22nd May 2015 at 22:12

    Great tutorial… I would be grateful if you could do the same for configuring virtio and getting two PCI-e devices working together – one in the host and the other in the guest machine. I don’t know how to do that… thanks…

    • 23rd May 2015 at 12:04

      Hello Federico,
      Let me check if I understood you right.
      You have a setup with two PCI-e devices (GPUs?) and you want to assign one to the host and the other to the guest, right?
      Then virtio is not your option. You have to use the IOMMU, and you need a VT-d-enabled CPU, motherboard and firmware.

      I haven’t set up anything with IOMMU so far, but I have some links that may be helpful for you:

      https://www.centos.org/forums/viewtopic.php?f=47&t=48115
      http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM

      Please check my reply to Brett’s comment and follow these links as well – they may be helpful too.

      Cheers!
      Cristos.

  • 22nd May 2015 at 22:30

    First, thanks for the useful article.
    I have a problem with virt-manager in Mandriva. I installed KVM on a Mandriva server, but when I connect to it with virt-manager on OpenMandriva, the keyboard does not work in the VM. I also set the keymap to “en”, but that did not solve the problem.

    • 23rd May 2015 at 11:50

      Hi Farhad,
      I haven’t had such a problem. You may want to check the virt-manager logs in ~/.cache/virt-manager/virt-manager.log, or start virt-manager from the terminal with the --debug switch.

      Cheers!
      Cristos

  • 23rd May 2015 at 01:22

    Hi Cristos, I’ve got an HP laptop with an Intel and Nvidia setup. I need my Windows 7 VM to access the Nvidia card so I can install the Nvidia drivers in the VM. I use unit 3d and it needs to see the Nvidia card. How do I go about setting up VGA passthrough?

    • 31st May 2015 at 10:02

      Hi! Thanks for the tip. Definitely worth checking out!

      Cheers!
      Cristos

