OA - Ubuntu
A blog about Ubuntu, mobile GIS and archaeology

About KVM

Jan 04, 2009 by Yann Hamon

At Oxford Archaeology, we are running many, many different systems: websites, database applications, license servers, development servers, mail, monitoring, backups, remote access servers, proxies... Most of these do not have huge performance requirements, but they are quite complex to set up.

We decided, a couple of years ago, that virtualization would let us keep these systems separate while limiting the number of physical servers needed. At the time, the quick and easy choice was VMware Server, while we studied a long-term, stable solution. As always with temporary solutions, the VMware servers have been running far longer than initially planned; in fact, we still have two legacy VMware Server hosts. Among the many problems we have had with VMware Server:

  • Time shifting. We tried many, many different approaches - pretty much everything we could find on the net - and none really worked in the end.
  • SMP for VMs not working properly: first, there is a limit of 2 CPUs that you can assign to a VM - and as soon as you assign more than one CPU to a VM, it starts to use an awful lot of host CPU doing nothing.
  • Impossible to use a physical disk bigger than 2TB (which ultimately was a problem for us)
  • Complex installation... see http://moxiefoxtrot.com/2009/01/02/installing-ubuntu-810-in-fusion/
  • Very poor performance, even with VMware Tools...

At the time, we planned to move to VMware ESX. It is a tested and solid solution that has been around for a while. We finally decided not to use it, though, because of its requirement for a Windows-based license server and its expensive licensing and support scheme. We then started to consider Sun's xVM, Xen and KVM. We ultimately chose KVM for the following reasons:

  • The host running KVM is an unmodified Linux OS - for both Xen and ESX you need to run a specially patched operating system (which makes it quite hard to stay up to date with security updates).
  • Being included in Linux, you get KVM support when you buy normal Ubuntu server support. As we planned to get Ubuntu support anyway, we got support for our virtualization platform at no additional cost.
  • Performance: KVM performs very well, as it is assisted by the hardware virtualization extensions in recent CPUs.
  • VMbuilder: Ubuntu has developed a tool (ubuntu-vm-builder, now called vmbuilder) that can build an appliance in roughly a minute. This is a huge improvement over the way we used to create VMs. (VMbuilder can now also build images for Xen and VMware.) An example invocation is shown after this list.
  • Keep It Simple, Stupid: KVM uses everything that Linux already provides. Every VM is a simple process; it uses the Linux process scheduler, you can renice the processes, assign them to a specific CPU, and so on (see the example commands after this list). This results in a much smaller codebase than Xen's, which I assume will be easier and more cost-effective to maintain in the long term, making the project more trustworthy.
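
If it is of any use, a typical vmbuilder invocation looks roughly like this (a sketch only - the suite, sizes and hostname are just example values):

sudo vmbuilder kvm ubuntu --suite hardy --arch amd64 \
    --mem 512 --rootsize 8192 --hostname testvm \
    --libvirt qemu:///system

And because every guest really is just a process on the host, the usual Linux tools apply to it (the PID is a placeholder here):

pgrep -lf kvm            # list the kvm guest processes
renice 5 -p <pid>        # lower a busy guest's priority
taskset -pc 0,1 <pid>    # pin a guest to CPUs 0 and 1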

And most importantly: the companies involved. Even if KVM is lacking several features at the moment, posts on the development mailing list show patches coming from companies such as Red Hat (which bought Qumranet, the company behind KVM, last year), AMD, Intel, IBM, HP, Novell, Bull... Who are you going to trust: a project managed by a single company (Citrix, VMware), or a project supported by so many major actors?

We started testing KVM in April 2008 and began deploying it in June. We purchased supported servers for this task: Sun Fire X4150s (nice little pieces of hardware). So far we are running up to 20 guests per host. We are also very happy that KVM lets us run unmodified Windows guests with good performance. There are even paravirtualised network drivers for Windows if you need serious network performance.

We haven't had any major issues with KVM since deploying it; most of our problems have been solved quickly and in a friendly way on the #ubuntu-virt IRC channel. We did run into some networking issues, which were solved by using the e1000 NIC model instead of virtio or the default one on Ubuntu VMs.
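
In case it helps anyone, switching the NIC model with libvirt is just a matter of setting it in the guest's domain XML (a rough sketch - the bridge name and MAC address are placeholders):

<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:12:34:56'/>
  <model type='e1000'/>  <!-- or 'virtio' for the paravirtualised driver -->
</interface>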

So, a small trick now for those of you who have read this far. Most of you who have already deployed KVM know that the display is exported via VNC, tunnelled over SSH. Imagine that your KVM host is at work and not reachable from the outside, and you want to access the display of one of your VMs directly. All you need is one machine running SSH that is reachable from the outside and that can connect to your KVM host.

Edit your local ~/.ssh/config file and add this:

# Reach the internal KVM host by bouncing through the externally reachable SSHHost
Host kvmhost
    # nc forwards the connection from SSHHost to the KVM server on the internal network
    ProxyCommand ssh -l login SSHHost nc InternalIPOfKVMServer %p
    User login
    Compression yes

Save, exit; now you can run:

virt-viewer -c qemu+ssh://kvmhost/system NameOfTheVM
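
The same ~/.ssh/config entry works for plain virsh too, so you can also manage the host remotely, for example:

virsh -c qemu+ssh://kvmhost/system list --all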

I am not sure I was entirely clear... Feel free to ask questions about this, or about KVM in general - I would be happy to answer them, either in the comments or in further posts.

Yann



Comments:

Great post, it's useful for me. I have just tried KVM; I am more at home with Xen, but right now I want to stick with KVM, so I will wait for your tutorial about KVM.

--budiw

Posted by budiwijaya on January 04, 2009 at 06:11 PM GMT+00:00 #

Hello budiw,

I've largely been contributing to the documentation on the Ubuntu wiki: https://help.ubuntu.com/community/KVM .

Is there any area that is still dark that I could maybe enlighten?

Posted by Yann on January 04, 2009 at 06:18 PM GMT+00:00 #

Any particular reason for going for 'real' virtualisation over something like openvz?

Posted by Robin on January 04, 2009 at 07:22 PM GMT+00:00 #

Robin > Yes, the need to run unmodified Windows guests, mostly, but also the need to run other distros (different versions of Ubuntu, for example). OpenVZ also runs a modified kernel.

Posted by Yann on January 04, 2009 at 07:30 PM GMT+00:00 #

Thanks for the write up on KVM. Do you have any reference on virtio vs. e1000 drivers? I have found out the hard way that network performance with a default Ubuntu 8.10 host and vmbuilder-created guest really sucks (I got ping-times of 2-3 seconds during an nfs read of a large file). I have now moved to virtio instead based on a tip in KVM/Networking. Are you saying that e1000 is even better?

Posted by Mattias Holmlund on January 04, 2009 at 09:54 PM GMT+00:00 #

Hello Mattias, no, virtio should be the fastest option; but I noticed that the network sometimes dropped when using virtio (quite annoying... I have a support ticket open about it). I would be interested to know whether more people are experiencing it though :) Using the default one, I had a small number of eth0 errors, which eventually became a problem. e1000 seems fine so far.

A ping time of 2 to 3 seconds sounds *highly* surprising though - I doubt it is linked to network performance. Maybe check the load during the file copy? I've never experienced this myself...

PS: I am using Ubuntu 8.04 LTS server, 64-bit.

Posted by Yann on January 04, 2009 at 10:24 PM GMT+00:00 #

Great post! Very well written!

Xen is dying a quick death. More and more, Linux distributors are realizing that Xen provides absolutely nothing that KVM doesn't. Further, as you mentioned in your post, using Xen means using a different operating system, other than Linux. This doesn't appeal to Linux vendors such as Red Hat and Canonical. Using the existing Linux kernel as a hypervisor just makes sense. Then we can use one operating system, as we should have been doing all along, instead of many.

KVM might not be "production quality" yet, but it's getting there. I'm really excited about its future, and I'm glad Canonical made the decision to support KVM over Xen. Now if only we can get Canonical to support SELinux over that crap AppArmor. :)

Cheers!

Posted by Aaron Toponce on January 04, 2009 at 11:09 PM GMT+00:00 #

Hi again Yann,

Thanks for your response. Yes, I agree that 2-3 seconds of ping time sounds very strange. What I did was this:

1. Start a ping from a physical host to the VM.
2. In the vm, do "md5sum largefile" where largefile is a six megabyte file on an nfs-mount.

The md5sum takes 100 seconds to finish. During these 100 seconds, ping times increase from 0.3ms to between 1700 and 2700ms. No ping-packets are lost however. The CPU load on the VM is zero during the md5sum.

When I switched to virtio (everything else unchanged), the same md5sum takes 0.6 seconds and ping time is not affected, not even for larger files.

Host is x86_64, guest is i686.

Good to know I can stay with virtio. I have not seen any problems with it so far.

Posted by Mattias Holmlund on January 05, 2009 at 10:40 AM GMT+00:00 #

Hi Yann

An awesome post - I agree with you, KVM is "the option" and has the brighter future. But while looking for alternatives to ESX and Xen I found Proxmox: it is "something" similar to ESX, it is based on Debian, and the virtual machines are managed through a web page. You can test it if you want.

Another interesting post is one made by Gunnar Wolf
http://gwolf.org/node/1845

Greetings

Posted by Victor Muchica on January 05, 2009 at 07:28 PM GMT+00:00 #

One thing I have been looking for is a tutorial on live migration of KVM guests. Or even how other people shut down, migrate and start up guests.

At the moment I have two KVM servers with the guests stored on an NFS share and have found the performance quite outstanding.

Posted by Karl Bowden on January 06, 2009 at 02:33 AM GMT+00:00 #

Mattias > From my tests, virtio is approx. 8-10 times faster than the default. In fact, virtio is paravirtualised networking (the guest knows the network card isn't a real one), whereas the default is a fully virtualised network card (the guest thinks it is using a real one). I am a bit surprised by the extent of the difference, but my best guess is that it is due to the way NFS works. Try downloading 4-5GB with your VM - I would be interested to know whether the network drops. I need to do more tests...

Karl > KVM does support live migration. But there are two ways to use KVM: one using the virsh interface, one using just the plain command line. With Hardy and the virsh version in Hardy, there is no way to use virsh to do live migration. You can do live migration from the command line though. If you are willing to turn the machine off, then it should be pretty much straightforward - do you need help with this?
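
Roughly, the plain-command-line approach looks like this (just a sketch - the host name and port are placeholders, and the disk images must be on shared storage):

# on the destination host, start the guest with the exact same kvm command line, plus:
kvm [usual options] -incoming tcp:0:4444
# then, in the source guest's monitor:
migrate -d tcp:destinationhost:4444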

Victor > Thanks for the link, I will have a look at it.

Posted by Yann on January 06, 2009 at 12:06 PM GMT+00:00 #

Yann> To be honest it was about a year ago I last tried, but it seems that with the stock versions of KVM in Hardy and Intrepid it is still not supported. I have been trying:
virsh -c qemu+ssh://virt01/system migrate testhost qemu+ssh://virt02/system
And get in response:
libvir: error : this function is not supported by the hypervisor: virDomainMigrate

I am using a pair of Intrepid amd64 hosts. Do you keep the stock version of KVM, and if not do you use a repository for KVM and libvirt updates?

Posted by Karl Bowden on January 07, 2009 at 03:29 AM GMT+00:00 #

Hello Karl, as I said, it is not supported by virsh. Maybe you could pop into #ubuntu-virt on freenode for a longer discussion about this? I've never done live migration myself (as I am using virsh), but I have heard that some people were successful.

Posted by Yann on January 07, 2009 at 10:37 AM GMT+00:00 #
