Virtualization Notes

We’ve been using Xen to manage our fleet of virtual servers for just about 2 years at this point.

I did a few quick tests for each of the options we were considering. I limited my tests to freely available options, as licensing the entire datacenter would be cost-prohibitive. I'm running Ubuntu 8.04 x64 across the farm, and we use virtualization primarily for load balancing and failover. For example, I have scripts which detect when a machine has stopped responding and automatically issue a stop and start to its parent. If it doesn't come back up, the script starts the most recent backup image on the failover servers.
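The watchdog logic is simple enough to sketch in bash. This is a hypothetical reconstruction of the idea, not our production script: the names `web01`, `vmhost1`, and `failover1` are invented for illustration, and the Xen-style `xm` commands are assumed.

```shell
#!/bin/bash
# Hypothetical sketch of the watchdog described above. VM, PARENT, and
# BACKUP_HOST are illustrative names, not our real hosts.
VM=${VM:-web01}
PARENT=${PARENT:-vmhost1}
BACKUP_HOST=${BACKUP_HOST:-failover1}

vm_alive() {
    # Returns 0 if the guest answers ping, non-zero otherwise.
    ping -c 3 -W 2 "$1" > /dev/null 2>&1
}

restart_on_parent() {
    # Bounce the guest from its parent host (Xen-style commands assumed).
    ssh "root@$PARENT" "xm destroy $VM; xm create /etc/xen/$VM.cfg"
}

start_backup_image() {
    # Last resort: boot the most recent backup image on the failover box.
    ssh "root@$BACKUP_HOST" "xm create /etc/xen/$VM-backup.cfg"
}

# Run the check only when invoked with "run", so the functions can be
# sourced by a cron wrapper without side effects.
if [ "${1:-}" = "run" ] && ! vm_alive "$VM"; then
    restart_on_parent
    sleep 60
    vm_alive "$VM" || start_backup_image
fi
```

A cron entry invoking this with `run` every few minutes approximates the behavior described above.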

  • VMware ESXi – ESXi is a free but closed-source virtualization solution from VMware.
  • OpenVZ – OpenVZ is the open-source version of Parallels Virtuozzo Containers. The main difference between OpenVZ and Virtuozzo is that Virtuozzo adds nice tools for automatic management and failover, and claims 'better algorithms for sharing memory'.
  • KVM – KVM seems to be the favorite solution of the kernel developers; it has been merged into the mainline Linux kernel and has the strongest support from Red Hat and Ubuntu.
  • Xen – Xen has been a good workhorse, but it has become increasingly difficult to support. We found Xen unstable on kernels other than the official XenSource 2.6.18 kernel, which is now badly out of date.

Subjective impressions:

First of all, I'd like to acknowledge that KVM, Xen, OpenVZ, and VMware ESXi come from very different places technologically. Each solution offers a different level of independence from the underlying OS. ESXi, for instance, requires a fresh install on your hardware, and provides a highly customized version of Linux as the control mechanism for its hypervisor.
OpenVZ, on the other hand, installs onto a system you're already using, and works to separate processes from one another. While inaccurate, the phrase "glorified chroot" is not entirely unwarranted.

One important note: while KVM, VMware, and Xen support multiple operating systems, all containers under OpenVZ share the host's kernel, so every guest must effectively run the same kernel revision of the same OS.

For our needs, running the same version of Ubuntu across multiple farm nodes, the level of paravirtualization matters only as a curiosity. What matters is how the system performs and how easy it is to manage.

Having used Xen for several years, I've found it very easy to script. To execute a stop, simply ssh into the machine and run (or allow your script to run) xm destroy machinename; xm create starts it back up.
Each machine has its own ini-style config file in /etc/xen.
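For reference, a stripped-down guest file in /etc/xen looks roughly like the following. The name, disk path, and bridge values here are invented for illustration:

```
# /etc/xen/web01.cfg -- hypothetical example
name       = "web01"
memory     = 1024
vcpus      = 2
disk       = ['phy:/dev/vg0/web01,xvda,w']
vif        = ['bridge=xenbr0']
bootloader = "/usr/bin/pygrub"
```

With a file like that in place, xm create web01.cfg boots the guest and xm destroy web01 stops it.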

KVM can be used similarly, although the preferred method is through libvirt.

The libvirt/virsh method is interesting. It stores all the VM information in XML files, which are then imported and managed by libvirt. In theory, this makes the configuration easy to parse by machine. Subjectively, I prefer the ini-style format, since more of my scripts run in bash than in Python or another language where parsing XML is trivial.
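To be fair, even bash can pick fields out of the XML. Here's a crude sketch; the domain file below is a made-up, heavily trimmed example (real ones live under /etc/libvirt/qemu/ and are far longer), and sed is standing in for a proper XML parser:

```shell
# A made-up, minimal libvirt domain file for illustration only.
cat > /tmp/web01.xml <<'EOF'
<domain type='kvm'>
  <name>web01</name>
  <memory>1048576</memory>
  <vcpu>2</vcpu>
</domain>
EOF

# Crude field extraction with sed. Good enough for a quick bash script,
# though a real XML parser is safer if the layout ever changes.
name=$(sed -n 's:.*<name>\(.*\)</name>.*:\1:p' /tmp/web01.xml)
memory_kb=$(sed -n 's:.*<memory>\(.*\)</memory>.*:\1:p' /tmp/web01.xml)
echo "$name is allocated ${memory_kb}kB"
```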

OpenVZ's management is similar to KVM/libvirt's: several tools manipulate one repository of information about the virtual machines. To show the running containers, use vzlist. To start or stop one, use vzctl start 123 or vzctl stop 123, where 123 is the container's ID.

Rather than taking settings from an XML file, you change settings in OpenVZ directly on the command line. For instance,
vzctl set 123 --diskinodes 6203072 --save

This tells OpenVZ to set the number of disk inodes for a particular container, and to keep the setting going forward. For my particular needs, this solution is perfect: it makes it easy to script each vmhost and set options from bash scripts.
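For example, pushing the same setting across the farm takes only a few lines. This is a hypothetical helper; the host names and container ID 123 are invented for illustration:

```shell
# Hypothetical helper: apply the same OpenVZ setting on every vmhost.
# The host names and container ID are made-up examples.
HOSTS="vmhost1 vmhost2 vmhost3"
CTID=123

set_everywhere() {
    # Arguments are passed straight through to vzctl set,
    # e.g.: set_everywhere --diskinodes 6203072
    for host in $HOSTS; do
        ssh "root@$host" "vzctl set $CTID $* --save"
    done
}
```

Calling set_everywhere --diskinodes 6203072 then applies and saves the setting on each host in turn.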

VMware ESXi is installed directly onto your hardware, and doesn't officially support running commands on the host machine directly.

The preferred way to manage ESXi is to run VMware's Remote CLI tools on a separate Linux or Windows machine, passing everything on the command line: --url --username root --password ***** --host vmware1 --vmxpath datastore1/TEST1/TEST1.vmx

I found this very cumbersome, and the system made it difficult to write scripts to automatically back up and copy files, start/stop systems, query system health, and the like. While most of it is doable after some work, it felt like I was working against the system rather than with it. This makes sense: VMware makes its income selling tools to handle exactly the sort of tasks I want to script myself.

Speed Tests:
I did a few very basic speed tests. I wouldn't call them benchmarks, as they were far more ad hoc than that, but they gave me sufficient information for my purposes. We also ran a series of tests with production traffic, and those results roughly matched the ones below. I repeated each test several times and averaged the results.
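A repeat-and-average harness in bash is only a few lines. This is a rough sketch in the spirit of the tests below, not the exact script used:

```shell
# Run a command N times and print the mean wall-clock time in seconds.
# Illustrative harness only; assumes GNU date (for %N) and awk.
avg_time() {
    local runs=$1; shift
    local start end total=0
    for _ in $(seq 1 "$runs"); do
        start=$(date +%s.%N)
        "$@" > /dev/null 2>&1
        end=$(date +%s.%N)
        total=$(awk -v t="$total" -v s="$start" -v e="$end" \
                    'BEGIN { printf "%.4f", t + (e - s) }')
    done
    awk -v t="$total" -v n="$runs" 'BEGIN { printf "%.3f\n", t / n }'
}
```

Something like avg_time 5 md5sum file would then print the mean of five runs.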

Quick CPU test
ls /large/directory > file; time for i in `seq 1 5000`; do md5sum file > /dev/null; done
Xen- 0m18.108s
ESXi- 0m15.014s
Unmodified- 0m14.153s
KVM- 0m28.474s
OpenVZ- 0m8.316s

The CPU test currently comes back faster on OpenVZ than it did on the physical hardware I used as a control. I'm not sure why that would be; presumably something is being cached, but I repeated the test several times on each platform and got similar results.

Pure disk-
time dd if=/dev/zero of=/tmp/file count=1000000
Xen- 0m5.521s
KVM- 0m4.485s
Unmodified- 0m5.256s
ESXi- 0m11.831s
OpenVZ- 0m4.076s

I saw a lot of variation on this test; I've averaged the results above, but ranges of 5-9 seconds were common across most of the options. ESXi fared the worst.

Pure network
time scp root@OtherServer:/tmp/ubuntu-9.10-desktop-i386.iso /dev/null
ESXi- 0m24.113s
Xen- 0m13.583s
KVM- 0m13.320s

Network + disk
scp root@OtherServer:/tmp/ubuntu-9.10-desktop-i386.iso /tmp
Xen- 0m18.458s
ESXi- 0m29.399s
Unmodified- 0m18.724s
OpenVZ- 0m14.557s

Of course, most VM hosts run more than one image, so single-VM tests are only marginally useful. I fired up multiple virtual machines on each platform and re-ran the scp-to-local-filesystem test.
scp root@OtherServer:/tmp/ubuntu-9.10-desktop-i386.iso /tmp

ESXi, 2 vms- 0m24.109s
Xen, 2 vms- 0m22.581s
KVM, 2 vms- 2+ minutes
KVM, 2 vms, virtio- 0m29.792s

ESXi, 3 vms- 0m26.054s
Xen, 3 vms- 0m31.929s
KVM, 3 vms- 5+ minutes
KVM, 3 vms, virtio- 0m54.392s
OpenVZ, 3 vms- 0m20.949s

Having the virtio drivers makes an enormous difference to KVM's performance. You need a guest kernel that supports them; Ubuntu 8.04 works, but 6.06 doesn't make the cut.
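A quick way to check whether a guest kernel has virtio is to look at the kernel config or the loaded modules. This sketch assumes Ubuntu's /boot/config-* layout; other distros may keep the config elsewhere:

```shell
# Quick check for virtio support in the running guest kernel.
# The config path matches Ubuntu's layout; other distros may differ.
has_virtio() {
    grep -q 'CONFIG_VIRTIO=[ym]' "/boot/config-$(uname -r)" 2>/dev/null \
        || lsmod 2>/dev/null | grep -q '^virtio'
}

if has_virtio; then
    echo "virtio available"
else
    echo "no virtio support; KVM guests will fall back to emulated devices"
fi
```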


OpenVZ showed the best performance, and its management interface fits well into our existing workflow. For our needs it's a great fit: the system provides the process isolation and failover I'm looking for, while being light enough that the overhead is minimal.

If we were supporting a wider variety of configurations, I'd move to KVM with virtio; while its performance isn't quite as strong as Xen's, it's close enough, and KVM is far better supported.