We have a Heartbeat/DRBD/Pacemaker/KVM/Qemu/libvirt cluster consisting of two nodes. Each node runs Ubuntu 12.04 64-bit with the following packages/versions:
- Kernel 3.2.0-32-generic #51-Ubuntu SMP
- DRBD 8.3.11
- qemu-kvm 1.0+noroms-0ubuntu14.3
- libvirt 0.9.13
- pacemaker 1.1.7
- heartbeat 3.0.5
The virtual guests are running Ubuntu 10.04 64-bit and Ubuntu 12.04 64-bit. We use a libvirt feature to pass the capabilities of the host CPUs through to the virtual guests in order to achieve the best possible CPU performance.
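Concretely, the relevant part of a guest's domain XML looks roughly like the following sketch; the exact CPU mode (host-model vs. host-passthrough) and the vCPU count shown here are illustrative, not copied from our configuration:

```
<!-- Sketch: expose host CPU capabilities to the guest via libvirt's CPU mode.
     Mode and vCPU count are illustrative. -->
<vcpu>4</vcpu>
<cpu mode='host-model'>
  <model fallback='allow'/>
</cpu>
```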
Now here is a common setup on this cluster:
- VM "monitoring" has 4 vCPUs
- VM "monitoring" uses ide as disk interface (we are currently switchting to VirtIO for obvious reasons)
We recently ran some simple tests. I know they are not rigorous benchmarks and do not meet high standards, but they already show a clear trend:
Node A is running VM "bla"; node B is running VM "monitoring".
When we rsync a file from VM "bla" to VM "monitoring", we achieve only 12 MB/s. When we perform a simple dd if=/dev/zero of=/tmp/blubb inside the VM "monitoring", we achieve around 30 MB/s.
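In concrete terms, the tests were essentially the following; the file name, hostname, block size/count and the conv=fdatasync flag are additions here to make the runs reproducible, not exact copies of the original commands:

```
# Network path: copy a large test file from VM "bla" to VM "monitoring"
# (file name and hostname are placeholders)
rsync -av /tmp/testfile monitoring:/tmp/

# Disk path: sequential write inside VM "monitoring";
# conv=fdatasync forces the data to be flushed before dd reports a rate
dd if=/dev/zero of=/tmp/blubb bs=1M count=1024 conv=fdatasync
```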
Then we added 4 more vCPUs to the VM "monitoring" and restarted it. The VM "monitoring" now has 8 vCPUs. We re-ran the tests with the following results: when we rsync a file from VM "bla" to VM "monitoring", we now achieve 36 MB/s. When we perform a simple dd if=/dev/zero of=/tmp/blubb inside the VM "monitoring", we now achieve around 61 MB/s.
For me, this effect is quite surprising. How come adding more virtual CPUs to this guest apparently also means more disk performance inside the VM?
I don't have an explanation for this and would really appreciate your input. I want to understand what causes this performance increase, since I can reproduce this behaviour 100% of the time.