I have the following problem. I run several stress tests on a Linux machine:
$ uname -a
Linux debian 3.14-2-686-pae #1 SMP Debian 3.14.15-2 (2014-08-09) i686 GNU/Linux
It's an Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz with 8 GB RAM and a 300 GB HDD.
These tests are not I/O intensive; I mostly perform double-precision arithmetic, timed like this:
start = rdtsc();
do_arithmetic();
stop = rdtsc();
diff = stop - start;
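For reference, rdtsc() here is a thin wrapper around the RDTSC instruction; simplified, it looks roughly like the sketch below (the exact implementation details are assumed):

#include <stdint.h>

/* Sketch of the rdtsc() helper used above: reads the time-stamp
 * counter. RDTSC places the low 32 bits in EAX and the high 32 bits
 * in EDX, so this works the same on i686 and x86-64. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}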
I repeat these tests many times, running my benchmarking application either on the physical machine or on a KVM-based VM:
qemu-system-i386 disk.img -m 2000 -device virtio-net-pci,netdev=net1,mac=52:54:00:12:34:03 -netdev type=tap,id=net1,ifname=tap0,script=no,downscript=no -cpu host,+vmx -enable-kvm -nographic
I collect statistics (i.e., the diffs) over many trials. On the physical machine (unloaded), the distribution of processing delays is most likely a very narrow lognormal.
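Concretely, the collection loop is roughly the following (simplified; TRIALS and the raw-dump output are placeholders, not the exact code):

#include <stdint.h>
#include <stdio.h>

#define TRIALS 100000            /* placeholder trial count */

uint64_t rdtsc(void);            /* as sketched above */
void do_arithmetic(void);        /* the double-arithmetic workload */

int main(void)
{
    static uint64_t diff[TRIALS];

    for (int i = 0; i < TRIALS; i++) {
        uint64_t start = rdtsc();
        do_arithmetic();
        uint64_t stop = rdtsc();
        diff[i] = stop - start;
    }

    /* dump raw samples; the distribution statistics are computed offline */
    for (int i = 0; i < TRIALS; i++)
        printf("%llu\n", (unsigned long long)diff[i]);

    return 0;
}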
When I repeat the experiment on the virtual machine (with both the physical and virtual machines otherwise unloaded), the lognormal distribution is still there (a little wider), but I also collect a few points with completion times much shorter (about half) than the absolute minimum gathered on the physical machine! (Note that the completion-time distribution on the physical machine is very narrow and lies close to the minimum value.) There are also some points with completion times much longer than the average completion time on the hardware machine.
I guess that my rdtsc benchmarking method is not very accurate in the VM environment. Can you please suggest a method to improve my benchmarking setup so that it provides reliable (comparable) statistics between the physical and the KVM-based virtual environments? At the very least, something that won't show the VM as 2x faster than the hardware PC in a small number of cases.
Thanks in advance for any suggestions or comments on this subject.
Best regards