
I've got one KVM and some LUNs (Compellent SAN) in a multipath storage pool. All filesystems are xfs.

# virsh vol-list --pool multipath
dm-3     /dev/mapper/maildata-store-2-repl
dm-4     /dev/mapper/maildata-store-1-back
dm-5     /dev/mapper/metadata-store-2-repl
dm-6     /dev/mapper/metadata-store-1-back
dm-7     /dev/mapper/images
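
For reference, a multipath pool like this can be defined with an XML file along these lines (a sketch based on libvirt's mpath pool type; the pool name matches the one above and /dev/mapper is the usual target path, the file name is arbitrary):

<pool type='mpath'>
  <name>multipath</name>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>

# virsh pool-define mpath-pool.xml
# virsh pool-start multipath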

One LUN is dedicated to the storage of the VMs (/var/lib/libvirt/images) and the others will be mounted directly in the VMs for future storage of mail and related metadata.

# df /dev/mapper/images1
Filesystem           1K-blocks     Used Available Use% Mounted on
/dev/mapper/images1  209611780 18752452 190859328   9% /var/lib/libvirt/images

fio is used to compare IOPS on random writes:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/tmp/10g.file --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
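
For convenience, the same job can also be expressed as an fio job file (a sketch equivalent to the command line above; the file name is arbitrary):

; randwrite.fio -- run with: fio randwrite.fio
[test]
ioengine=libaio
direct=1
gtod_reduce=1
randrepeat=1
filename=/tmp/10g.file
bs=4k
iodepth=64
size=4G
readwrite=randwrite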

So I got this result for the fio test while writing to the /tmp folder of a VM (whose disk is stored on /dev/mapper/images), which is quite good!

write: IOPS=66.1k, BW=258MiB/s

Now, I attach a LUN to this VM with this xml file:

<disk type='block' device='lun'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/maildata-store-1-back'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

And this command:

virsh attach-device VM_TEST --file lun.xml --persistent
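
To confirm the attachment took effect, something along these lines can be checked (a sketch; it assumes lsblk is available in the guest):

# on the host: the new <disk> element should appear in the domain XML
virsh dumpxml VM_TEST | grep -A 4 maildata-store-1-back

# inside VM_TEST: the LUN should show up as /dev/sda
lsblk /dev/sda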

Then, on VM_TEST:

# fdisk /dev/sda
# mkfs.xfs /dev/sda1
# mount /dev/sda1 /test
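
For reproducibility, the same steps can be scripted non-interactively (a sketch using parted; adjust the partition layout as needed):

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 0% 100%
mkfs.xfs /dev/sda1
mkdir -p /test && mount /dev/sda1 /test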

And rerun the fio test on the newly created partition:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/test/10g.file --bs=4k --iodepth=64 --size=4G --readwrite=randwrite

The results are significantly worse:

write: IOPS=17.6k, BW=68.7MiB/s
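
One way to narrow down where the drop comes from is to run the very same job on the host against the raw multipath device before it is handed to the guest (a sketch; note that this overwrites the LUN, so only do it while the LUN holds no data):

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=host-baseline --filename=/dev/mapper/maildata-store-1-back --bs=4k --iodepth=64 --size=4G --readwrite=randwrite

If this host-side run reaches numbers close to the first test, the loss sits in the attachment path (driver/cache settings) rather than in the multipath/SAN layer.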

I've played with different options in the XML file, like cache='none', bus='virtio', .., but I didn't manage to really improve the numbers.

For now I'm stuck. I don't really know where to look.

Thank you.

  • Note: in one case you did I/O against a file in a filesystem and in the other you did it against what was likely a block device atop another block device... – Anon Feb 20 '19 at 06:45
  • Hello, the first filesystem is also on a block device; the pool for VMs is on a LUN. The second case is also a filesystem on a block device. The main difference is all the libvirt stuff in between. I think [this answer](https://serverfault.com/questions/425607/kvm-guest-io-is-much-slower-than-host-io-is-that-normal) might be useful. Thank you – Is ma Live Feb 20 '19 at 08:24
  • Sure, but do you know what your filesystem does when it gets direct I/O? This in itself is a question which may have a complicated answer (there are some warnings on https://serverfault.com/a/864574/203726 ) and one you can sidestep by just checking the host block device you are passing to the KVM guest directly... – Anon Feb 20 '19 at 10:22

1 Answer


So, I managed to get the same IOPS on the host and in the guest with this tuning:

<driver name='qemu' type='raw' cache='directsync' io='native'/>
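
For context, the full LUN definition with that driver line looks roughly like this (a sketch reusing the device names from the question):

<disk type='block' device='lun'>
  <driver name='qemu' type='raw' cache='directsync' io='native'/>
  <source dev='/dev/mapper/maildata-store-1-back'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>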

I also tried attaching the block device as a lun and as a disk:

<disk type='block' device='lun'>
  <target dev='sda' bus='scsi'/>

And

<disk type='block' device='disk'>
  <target dev='sda' bus='virtio'/>

With roughly the same results.
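
If it helps, one way to apply a driver change like this to an already attached device (a sketch, reusing the lun.xml from the question) is to detach it and re-attach it with the updated XML:

virsh detach-device VM_TEST --file lun.xml --persistent
# edit lun.xml to add cache='directsync' io='native' to the <driver> line
virsh attach-device VM_TEST --file lun.xml --persistent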