For more than a week I have been trying to find the reason for the following IO performance degradation between the Proxmox host and my Windows Server 2019 VMs.
I have to ask for your help, guys, because I've run out of ideas.
Environment data:
- Single Proxmox host, no cluster, PVE 6.1-8 with ZFS
- A few WS19 VMs, all showing this issue; very low load, SOHO usage
- ZFS sync=disabled, volblocksize for the VM disks = 4k (checked as shown below)
- The VMs have the latest VirtIO drivers (0.1.173)
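For reference, this is roughly how I verify the zvol settings on the host (the dataset name is just an example for one of the VM disks):

# check block size, sync and compression on the VM's zvol (dataset name is a placeholder)
zfs get volblocksize,sync,compression rpool/data/vm-100-disk-0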
I ran the IO test both on the host and inside the VM with the following fio command:
fio --filename=test --sync=1 --rw=$TYPE --bs=$BLOCKSIZE --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=1G --runtime=30
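$TYPE and $BLOCKSIZE came from a small wrapper loop, roughly like this (sketch; the values match the results below, and the same fio parameters were used with the Windows fio build inside the guest):

for TYPE in randread randwrite read write; do
  for BLOCKSIZE in 4k 64k; do
    # identical fio invocation on the host and inside the guest
    fio --filename=test --sync=1 --rw=$TYPE --bs=$BLOCKSIZE --numjobs=1 \
        --iodepth=4 --group_reporting --name=test --filesize=1G --runtime=30
  done
done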
Results (Host vs VM):
- 4K Random Read: 573 vs 62.5 MiB/s
- 4K Random Write: 131 vs 14.1 MiB/s
- 4K Sequential Read: 793 vs 56.2 MiB/s
- 4K Sequential Write: 240 vs 3.42 MiB/s
- 64K Random Read: 1508 vs 831 MiB/s
- 64K Random Write: 596 vs 62.5 MiB/s
- 64K Sequential Read: 1631 vs 547 MiB/s
- 64K Sequential Write: 698 vs 43.8 MiB/s
What I have tried so far:
- different volblocksize values for the ZFS zvols
- different ZFS sync settings (left it disabled, since the host sits in a datacenter)
- virtio-blk vs. VirtIO SCSI single (not much difference)
- writeback cache (the results got even worse)
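For completeness, these are roughly the commands I used to switch between the settings above (VM ID, storage and disk names are placeholders):

# VirtIO SCSI single controller, disk attached as scsi0 (vs. virtio0 for virtio-blk)
qm set 101 --scsihw virtio-scsi-single
qm set 101 --scsi0 local-zfs:vm-101-disk-0,cache=none

# writeback cache test (this made the numbers even worse)
qm set 101 --scsi0 local-zfs:vm-101-disk-0,cache=writeback

# change the volblocksize used for newly created disks on the ZFS storage
pvesm set local-zfs --blocksize 4k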
Any suggestions on what I am missing?