
Until now I've enabled discard on my QEMU guests with the following in the libvirt domain XML:

..
<driver name='qemu' type='qcow2' discard='unmap' />
..

And it seems to work fine.
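
For context, the complete disk element looks roughly like this (the image path and target name below are placeholders, not my actual config):

<disk type='file' device='disk'>
  <!-- discard='unmap' passes the guest's TRIM/UNMAP requests down to the qcow2 image -->
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>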

Now that I'm about to convert my storage from virtio-scsi to virtio-blk, because the latter now supports discard, I ran into the option detect_zeroes=off|on|unmap (or QEMU's equivalent detect-zeroes).

Should I also use this option, and why? I assume that besides the areas being marked "available" by discard, they are also written with zeroes, but what value does that have, especially on SSD-backed storage?
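
If I read the docs right, turning it on would just mean adding the attribute to the existing driver line, something like this (untested on my side):

<driver name='qemu' type='qcow2' discard='unmap' detect_zeroes='unmap'/>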

With qcow2 images I see the point of writing zeroes to mark the empty space so it can be sparsified, but it seems the qcow2 image files are actually getting smaller (sparse) without this option, just by using discard='unmap'.

The libvirt documentation says:

The optional detect_zeroes attribute controls whether to detect zero write requests. The value can be "off", "on" or "unmap". First two values turn the detection off and on, respectively. The third value ("unmap") turns the detection on and additionally tries to discard such areas from the image based on the value of discard above (it will act as "on" if discard is set to "ignore"). NB enabling the detection is a compute intensive operation, but can save file space and/or time on slow media. Since 2.0.0

Which unfortunately didn't get me any closer to a decision on whether to use detect_zeroes or not :-/

Backing storage for my QEMU guests consists of both qcow2 images (on HDD and SSD) and LVM block devices on SSD.
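
The LVM-backed guests are defined roughly like this (the volume group and LV names are just examples):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source dev='/dev/vg0/guest-root'/>
  <target dev='vdb' bus='virtio'/>
</disk>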

MrCalvin

1 Answer


Modern operating systems are capable of sending TRIM/UNMAP commands to the virtual storage to free up the space, so detect_zeroes is not necessary for such OSes.

The only reason I can think of to use detect_zeroes is to gain discard support for an ancient operating system that doesn't support TRIM/UNMAP. In that case, with detect_zeroes='unmap' set, blocks of zeroes will be unmapped instead of actually being written to disk. In the guest OS you would run some utility that writes zeroes to all the free space on the disk, and KVM will convert those writes to TRIM/UNMAP. This can be CPU intensive, though. Also, I can't think of any good reason to have it on without unmap.
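
If you do have such a legacy guest, the combination would look roughly like this in the domain XML (a sketch; adjust the driver line to your image type):

<!-- zero writes from a guest that can't send TRIM are detected and turned into discards -->
<driver name='qemu' type='qcow2' discard='unmap' detect_zeroes='unmap'/>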


P.S. You said: "Now that I'm about to convert my storage from virtio-scsi to virtio-blk" ... Did you get these backward? Normally you convert from virtio-blk to virtio-scsi.

Michael Hampton
  • Thanks! Yes, I sure am moving from `virtio-scsi` to `virtio-blk`, as according to RHEL the latter is more performant. See this PDF: [Storage-Performance-Tuning-for-FAST-Virtual-Machines](https://events19.lfasiallc.com/wp-content/uploads/2017/11/Storage-Performance-Tuning-for-FAST-Virtual-Machines_Fam-Zheng.pdf). And it recently got support for discard. Haven't done any benchmarks, though. – MrCalvin Jun 24 '20 at 05:31
  • That's pretty out of date, and the performance difference was minimal even then. Today virtio-scsi is multithreaded and significantly outperforms virtio-blk on most realistic workloads. It was specifically designed to replace virtio-blk, which is now on its way out and will eventually be dropped. – Michael Hampton Jun 24 '20 at 11:39
  • The choice between `virtio-scsi` and `virtio-blk` really depends on your needs. According to [this QEMU article](https://www.qemu.org/2021/01/19/virtio-blk-scsi-configuration/) (Jan 2021), you should prefer `virtio-blk` for performance with fewer devices, while `virtio-scsi` should be preferred when scaling up to many devices. As described in [this SO thread](https://stackoverflow.com/a/45487683/4027379), while `virtio-scsi` is getting more development, its lower performance with fewer devices is due to the cost of its extra features. – spaceman spiff Jun 03 '22 at 14:21
  • This is a very good answer. Proxmox adds this detect-zeroes=unmap flag automatically and silently; I tried to get them to remove it but they wouldn't budge. The impact can be quite high: I added an NVMe SSD to a VM and observed that writing non-zeroes performed normally, but writing zeroes dropped to about 40% of the normal write speed. When I removed the detect-zeroes flag it was back to normal speed on zeroes. – Chris C May 28 '23 at 08:52