I am testing a new ZFS configuration using zstd compression for log storage and other highly compressible files. The array is a 5-drive raidz1 inside a virtual machine on my PC; the VM has direct access to the whole HDDs.
ZFS 2.0.2 is running in a Hyper-V Ubuntu VM, and I am copying files from the Windows host via Samba. Everything runs locally on the same PC, so network transfer speed should not be a bottleneck.
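For reference, the pool and dataset are set up roughly like this (the pool and dataset names are placeholders, and /dev/sdb through /dev/sdf stand in for the passed-through disks):

```
# 5-drive raidz1 on the disks passed through to the VM (device names are placeholders)
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Dataset for the compressible data, using zstd (available since OpenZFS 2.0)
zfs create -o compression=zstd tank/logs

# Confirm compression is active and how well it compresses
zfs get compression,compressratio tank/logs
```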
When I transfer a big, compressible file, the transfer itself is extremely bursty. You can see it here:
My guess is that writes are collected in a TXG, compressed, and then committed to disk. But there are stretches of downtime where the CPU is essentially idle and the HDDs are barely utilized either. Low HDD utilization is expected, since the CPU is the bottleneck when compressing, but the idle CPU is not.
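This is roughly how I have been watching the behaviour while the copy runs (the pool name "tank" is a placeholder; the txgs kstat is what OpenZFS exposes on Linux):

```
# Per-TXG stats: dirty data per TXG plus open/quiesce/wait/sync times
watch -n 1 cat /proc/spl/kstat/zfs/tank/txgs

# Per-vdev throughput, refreshed every second
zpool iostat -v tank 1
```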
Can I somehow tune ZFS so that it accepts new data while a TXG is being compressed? Or is this the intended, optimal behaviour? I feel like speeds could be better if ZFS constantly accepted and compressed data.
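These are the module parameters I have found so far that look related (TXG timing and the dirty-data write throttle), but I am not sure which of them, if any, actually address the stall between TXGs:

```
# Current values of the TXG / write-throttle tunables
cat /sys/module/zfs/parameters/zfs_txg_timeout
cat /sys/module/zfs/parameters/zfs_dirty_data_max
cat /sys/module/zfs/parameters/zfs_dirty_data_max_percent

# Example: allow more dirty data to accumulate before forcing a sync
# (the 4 GiB value is just a guess for testing, not a recommendation)
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_dirty_data_max
```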