0

Hey guys, I am standing up a little FAMP stack on an Azure VM: 4 Xeon cores, 16 GB RAM. The second SSD uses ZFS and the boot drive is UFS2 (the default). When I run dd if=/dev/zero of=testfile bs=1024 count=1024000 in my home directory, which lives on the boot drive, I see a full 1 GB file. When I perform the same action on my ZFS dataset, it appears to only write 512B. Not sure what is going on here. I can copy larger files from the UFS2 partitions to the ZFS datasets with no problem.
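For reference, this is roughly what I am running on each filesystem (the ZFS mountpoint below is just an example; mine is different):

    # in my home directory on the UFS2 boot drive
    cd ~
    dd if=/dev/zero of=testfile bs=1024 count=1024000
    ls -lh testfile            # shows a full 1.0G file

    # same thing on the ZFS dataset (mountpoint is an example)
    cd /tank/data
    dd if=/dev/zero of=testfile bs=1024 count=1024000
    du -h testfile             # this is where I only see 512B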

Any suggestions would be great, as this is a concern for me before I release this into production.

Thanks

Zork
  • So you're saying that ZFS is unable to store files larger than one 512-byte block. DEFINITELY WORTH REPORTING, lol. – drookie Jan 30 '22 at 20:08
  • No, if you read my post you will see I can copy files from the UFS2 drive to the ZFS SSD with no issue. It is just when I use dd that it doesn't work :) – Zork Jan 30 '22 at 20:10

2 Answers

6

When you write an all-zero file to a ZFS dataset with compression enabled, it is collapsed into a completely sparse file with minimal space consumption - 512B, as you saw. When the file is read back, it is "re-hydrated" with all the originally written zeroes.
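For example, on a dataset with compression enabled, something like this shows the difference between the apparent file size and the space actually allocated (the dataset name is an example):

    zfs get compression tank/data            # e.g. lz4
    dd if=/dev/zero of=/tank/data/zeros bs=1024 count=1024000
    ls -lh /tank/data/zeros                  # apparent size: ~1.0G
    du -h  /tank/data/zeros                  # allocated space: ~512B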

As a side note, you can do something similar even with classical filesystems such as EXT4 or XFS: try issuing truncate -s 1G <filename> and you will end up with a 1 GB file using only 512B (or 4K) of real space.
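A minimal illustration, assuming a filesystem with sparse file support (the filename is arbitrary):

    truncate -s 1G sparsefile
    ls -lh sparsefile        # apparent size: 1.0G
    du -h  sparsefile        # allocated: next to nothing (a metadata block at most)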

Rather than disabling compression, try copying something from /dev/urandom and you will see the expected space usage.
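For instance (the path is again an example), incompressible random data should allocate the full amount:

    dd if=/dev/urandom of=/tank/data/random bs=1024 count=1024000
    du -h /tank/data/random                  # should be close to 1.0G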

shodanshok
  • *When writing an all-zero file on a ZFS dataset with compression enabled* And the performance is downright **amazing**. ;-) – Andrew Henle Jan 31 '22 at 14:29
-1

I turned off lz4 compression and the dd works just fine. Apparently this has something to do with dd if=/dev/zero. I don't think this will be a concern going forward, so I will most likely turn compression back on.
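For the record, this is roughly how I toggled it (the dataset name is an example):

    zfs set compression=off tank/data    # after this, dd of zeros allocates the full 1G
    zfs set compression=lz4 tank/data    # turning it back on; only affects newly written data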

Zork