I have set up a VM with Debian Buster and ZFS. The machine initially booted from the first hard drive, but I added four 20 GB drives and transferred the system to ZFS for testing purposes.
It works, so I added some datasets to see how usage grows. But when I query the used/free space, the numbers don't change — it seems nothing happened. I also tested exceeding the quota, with the same result.
What am I doing wrong?
Thanks.
The disk layout
root@debzfs:~# fdisk -l | grep sd | sort
/dev/sda1 2048 40892415 40890368 19.5G Solaris /usr & Apple ZFS
/dev/sda9 40892416 41943006 1050591 513M BIOS boot
/dev/sdb1 2048 40892415 40890368 19.5G Solaris /usr & Apple ZFS
/dev/sdb9 40892416 41943006 1050591 513M BIOS boot
/dev/sdc1 2048 40892415 40890368 19.5G Solaris /usr & Apple ZFS
/dev/sdc9 40892416 41943006 1050591 513M BIOS boot
/dev/sdd1 2048 40892415 40890368 19.5G Solaris /usr & Apple ZFS
/dev/sdd9 40892416 41943006 1050591 513M BIOS boot
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdc: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 20 GiB, 21474836480 bytes, 41943040 sectors
The root pool and dataset (where the original system has been copied)
zpool create -d -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled -o ashift=12 -O compression=lz4 rpool raidz2 /dev/sd[a-d]1 -f
zfs create rpool/root
zfs set quota=10gb rpool/root
# The new datasets
zfs create rpool/smalldb
zfs set quota=5gb rpool/smalldb
zfs create rpool/greatdb
zfs set quota=20gb rpool/greatdb
Current pool status after creating the datasets
root@debzfs:~# zpool status
pool: rpool
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
sda1 ONLINE 0 0 0
sdb1 ONLINE 0 0 0
sdc1 ONLINE 0 0 0
sdd1 ONLINE 0 0 0
errors: No known data errors
root@debzfs:~# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 77.5G 3.04G 74.5G - - 3% 1.00x ONLINE -
root@debzfs:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 1.47G 34.9G 244K /rpool
rpool/greatdb 198K 20.0G 198K /rpool/greatdb
rpool/root 1.47G 8.53G 1.47G /
rpool/smalldb 198K 5.00G 198K /rpool/smalldb
Test commands and output (wrong, as far as I can tell). I expected the used and free space to change, but it seems nothing happened.
truncate -s 2G /rpool/smalldb/smalldb.log
truncate -s 8G /rpool/smalldb/limitdb.log # what? this is over the 5G quota, yet ls shows the file!
truncate -s 10G /rpool/greatdb/greatdb.log
root@debzfs:~# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 77.5G 3.03G 74.5G - - 3% 1.00x ONLINE -
root@debzfs:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 1.47G 34.9G 244K /rpool
rpool/greatdb 209K 20.0G 209K /rpool/greatdb
rpool/root 1.47G 8.53G 1.47G /
rpool/smalldb 209K 5.00G 209K /rpool/smalldb
root@debzfs:~# ls -lh /rpool/smalldb/ /rpool/greatdb/
/rpool/greatdb/:
total 512
-rw-r--r-- 1 root root 10G Nov 4 00:11 greatdb.log
/rpool/smalldb/:
total 1.0K
-rw-r--r-- 1 root root 8.0G Nov 4 00:14 limitdb.log
-rw-r--r-- 1 root root 2.0G Nov 4 00:09 smalldb.log
root@debzfs:~#
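For comparison, `truncate` behaves this way on any filesystem, not just ZFS: it only sets the file's length without allocating blocks, producing a sparse file. `ls -l` reports the declared length, while `du` reports allocated blocks. A minimal local sketch (the `/tmp` paths are just examples):

```shell
# truncate sets the file length but allocates no blocks,
# so the result is a sparse file consuming (almost) no space.
truncate -s 1G /tmp/sparse.img
ls -lh /tmp/sparse.img   # apparent size: 1.0G
du -h /tmp/sparse.img    # allocated size: ~0

# Writing real data does allocate blocks (urandom so compression
# can't collapse it on a compressed filesystem like ZFS with lz4):
dd if=/dev/urandom of=/tmp/real.img bs=1M count=16 status=none
du -h /tmp/real.img      # ~16M
rm -f /tmp/sparse.img /tmp/real.img
```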