The thin pool's data usage is very high compared to what the thin volumes actually use, but the space doesn't seem to be genuinely occupied.
Previously, the metadata area filled up and I extended the metadata LV. Since then I ran into an LVM "transaction id mismatch" error, which I resolved via vgcfgbackup -> editing the transaction id -> vgcfgrestore.
The unreclaimed thin pool space problem started after that vgcfgrestore. Deleting snapshots and running fstrim on the mounted thin volumes didn't solve it either.
Any ideas how to fix this?
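For reference, the transaction-id workaround mentioned above was roughly the following (a sketch; the backup file path is arbitrary, and the exact transaction_id value has to be taken from the kernel's error message):

```shell
# Dump the current VG metadata to an editable text file.
vgcfgbackup -f /tmp/vg0-backup.cfg vg0

# Manually edit the thin pool's transaction_id field in /tmp/vg0-backup.cfg
# so it matches the id the kernel expects, then restore the metadata.
# vgcfgrestore refuses to restore a VG containing thin volumes unless forced.
vgcfgrestore --force -f /tmp/vg0-backup.cfg vg0
```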
$ lvs -a vg0 -o +discards
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Discards
20221101.120002 vg0 Vwi-aotz-k 15.00t tpool0 tvol0 29.13 passdown
20221101.180001 vg0 Vwi-aotz-k 15.00t tpool0 tvol0 29.13 passdown
20221102.000001 vg0 Vwi-aotz-k 15.00t tpool0 tvol0 29.13 passdown
20221102.060001 vg0 Vwi-aotz-k 15.00t tpool0 tvol0 29.13 passdown
20221102.120001 vg0 Vwi-aotz-k 15.00t tpool0 tvol0 29.13 passdown
tpool0 vg0 twi-aotz-- 16.00t 90.86 0.59 passdown
[tpool0_tdata] vg0 Twi-ao---- 16.00t
[tpool0_tmeta] vg0 ewi-ao---- <15.01g
tvol0 vg0 Vwi-aotz-- 15.00t tpool0 29.13 passdown
[lvol0_pmspare] vg0 ewi------- <15.01g
$ dmsetup ls | grep vg0 | sort -k2 -V
vg0-tpool0_tmeta (253:4)
vg0-tpool0_tdata (253:5)
vg0-tpool0-tpool (253:6)
vg0-tpool0 (253:7)
vg0-tvol0 (253:8)
vg0-20221102.000001 (253:16)
vg0-20221102.060001 (253:17)
vg0-20221102.120001 (253:18)
vg0-20221101.120002 (253:19)
vg0-20221101.180001 (253:20)
$ grep . /sys/block/dm-{4..8}/queue/discard_max_bytes
/sys/block/dm-4/queue/discard_max_bytes:0
/sys/block/dm-5/queue/discard_max_bytes:0
/sys/block/dm-6/queue/discard_max_bytes:0
/sys/block/dm-7/queue/discard_max_bytes:0
/sys/block/dm-8/queue/discard_max_bytes:17179869184
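Since discard_max_bytes is zero for the pool devices, it may be worth checking whether the kernel actually has discard passdown enabled on the pool itself (a diagnostic sketch; the device name comes from the dmsetup listing above):

```shell
# The thin-pool status line ends with feature flags, e.g.
# discard_passdown vs no_discard_passdown, plus used data/metadata blocks.
dmsetup status vg0-tpool0-tpool

# The loaded table shows any optional arguments the pool was set up with,
# such as ignore_discard or no_discard_passdown.
dmsetup table vg0-tpool0-tpool
```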