
I have a problem mounting an encrypted ZFS dataset after boot. The pool is properly imported and visible in zpool status output. I then manually load the keys with zfs load-key -a, still with no issues. Then comes the mounting part. The dataset has the canmount=on and mountpoint=/mnt/ssd properties set. The directory /mnt/ssd is empty and is not a Proxmox storage. The command zfs mount pool-ssd fails silently: the dataset is not mounted, which is confirmed both by zfs mount and by the mounted property. What I have tried:

  • removing /mnt/ssd directory
  • exporting/importing pool
  • changing the mountpoint to another directory - this works, but only until the next reboot; then the situation repeats and I have to change the mountpoint once again (see the sketch just after this list)
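
To be concrete, the workaround from the last bullet looks roughly like this (the directory name /mnt/ssd-alt is just an example, not the path I actually use):

> zfs set mountpoint=/mnt/ssd-alt pool-ssd    # any directory other than the current one
> zfs mount pool-ssd                          # now succeeds
> zfs get mounted pool-ssd                    # reports yes, but only until the next reboot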

I can't make head or tail of it. There is no error or any other clue. I tried to import and mount this dataset on a different server - one running plain Debian without Proxmox - and it works flawlessly. However, after installing Proxmox the same problem appears on both machines.

It looks like Proxmox is doing something after the pool is imported and the original mountpoint becomes broken. Changing the mountpoint to a different directory works, but after a reboot that other directory is also broken. Changing it back to the first one works again - so this "corruption" does not persist across reboots.

How can I debug this?
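
For reference, this is the sequence I run after each boot and the state I have been checking; the systemd unit names are the standard ZFS-on-Linux ones, so treat that part as my assumption about where Proxmox would log anything relevant:

> zpool status                                                     # pool is imported and healthy
> zfs load-key -a                                                  # keys load without errors
> zfs mount pool-ssd                                               # returns silently, nothing mounted
> zfs mount                                                        # dataset is not in the list
> zfs get canmount,mountpoint,mounted,keystatus pool-ssd           # on, /mnt/ssd, no, available
> journalctl -b -u zfs-mount.service -u zfs-import-cache.service   # no obvious errors
> dmesg | grep -i zfs                                              # nothing suspicious here either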

> zfs --version
zfs-0.8.4-2~bpo10+1
zfs-kmod-0.8.4-pve1

> pveversion
pve-manager/6.2-15/48bd51b6 (running kernel: 5.4.65-1-pve)
tlaguz

1 Answer


OK, so the problem was a version mismatch between zfs and zfs-kmod.

I installed Proxmox on top of Debian, which was set up as this tutorial describes: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html. I ended up with an /etc/apt/preferences.d/90_zfs file, which pinned zfs so that it was installed from the buster-backports repository.
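
The telltale sign, in hindsight, was the ~bpo10 suffix on the zfs userland version next to the -pve suffix on the kernel module. A quick way to see what the pin had pulled in (the grep pattern is just illustrative):

> dpkg -l | grep -E 'zfs|spl'            # userland packages showed ~bpo10 versions from backports
> cat /etc/apt/preferences.d/90_zfs      # the pin file left over from the Root-on-ZFS guide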

After deleting this file and running apt upgrade && apt autoremove, the version mismatch was resolved. After a reboot everything works fine!
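
Roughly, the fix amounted to the following (assuming the Proxmox package repositories are already configured, as they are on a standard Proxmox install):

> zfs --version                        # before: zfs and zfs-kmod versions disagree
> cat /sys/module/zfs/version          # version of the kernel module that is actually loaded
> rm /etc/apt/preferences.d/90_zfs     # drop the pin that forced zfs from buster-backports
> apt update
> apt upgrade && apt autoremove        # apt now picks the Proxmox-packaged zfs userland
> reboot
> zfs --version                        # after: both versions match and the dataset mounts again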

tlaguz