
We have a Proxmox HV setup connected to two FreeNAS boxes. We are using the FreeNAS ZFS over iSCSI interface to present zvols as volumes to the Proxmox VMs.

We have enabled snapshotting on the FreeNAS side and have the NAS boxes replicating to each other.

What we are now trying to do is mount an existing snapshot to the Proxmox VM to do a restore.


1 Answer


Firstly, you will need console access to the FreeNAS box, the Proxmox HV box (with the VM running) and the VM itself. You will also need UI access to FreeNAS and potentially to Proxmox.

You will need to substitute your own pool paths and names.
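The commands below assume a layout along these lines; all of the names are placeholders for your own:

    # Example layout assumed in the commands below:
    #   pool:               tank
    #   zvol parent:        tank/tank-iscsi
    #   replicated parent:  tank/replicated/tank-iscsi
    #   placeholders:       *snapshot-name*, *cloned-snapshot-name*, *vm-id*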

In an SSH session on the NAS:

  1. Put a hold on the snapshot you want. This ensures that it isn't accidentally deleted.

    sudo zfs hold keep tank/tank-iscsi/*snapshot-name*

  2. If you need to get a list of snapshots, this command can help:

    sudo zfs list -t snapshot -o name | grep *vm-id*

  3. If you are using the original NAS as the source (Primary NAS):

    sudo zfs clone tank/tank-iscsi/*snapshot-name* tank/tank-iscsi/*cloned-snapshot-name*

  4. If you are using the replicated NAS as the source (Replicated NAS):

    sudo zfs clone tank/replicated/tank-iscsi/*snapshot-name* tank/replicated/tank-iscsi/*cloned-snapshot-name*

NB: The zvol path/name needs to be less than 67 characters, and the zvol must not be read-only. Changing this may require modifying the parent dataset.
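Before exporting the clone, a quick sanity check helps; these commands use the placeholder dataset names from above:

    # Confirm the hold is in place on the source snapshot
    zfs holds tank/tank-iscsi/*snapshot-name*
    # Check whether the clone inherited a read-only flag
    zfs get readonly tank/tank-iscsi/*cloned-snapshot-name*
    # Clear it if needed
    sudo zfs set readonly=off tank/tank-iscsi/*cloned-snapshot-name*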

In the FreeNAS UI:

  1. Navigate to Sharing->iSCSI->Extents->Add Extent
  2. Add a new extent:
    • Name: a name for the new extent
    • Device: select the cloned zvol
      • Primary NAS: tank/tank-iscsi/*cloned-snapshot-name*
      • Replicated NAS: tank/replicated/tank-iscsi/*cloned-snapshot-name*
  3. Navigate to Sharing->iSCSI->Associated Targets->Add Target
    • Select the correct target (we use the same target as the existing one for the tank)
    • Enter a LUN ID that won't collide with future additions (i.e. a high number)
    • Select the extent just created

NB: This can be done on either NAS thanks to replication. If you use the replicated target, ensure that Read-only is disabled and that replication of the clone is off. You may also need to set up a new ZFS-over-iSCSI storage entry in Proxmox so the paths resolve for the replicated iSCSI target.
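If you do need that extra storage entry, here is a minimal sketch of what it might look like in /etc/pve/storage.cfg, assuming the community freenas-proxmox provider plugin; the storage name, portal address and IQN below are placeholders to replace with your own:

    zfs: tank-zfs-iscsi-replicated
        pool tank/replicated/tank-iscsi
        portal 192.0.2.11
        target iqn.2005-10.org.freenas.ctl:replicated
        iscsiprovider freenas
        blocksize 4k
        sparse 1
        content images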

In an SSH session on the Proxmox HV (https://johnkeen.tech/proxmox-physical-disk-to-vm-only-2-commands/):

  1. Check that the zvol is visible to Proxmox:

    • Primary NAS: pvesm list tank-zfs-iscsi
    • Replicated NAS: pvesm list tank-zfs-iscsi-replicated
  2. Add the disk to the VM:

    • Primary NAS: qm set *vm-id* -virtio1 tank-zfs-iscsi:*cloned-snapshot-name*
    • Replicated NAS: qm set *vm-id* -virtio1 tank-zfs-iscsi-replicated:*cloned-snapshot-name*
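To confirm the disk actually attached, you can check the VM config; virtio1 here matches the slot used in the qm set commands above:

    qm config *vm-id* | grep virtio1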

In the VM (https://sontsysadmin.blogspot.com/2017/09/mounting-lvm-with-same-pv-lv-vg-names.html):

  1. Check that the disk is present (most likely it will be vdb or vdc)
  2. Change the name of the LVM volume group
  3. Mount the volume
# Import the cloned PV under a new VG name to avoid the duplicate-name clash
vgimportclone --basevgname recover /dev/vdX3
# Confirm the logical volumes are visible
lvs
# Activate the renamed volume group and mount the filesystem
vgchange -a y recover
ls /dev/recover/
mkdir /mnt/recover
mount /dev/recover/ubuntu-lv /mnt/recover/
ls /mnt/recover/

If you get the following error:

 $ vgchange -ay recover
 device-mapper: create ioctl on recover-ubuntu--lv LVM-3jIHEjL7LvdGGd4BP08N failed: Device or resource busy

Try these commands to remove the stale device-mapper entry that is blocking the clone:

dmsetup ls
    recover-ubuntu--lv      (253:1)
    ubuntu--vg-ubuntu--lv   (253:0)
dmsetup remove recover-ubuntu--lv

Then retry vgchange -ay recover.

Cleaning Up

On VM:

# Unmount the recovered filesystem
umount /mnt/recover
# Deactivate the logical volume, then the volume group
lvchange -an recover/ubuntu-lv
vgchange -an recover

On Proxmox UI:

  • Detach the disk.
  • If the snapshot is no longer required, it can then be deleted (NB: this operation can't be undone; once it is gone, it is gone).
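If you prefer the CLI on the Proxmox host, a rough equivalent of the detach, assuming the disk was attached as virtio1 in the earlier step:

    qm set *vm-id* --delete virtio1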

On NAS SSH:

  • Release the hold on the snapshot:

sudo zfs release keep tank/tank-iscsi/*snapshot-name*
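Once the extent has been deleted in the FreeNAS UI, you will likely also want to destroy the clone itself; a short sketch using the placeholder names from above (ZFS will refuse to destroy the origin snapshot while a clone of it exists):

    # Remove the cloned zvol (only after its extent is gone)
    sudo zfs destroy tank/tank-iscsi/*cloned-snapshot-name*
    # Confirm no holds remain on the snapshot
    zfs holds tank/tank-iscsi/*snapshot-name*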

  • An important note: a cloned FreeNAS snapshot (and Proxmox Snapshot) doesn’t replicate properly. Hence the hold is important if it needs to be redone on the replicated NAS. – tl8 Aug 27 '19 at 03:42