First, you will need console access to the FreeNAS box, the Proxmox HV box (with the VM running), and the VM. You will also need UI access to FreeNAS and potentially Proxmox.
Substitute your own pool paths and names throughout.
In an SSH session on the NAS:
Put a hold on the snapshot you want. This will ensure that it isn't accidentally deleted.
sudo zfs hold keep tank/tank-iscsi/*snapshot-name*
If you need to get a list of snapshots, this command can help:
sudo zfs list -t snapshot -o name | grep *vm-id*
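For example, with a hypothetical VM ID of 100 and a snapshot named vm-100-disk-0@manual-2024-01-01 (substitute your own names), the listing and hold might look like:
sudo zfs list -t snapshot -o name | grep 100
sudo zfs hold keep tank/tank-iscsi/vm-100-disk-0@manual-2024-01-01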
If you are using the original NAS as the source (Primary NAS):
sudo zfs clone tank/tank-iscsi/*snapshot-name* tank/tank-iscsi/*cloned-snapshot-name*
If you are using the replicated NAS as the source (Replicated NAS):
sudo zfs clone tank/replicated/tank-iscsi/*snapshot-name* tank/replicated/tank-iscsi/*cloned-snapshot-name*
NB: The zvol path/name needs to be less than 67 characters and must not be readonly. This might require modification of the parent dataset.
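As a rough sketch using the hypothetical names from above and the Replicated NAS as the source, checking and clearing readonly on the parent dataset before cloning might look like:
sudo zfs get readonly tank/replicated/tank-iscsi
sudo zfs set readonly=off tank/replicated/tank-iscsi
sudo zfs clone tank/replicated/tank-iscsi/vm-100-disk-0@manual-2024-01-01 tank/replicated/tank-iscsi/vm-100-disk-0-recover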
In the FreeNAS UI:
- Navigate to Sharing -> iSCSI -> Extents -> Add Extent
- Add a new extent:
  - Name:
    - Primary NAS: tank/tank-iscsi/*cloned-snapshot-name*
    - Replicated NAS: tank/replicated/tank-iscsi/*cloned-snapshot-name*
  - Device: Select *cloned-snapshot-name*
- Navigate to Sharing -> iSCSI -> Associated Targets -> Add Target
- Select the correct target (we use the same target as the existing one for the tank)
- Enter a LUN ID that won't impact future additions (i.e. a high number)
- Select the extent just created
NB: This can be done on either NAS due to replication. If using the replicated target, ensure that ReadOnly is false and that replication is off. You may need to set up a new iSCSI storage entry in Proxmox to provide the path for the replicated iSCSI target (see the sketch below).
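If a separate storage entry is needed, a ZFS-over-iSCSI definition in /etc/pve/storage.cfg might look roughly like the sketch below. Every value here (storage name, pool path, portal IP, target IQN, iscsiprovider) is a placeholder; in particular, the iscsiprovider depends on your FreeNAS version and how the target was set up.
zfs: tank-zfs-iscsi-replicated
    pool tank/replicated/tank-iscsi
    portal 192.168.1.11
    target iqn.2005-10.org.freenas.ctl:replicated
    iscsiprovider istgt
    blocksize 4k
    sparse 1
    content images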
In an SSH session on the Proxmox HV (https://johnkeen.tech/proxmox-physical-disk-to-vm-only-2-commands/):
Check that the zvol is visible to Proxmox:
- Primary NAS:
pvesm list tank-zfs-iscsi
- Replicated NAS:
pvesm list tank-zfs-iscsi-replicated
Add the disk to the VM:
- Primary NAS:
qm set *vm-id* -virtio1 tank-zfs-iscsi:*cloned-snapshot-name*
- Replicated NAS:
qm set *vm-id* -virtio1 tank-zfs-iscsi-replicated:*cloned-snapshot-name*
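For example, attaching the hypothetical clone from earlier to VM 100 (the storage and volume names are placeholders):
qm set 100 -virtio1 tank-zfs-iscsi:vm-100-disk-0-recover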
In the VM (https://sontsysadmin.blogspot.com/2017/09/mounting-lvm-with-same-pv-lv-vg-names.html):
- Check that the disk is present (most likely it will be vdb or vdc; see the lsblk check below)
- Change the name of the LVM volume group (it will clash with the VM's existing volume group of the same name)
- Mount the volume
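To confirm the new disk is visible and find its LVM partition, lsblk is a quick check (the device names shown are examples; yours may differ):
lsblk
fdisk -l /dev/vdb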
# rename the cloned volume group so it doesn't clash with the running VM's volume group
vgimportclone --basevgname recover /dev/vdX3
lvs
# activate the renamed volume group, then mount its logical volume
vgchange -a y recover
ls /dev/recover/
mkdir /mnt/recover
mount /dev/recover/ubuntu-lv /mnt/recover/
ls /mnt/recover/
If you get the following error:
$ vgchange -ay recover
device-mapper: create ioctl on recover-ubuntu--lv LVM-3jIHEjL7LvdGGd4BP08N failed: Device or resource busy
Try these commands to remove the stale device-mapper entry:
dmsetup ls
recover-ubuntu--lv (253:1)
ubuntu--vg-ubuntu--lv (253:0)
dmsetup remove recover-ubuntu--lv
Then retry: vgchange -ay recover
Cleaning Up
On the VM:
umount /mnt/recover
lvchange -an recover
vgchange -an recover
In the Proxmox UI:
- Detach the disk
If the snapshot is no longer required, it can then be deleted (NB: this operation can't be undone; once it is gone, it is gone).
In an SSH session on the NAS:
- Release the hold on the snapshot:
sudo zfs release keep tank/tank-iscsi/*snapshot-name*
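If the clone and snapshot are both no longer needed, they can be destroyed once the hold is released (hypothetical names again; the clone must be destroyed before the snapshot it was created from, and the matching extent/associated target should already have been removed in the FreeNAS UI):
sudo zfs destroy tank/tank-iscsi/vm-100-disk-0-recover
sudo zfs destroy tank/tank-iscsi/vm-100-disk-0@manual-2024-01-01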