
I have an Ubuntu installation that fails to close a LUKS device after unmounting it.

kernel: 5.3.0-42-generic
cryptsetup: 2.2.0

root@pc:~# cryptsetup --debug close sdbackup
# cryptsetup 2.2.0 processing "cryptsetup --debug close sdbackup"
# Running command close.
# Locking memory.
# Installing SIGINT/SIGTERM handler.
# Unblocking interruption on signal.
# Allocating crypt device context by device sdbackup.
# Initialising device-mapper backend library.
# dm version   [ opencount flush ]   [16384] (*1)
# dm versions   [ opencount flush ]   [16384] (*1)
# Detected dm-ioctl version 4.40.0.
# Detected dm-crypt version 1.19.0.
# Device-mapper backend running with UDEV support enabled.
# dm status sdbackup  [ opencount noflush ]   [16384] (*1)
# Releasing device-mapper backend.
# Allocating context for crypt device (none).
# Initialising device-mapper backend library.
Underlying device for crypt device sdbackup disappeared.
# Deactivating volume sdbackup.
# dm versions   [ opencount flush ]   [16384] (*1)
# dm status sdbackup  [ opencount noflush ]   [16384] (*1)
# dm versions   [ opencount flush ]   [16384] (*1)
# dm table sdbackup  [ opencount flush securedata ]   [16384] (*1)
Device sdbackup is still in use.
# Releasing crypt device (null) context.
# Releasing device-mapper backend.
# Unlocking memory.
Command failed with code -5 (device already exists or device is busy)
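
For what it's worth, the "busy" state can be confirmed via the device-mapper open count; anything above 0 means something still holds the mapping. A minimal check (sdbackup is the mapping name from the output above):

dmsetup info sdbackup
# The "Open count" field shows how many holders the mapping has;
# a value above 0 is what makes the close fail with "still in use".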

Googling around produced the following suggested solutions, but all of them fail:

Deleting the partition manually from the kernel map (not working, device busy)

kpartx -d /dev/mapper/sdbackup
partprobe
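
dmsetup also has a built-in retry that sometimes helps when the holder is transient (e.g. a udev scan); with a persistent holder it presumably fails the same way:

dmsetup remove --retry sdbackup
# Retries the removal for a few seconds while the device is busy;
# a persistent holder still makes it fail with "device is busy".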

dmsetup, lsof, kill

dmsetup ls
nvme0n1p3_crypt (253:0)
sdbackup        (253:2)
another_disk    (253:1)
sudo lsof | grep 253,2   # produces no results, so nothing to kill
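
fuser is another way to list holders of the mapped device; given the empty lsof output it would presumably come back empty here as well:

fuser -vm /dev/mapper/sdbackup
# -m lists processes accessing the device (or the filesystem on it),
# -v adds user, PID and command name. Like lsof, this reports open
# file handles, so a mount held in another namespace will not show up.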

umount -f

The mount output does not contain the partition anymore...
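
findmnt can double-check that the unmount really took at the global level; a quick sanity check:

findmnt /dev/mapper/sdbackup
# No output means the device is not mounted in the current mount
# namespace; a mount held in another namespace still won't appear here.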

How can I safely remove the encrypted device?

moestly
  • I had a similar problem. It turned out that I had started a process with its own mount namespace (using unshare --mount) while the LUKS partition was still mounted, so despite having unmounted it at the global level, it was still mounted in that namespace. Solution: exit that process. If you're running any software that uses namespaces, such as OS containers or security sandboxes, you might want to try exiting that software to see if it releases your LUKS device. – ʇsәɹoɈ May 28 '20 at 04:50
  • I have also seen a running FlatPak application keep the mount in use until it was closed, presumably also because of mount namespaces. – ʇsәɹoɈ May 29 '20 at 06:36
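
Following up on the first comment, one way to find which process (and therefore which mount namespace) still sees the mount is to scan every process's view of the mount table; a minimal sketch, assuming a standard /proc layout:

for pid in /proc/[0-9]*; do
    # Each /proc/PID/mounts reflects that process's mount namespace.
    if grep -q sdbackup "$pid/mounts" 2>/dev/null; then
        echo "${pid#/proc/} ($(cat "$pid/comm")) still sees sdbackup"
    fi
done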

1 Answer


I can't add a comment since I just created this account to reply to this post, so I'll post this as an answer. The credit should go to ʇsәɹoɈ: his first comment should really be marked as the answer rather than a comment. It worked like a charm for me.

I had the same issue as the OP and tried everything; there was not a single record of anything still using the mount. What had actually happened was that I still had a running Docker container with a bind mount towards the mount point (i.e. in its own mount namespace).

I shut the container down, and luksClose ran without any issues!
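
For anyone hitting the same thing, the offending container can usually be identified before stopping anything; a small sketch using standard Docker CLI calls (sdbackup being the device name from the question):

# Print each running container's name and mount sources, then filter.
docker ps -q | while read -r id; do
    docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' "$id"
done | grep sdbackup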

neotheg