8

Frequently I notice that after a

$ sudo lvcreate vg -L 10G

immediately followed by a

$ sudo lvremove vg/<created volume>

I get the error message

Can't remove open logical volume "..."

while a

$ sudo lvs

shows me for that volume

  lvol2         vg   -wi-a-  10,00g
So there is a - after the a in the attribute flags, where there would be an o if the volume were really open.

After some time, deletion works.

Why is that the case? How can I make it work immediately?

EDIT: The following did not lead to anything useful:

$ sudo rm /dev/mapper/vg-lvol24 
$ sudo lvremove /dev/vg/lvol24
  Can't remove open logical volume "lvol24"
$ sudo lvs vg/lvol24
  LV     VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lvol24 vg   -wi-a- 10,00g                                      
$ sudo lvremove /dev/vg/lvol24
  Can't remove open logical volume "lvol24"
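For reference, one way to cross-check what the kernel itself considers open is device-mapper's open count. This is just a sketch, with vg-lvol24 being the mapped name used above:

$ sudo dmsetup info vg-lvol24 | grep -i 'open count'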
glglgl

7 Answers

6

So there is yet another possibility besides NFS, a stale bash process, and udev misbehaviour: partitions opened on the block device.

kpartx -d /dev/vg1/lv1

Then you can verify that # open drops to 0 in lvdisplay's output.
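For example, using the placeholder names from the command above, the check might look like this:

kpartx -d /dev/vg1/lv1
lvdisplay /dev/vg1/lv1 | grep -i open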

PAStheLoD
5

It seems that there is a problem with the interaction between LVM and udev.

On an lvremove, udev generates change events for every available block device. Their processing seems to disturb the removal, and the removal fails.

The solution is to first deactivate the LV(s) to be removed with lvchange -an <given LV>. In this case, only a handful of "remove" events are created, which result from the associated dm device being removed.

If I then lvremove the now-deactivated LV, there are still a lot of udev change events, but they no longer affect the LV to be removed (because it no longer exists in device-mapper), so the removal works without problems.
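Applied to the volume from the question's edit, the sequence would look roughly like this:

$ sudo lvchange -an vg/lvol24
$ sudo lvremove vg/lvol24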

glglgl
2

Remove all mappings to that LV from /dev/mapper/ by deleting the symlinks and then you will be able to remove it.
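With the volume from the question, that would be roughly the following (note that the question's edit suggests this alone may not always be enough):

$ sudo rm /dev/mapper/vg-lvol24
$ sudo lvremove /dev/vg/lvol24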

freiheit
Stone
1

Some useful commands to find out if something is still using a disk:

cat /proc/mounts
dmsetup ls --tree
lsof <device>
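For instance, with the volume from the question (the device path is an assumption about its location):

grep lvol24 /proc/mounts
dmsetup ls --tree
lsof /dev/vg/lvol24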
user1338062
  • Generally right, but "an `lvcreate` immediately followed by an `lvremove`" implies that no commands have been executed in between: no mkfs, no mount, etc. – glglgl Apr 18 '13 at 21:19
1

I ran into a similar issue with an OpenStack installation. lsof showed nothing for me, but dmsetup ls --tree did show a dependency/target. lvchange -an <given LV> didn't work for me either, and neither did deleting the symlinks to the /dev/dm-* devices.

In my case, I shut down the OpenStack services and was then able to lvremove the recalcitrant volumes.

This is an experimental setup, and I think I may have caused the problem initially by a forced reboot I did to get around some other problems.
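The sequence was roughly the following; the service name here is only illustrative (it depends on the distribution and OpenStack release) and the volume name is a placeholder:

$ sudo service openstack-cinder-volume stop   # illustrative service name
$ sudo lvremove vg/<stuck volume>
$ sudo service openstack-cinder-volume start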

Graham Klyne
0

I also received this error. Luckily, I was in an initial build-out phase and had no data to lose. I should note that the logical volume had been created within the last half hour.

To allow removal of the logical volume I performed the following steps (see the command sketch after this list):

  1. (In fdisk) Deleted the partition table entry I had created for this logical volume.
  2. (In fdisk) Performed a write, after confirming no partition table entries remained.
  3. partprobe
  4. multipath -F (flushes unused multipath devices)
  5. service multipathd stop
  6. service multipathd start
  7. Rebooted the server.
  8. lvscan once the server came back up
  9. lvremove
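A rough sketch of steps 3 to 9 as shell commands; the volume group and LV names are placeholders:

$ sudo partprobe
$ sudo multipath -F
$ sudo service multipathd stop
$ sudo service multipathd start
$ sudo reboot
# after the server comes back up:
$ sudo lvscan
$ sudo lvremove /dev/<vg>/<lv>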
0

I ran into this recently. I completely forgot that the device was a LUKS volume with an open mapping, and that the underlying LVM volume was therefore still in use by the LUKS map.
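A sketch of checking for and closing such a mapping before retrying the removal; the mapping and volume names are placeholders:

$ sudo cryptsetup status <mapping name>
$ sudo cryptsetup luksClose <mapping name>
$ sudo lvremove <vg>/<underlying LV>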

petermolnar