
I'm doing some testing in which I utilise iSCSI. Strange things are happening and I'm looking for an explanation. If anyone could suggest something, I'd be really grateful. So here we go:

There are two VMs running Debian 9. One is an iSCSI target (server), the other an iSCSI initiator (client). The server shares a disk (e.g. /dev/sdb) or a partition on that disk (e.g. /dev/sdb1) as an iSCSI LUN. The client connects to the server and properly recognizes the LUN as a new device (e.g. /dev/sdc). Then LVM is configured on /dev/sdc. Nothing out of the ordinary: a PV on /dev/sdc, a VG on the PV, an LV in the VG, some data on the LV. It all works the way it should. Then I shut down both machines and start them up again. All important services are set to autostart, both machines see each other, and the client creates a session (connects to the iSCSI server). But now the magic happens:
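
For reference, the LVM stack on the client was created roughly along these lines (a transcript reconstructed from the outputs below; the VG/LV names match them):

```
# pvcreate /dev/sdc
# vgcreate vg2 /dev/sdc
# lvcreate -L 4M -n lv_001 vg2
# lvcreate -L 2G -n lv_002 vg2
```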

Despite the client being connected to the server, it no longer sees the LUN - so there is no /dev/sdc device and no PV / VG / LV on the client. The server still displays the target (LUN) as being shared, but the LUN size is shown as "0" and the backing store path as "none". The PV / VG / LV are now displayed by the iSCSI server instead.

My first idea was that the LVM metadata gets copied to the iSCSI server, but there are no lvm2-related packages on the server. Since these machines will be used for cluster tests (once I straighten out the iSCSI issues), the LVM locking_type is already set to 3 (clustered locking with clvmd) on the iSCSI client - not sure if that makes a difference here. I also checked whether sharing the /dev/sdb1 partition makes any difference compared to sharing the whole /dev/sdb device - it doesn't. So currently I'm out of ideas. Could anyone assist? Thanks in advance!

before restart, server:

# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0    8G  0 disk
├─sda1   8:1    0    7G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0 1022M  0 part [SWAP]
sdb      8:16   0    8G  0 disk
└─sdb1   8:17   0    8G  0 part
sr0     11:0    1 1024M  0 rom

# tgtadm --mode target --op show
Target 1: iqn.20181018:test
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 8589 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
        vgs-user-incoming
        vgs-user-outcoming (outgoing)
    ACL information:
        192.168.106.171

before restart, client:

# lvs
  WARNING: Not using lvmetad because locking_type is 3 (clustered).
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  LV              VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  virtualMachine1 vg1 -wi-a----- 2,00g
  lv_001          vg2 -wi-a----- 4,00m
  lv_002          vg2 -wi-a----- 2,00g

# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0    8G  0 disk
├─sda1                    8:1    0    7G  0 part /
├─sda2                    8:2    0    1K  0 part
└─sda5                    8:5    0 1022M  0 part [SWAP]
sdb                       8:16   0    4G  0 disk
└─sdb1                    8:17   0    4G  0 part
  └─vg1-virtualMachine1 254:0    0    2G  0 lvm
sdc                       8:32   0    8G  0 disk
├─vg2-lv_001            254:1    0    4M  0 lvm
└─vg2-lv_002            254:2    0    2G  0 lvm
sr0                      11:0    1 1024M  0 rom

after restart, server:

# lsblk
NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda              8:0    0    8G  0 disk
├─sda1           8:1    0    7G  0 part /
├─sda2           8:2    0    1K  0 part
└─sda5           8:5    0 1022M  0 part [SWAP]
sdb              8:16   0    8G  0 disk
└─sdb1           8:17   0    8G  0 part
  ├─vg2-lv_001 254:0    0    4M  0 lvm
  └─vg2-lv_002 254:1    0    2G  0 lvm
sr0             11:0    1 1024M  0 rom

# tgtadm --mode target --op show
Target 1: iqn.20181018:test
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
    Account information:
        vgs-user-incoming
        vgs-user-outcoming (outgoing)
    ACL information:
        192.168.106.171

after restart, client:

# lvs
  WARNING: Not using lvmetad because locking_type is 3 (clustered).
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  LV              VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  virtualMachine1 vg1 -wi-a----- 2,00g

# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                       8:0    0    8G  0 disk
├─sda1                    8:1    0    7G  0 part /
├─sda2                    8:2    0    1K  0 part
└─sda5                    8:5    0 1022M  0 part [SWAP]
sdb                       8:16   0    4G  0 disk
└─sdb1                    8:17   0    4G  0 part
  └─vg1-virtualMachine1 254:0    0    2G  0 lvm
sr0                      11:0    1 1024M  0 rom

1 Answer


The server is detecting the LVM volumes on /dev/sdb1 and activating them at boot. Later, when tgtd tries to share /dev/sdb1, it can't, because the device is already in use by device-mapper.
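
One way to confirm this (illustrative commands, run on the target host; not verified here) is to check whether device-mapper is holding the disk, and release it manually:

```
# lsblk /dev/sdb      <- shows vg2-lv_001 / vg2-lv_002 stacked on sdb1
# dmsetup ls          <- lists the active device-mapper mappings
# vgchange -an vg2    <- deactivates the VG, releasing /dev/sdb1
```

After vgchange -an, restarting tgtd should let it attach the backing store again - though the filter below is the proper permanent fix, since the VG will be activated again on the next boot.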

You can prevent this with a filter in lvm.conf on the server. If you don't need LVM at all on the server, you can simply tell it to reject (skip scanning) all block devices:

filter = [ "r/.*/" ]
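
If the server does need LVM for its own disks, accept the local disk and reject everything else instead (a sketch; the device name is taken from the server's lsblk output above):

```
# /etc/lvm/lvm.conf on the iSCSI target:
# accept the local system disk, reject everything else (including /dev/sdb)
filter = [ "a|^/dev/sda|", "r|.*|" ]
```

Note that if lvmetad is enabled on the server, global_filter may also need to be set, since lvmetad does not apply the filter setting.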

Source: https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_filters.html

Mike Andrews