1

I'm using an iSCSI pool as the storage backend for a couple of virtual machines and I was wondering how other people use targets and LUNs in this use case.

I started out with one target called iqn.2016-06.iscsihost:kvmguests that has one LUN per VM. However, that results in some not-so-ideal names for the storage target associated with a VM (see this question), so I was wondering if I should switch to one target per VM (with potentially a couple of LUNs per target, e.g. for separate OS and data disks). That would have the side effect of very neat names that can't be mixed up so easily on the KVM side of things (e.g. iqn.2016-06.iscsihost:webserver01, iqn.2016-06.iscsihost:database07, etc.). I'm not sure what implications this would have, so any pointers are greatly appreciated.

So the question is: What's best practice here? One target with one (or multiple) LUNs per VM, or one target per VM?

Update: Thinking about it some more, one has to add every iSCSI target as a storage pool to every KVM host. That is very inconvenient, since one would have to change every KVM host's configuration every time a new VM is added... or am I missing something? How is this done in the real world?
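For illustration, here is roughly what one such pool definition looks like (using my example names; the portal host name is just a placeholder). Something like this would have to be defined with virsh pool-define on every single KVM host for every single VM:

<pool type='iscsi'>
  <name>webserver01</name>
  <source>
    <host name='iscsihost.example.com'/>
    <device path='iqn.2016-06.iscsihost:webserver01'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>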

Clayton Louden

2 Answers


I can't speak for KVM, but we're using iSCSI to connect our VMware vSphere cluster with 4 ESXi hosts to our Dell Compellent storage system, and we simply create datastores for approx. 4-5 VMs each, currently around 2 TB per LUN. You don't want too many VMs sitting on a single iSCSI connection, but you also don't want the massive administration overhead that comes with multiple LUNs per VM, especially if your environment grows beyond 4-5 VMs. The "best" setup depends on your environment, the capabilities of your storage backend (multiple controllers, load balancing, etc.) and your virtualization solution.

Dirk Trilsbeek

The Libvirt project initially designed iSCSI storage to exist in pools on the KVM host. As you observe, it does not make very much sense to do that with iSCSI, especially if you want to move the virtual machine around. I believe that is why the QEMU project implemented its own "built-in iSCSI" driver with the libiscsi library. That way, the virtual machine itself can connect to the iSCSI target for its storage. If the QEMU process connects to the target on its own, then it does not depend upon the host and can be live-migrated to other hosts quite easily without the host knowing anything about the iSCSI storage infrastructure.

It is easy to do with raw QEMU, like this:

-device virtio-scsi-pci,id=scsi0 \
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,drive=disk0 \
-drive id=disk0,file=iscsi://server.example.com/iqn.2000-01.com.example:server:target/0,if=none,cache=none,format=raw,media=disk,discard=unmap

Find the QEMU documentation here:

https://www.qemu.org/docs/master/qemu-doc.html#Device-URL-Syntax

I have been able to get the Libvirt abstraction to work like this:

<domain type='qemu'>
  <!-- snip -->
  <devices>
    <!-- snip -->
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <disk type='network' device='lun'>
      <driver name='qemu' cache='none' discard='unmap'/>
      <source protocol='iscsi' name='iqn.2000-01.com.example:server:target/0'>
        <host name='server.example.com' port='3260'/>
      </source>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <!-- snip -->
  </devices>
</domain>
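Because the QEMU process connects to the target itself, the hosts need no iSCSI pool configuration at all, and (as a sketch, with placeholder host and guest names) a live migration is just the ordinary virsh command:

virsh migrate --live webserver01 qemu+ssh://otherhost.example.com/system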
Troy