
Single Dell EqualLogic PS5000E array, firmware 7.0.5, dual controllers; NIC0 and NIC1 on each controller are on the iSCSI VLAN, NIC2 on each controller is on the management VLAN.

VMware vSphere 5.5 with software iSCSI; each host has two physical NICs connected to the iSCSI VLAN. iSCSI traffic runs through a standard vSwitch with two vmkernel ports; each vmkernel port has one physical NIC set to active and the other set to unused. Port bindings are set up correctly in the network configuration tab of the software iSCSI adapter.
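(For reference, the port binding described above can be checked programmatically. Below is a minimal pyVmomi sketch; the host name, credentials, and the vmhba name are placeholders for this environment, and pyVmomi is just one way to query this.)

```python
# Minimal pyVmomi sketch (placeholders throughout): list the vmkernel ports
# bound to the software iSCSI adapter to confirm the two-uplink binding.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host="esxi01.example.local", user="root", pwd="password", sslContext=ctx)

# Walk a simple inventory: first datacenter -> first compute resource -> first host.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

# "vmhba33" is a common name for the software iSCSI adapter; adjust to match your host.
for port in host.configManager.iscsiManager.QueryBoundVnics(iScsiHbaName="vmhba33"):
    print(port.vnicDevice, port.pathStatus)

Disconnect(si)
```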

The PS5000E will not be presented to the ESXi hosts and will not be used as a VMDK datastore. Instead it will be presented as an iSCSI device to a VM that runs Windows Server 2012 R2, formatted with NTFS, and used by that VM exclusively. The VM will have two virtual NICs on the iSCSI VLAN and the Dell EqualLogic Host Integration Tools for Microsoft installed, which should take care of the OS side of MPIO to the PS5000E.

The question is whether the ESXi hosts need the Dell EqualLogic Multipathing Extension Module (MEM) for VMware vSphere to take full advantage of MPIO between a VM on those hosts and the PS5000E, given that the storage is not presented to ESXi.

It seems to me that the module is not required because the PS5000E is not presented to ESXi and any iSCSI traffic is pass-through, but I can't find an actual reference to back this up.

Sidebar discussion: The only reason to give the VM two virtual NICs in the iSCSI VLAN is to aggregate storage bandwidth in and out of the VM to the PS5000E.

Reality Extractor

2 Answers


The Dell EqualLogic MEM is never required, even when you are using the array for VMDK storage. VMware's built-in path selection policies will get the job done, though not quite as well (e.g. Round Robin, or the less desirable default "Fixed" PSP).

Since you're doing direct iSCSI access from your virtual machine, installing the Host Integration Tools directly in the VM is a good call. This also gives you access to "Smart Copy" snapshots should you need them, in addition to the DSM (device-specific module) for improved MPIO functionality.

Example configuration with two NICs / two VM port groups (from my comment on the other answer): create two virtual machine port groups on your iSCSI vSwitch, then override the failover order so that each port group has access to only one NIC, with the other set to unused (port group 1 -> NIC 1 active / NIC 2 unused; port group 2 -> NIC 1 unused / NIC 2 active). Then give the VM in question two NICs, each attached to one of those port groups. This simulates the configuration "expected" by the EHCM MPIO driver and gives you the desired load-balancing and session-management behavior (and, hopefully, the performance benefits).
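A rough pyVmomi sketch of that port-group layout is below. It assumes `host` is a `vim.HostSystem` obtained as in the sketch under the question, and the vSwitch, port-group, and vmnic names are placeholders; the vSphere Client or esxcli will do the same job.

```python
# Sketch only: create two VM port groups on the iSCSI vSwitch, each pinned to a
# single uplink. Names below are placeholders; `host` is a vim.HostSystem.
from pyVmomi import vim

def add_pinned_portgroup(host, vswitch, pg_name, active_nic, vlan_id=0):
    """Add a VM port group whose teaming override lists exactly one active uplink.
    Uplinks not listed as active or standby are treated as unused."""
    nic_order = vim.host.NetworkPolicy.NicOrderPolicy(activeNic=[active_nic], standbyNic=[])
    policy = vim.host.NetworkPolicy(
        nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(nicOrder=nic_order))
    spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=vlan_id, vswitchName=vswitch, policy=policy)
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)

# Port group 1 -> vmnic2 active only; port group 2 -> vmnic3 active only.
# vlan_id=0 assumes the uplinks sit on access ports in the iSCSI VLAN.
add_pinned_portgroup(host, "vSwitch1", "iSCSI-VM-PG1", "vmnic2")
add_pinned_portgroup(host, "vSwitch1", "iSCSI-VM-PG2", "vmnic3")
```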

Reminder: If it doesn't work, just give support a call! EQL support is happy to lend a hand with little configuration tweaks like this.

JimNim
  • Thanks for the clarification, Jim. I'll have to look into potential unintended consequences of creating two iSCSI port groups, but at face value it seems like good advice. – Reality Extractor Aug 01 '14 at 04:03

The extension module only helps if you are using iSCSI on the host itself. Since you are using iSCSI from within the guest, Dell's MPIO DSM for Windows should handle MPIO, though I doubt that will work with your current setup: the DSM relies on having two physical NICs, and two virtual NICs connected to the same vSwitch would effectively reduce MPIO to VMware's built-in path selection. You'd need two vSwitches, one for each physical NIC (vSwitch1 and vSwitch2), and connect one of the guest's virtual NICs to vSwitch1 and the other to vSwitch2. That way you can benefit from Dell's MPIO DSM.
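If you go that route, here is a rough pyVmomi sketch of the two-vSwitch layout. It assumes `host` is a `vim.HostSystem` as in the earlier sketches, and the vSwitch, port-group, and vmnic names are placeholders; note that each physical NIC must first be released from the existing iSCSI vSwitch, since an uplink can belong to only one vSwitch.

```python
# Sketch only: one vSwitch per physical NIC, each carrying one guest-facing
# port group. Names are placeholders; `host` is a vim.HostSystem.
from pyVmomi import vim

def add_single_uplink_vswitch(host, vswitch_name, vmnic, pg_name, vlan_id=0):
    """Create a vSwitch bonded to exactly one pNIC, plus a VM port group on it."""
    netsys = host.configManager.networkSystem
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=[vmnic]))
    netsys.AddVirtualSwitch(vswitchName=vswitch_name, spec=vss_spec)
    pg_spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=vlan_id, vswitchName=vswitch_name,
        policy=vim.host.NetworkPolicy())  # inherit teaming from the vSwitch
    netsys.AddPortGroup(portgrp=pg_spec)

# Each vmnic must already be free (not an uplink of another vSwitch).
add_single_uplink_vswitch(host, "vSwitch-iSCSI-A", "vmnic2", "iSCSI-Guest-A")
add_single_uplink_vswitch(host, "vSwitch-iSCSI-B", "vmnic3", "iSCSI-Guest-B")
```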

lsmooth
  • Interesting, so what you are saying is that Dell's MPIO could actually reduce performance in my current setup? I don't need MPIO inside the guest OS for redundancy, since redundancy is provided at the host and physical switch level. My physical connections are 1 GbE and the array can saturate a single connection, so I am thinking of adding a second virtual NIC for more bandwidth. Perhaps the better solution would be NIC teaming inside the Windows VM for bandwidth aggregation? – Reality Extractor Jul 29 '14 at 23:09
  • You should never use NIC teaming with iSCSI - this is "unsupported" by Microsoft. See my edited answer for an example config that would accomplish what you're after. – JimNim Jul 31 '14 at 03:21