
I am setting up a lab and want to be sure the following makes sense / is possible:

One server running vSphere 5.5 with two Fibre Channel HBAs, hosting two Windows Server 2012 Hyper-V VMs, each bound to one HBA.

I'm using vSphere because it supports nested virtualization, but I'm really setting up this lab to test out Hyper-V and live migrations.

Will I easily be able to bind each VM to a physical HBA on the host, or are there any caveats I should know about?

Edit: Sorry this question was vague; I have not used vSphere before, nor have I worked with HBA cards in a virtualized setup like this.

I located the HBA's drivers for both vSphere and Windows.

Now my question is this: do I install the vSphere drivers on the host and make the HBAs available to the guests somehow, OR should I use PCI pass-through and install the HBA drivers on the Windows guests?

The host is a Dell T610, which I'm pretty sure supports IOMMU. I think PCI pass-through is the only way to achieve what I want. Is this correct?
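For context, if the T610's BIOS does expose VT-d/IOMMU, marking the HBA for pass-through (DirectPath I/O) in the vSphere Client and attaching it to a guest results in entries along these lines in the VM's `.vmx` file. This is only a sketch; the device index and PCI/vendor/device IDs below are placeholders, not the actual values for any particular HBA:

```
pciPassthru0.present = "TRUE"
pciPassthru0.id = "00:0b.0"
pciPassthru0.deviceId = "0x1234"
pciPassthru0.vendorId = "0xabcd"
```

With the device passed through, the guest OS sees the physical HBA directly, so the Fibre Channel drivers would be installed inside the Windows guest rather than on the ESXi host.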

red888
  • I think you can do this with PCI passthrough. – joeqwerty Oct 23 '13 at 20:59
  • 1
    is there a reason you're not just using shared vmdks between the two hyper-v vms? Would give the same functionality. – Trondh Oct 24 '13 at 09:47
  • Possibly, but I have an actual SAN device I am planning on connecting this server to. I want to test serving up LUNs from the SAN directly to the Hyper-V VMs. I want to use the VMs like they are two physical servers attached via HBAs to a SAN. – red888 Oct 24 '13 at 12:40

1 Answer


Sorry, I forgot about this question; I arrived at a solution some time ago.

Turns out the server did not support PCI pass-through.

Instead, I served up LUNs to the host and used shared VMDKs as suggested, which worked well enough.
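For anyone finding this later: the shared-VMDK setup boils down to a few `.vmx` settings on both VMs. Assuming the disk is created eager-zeroed thick (e.g. with `vmkfstools -d eagerzeroedthick`) and attached to a dedicated virtual SCSI controller with bus sharing enabled, the relevant entries look roughly like this; controller and disk numbers, and the file name `shared.vmdk`, are illustrative:

```
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "shared.vmdk"
```

`sharedBus = "virtual"` covers the case here, where both VMs sit on the same host; VMs on different hosts would need `"physical"` bus sharing and an RDM or shared datastore instead.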

Unfortunately I won't be able to play with stuff like Server 2012's native MPIO offering, but I'm limited by the hardware.

red888