
I want to set up Hyper-V server according to the following:

Diagram

I want the Hyper-V server, if both interfaces are up, to use the NIC connected to Switch A for storage traffic and the NIC connected to Switch B for all other IP traffic.

I know this configuration is achievable in ESXi by adding both interfaces as active to a vSwitch and using the "Override switch failover order" option on the port groups or VMkernel interfaces to set up an active/standby configuration for those specific interfaces. Is there any way to do the same thing with Windows Server 2012 and Hyper-V?

The reason I want to do it this way is that I do not have the necessary equipment for a dedicated storage network, but I still want to separate out the storage traffic, minimize the load on the link between the two switch stack members, and minimize contention for network resources, while still providing failover.

The alternative would be to use port-based load balancing, but if I were to do that, I could not control the amount of traffic flowing across the cross-stack link, potentially making it a bottleneck.

Per von Zweigbergk
  • what's this "Link B to standby" thing? Is it a separate controller operating in standby mode? Or is it a second network interface at the same controller that you've teamed up with the first one in a failover-only fashion and which could be configured separately? – the-wabbit Feb 21 '13 at 12:23
  • The storage server will be set up with two 10G network cards. The network card connected to Switch A will be active by default, the network card connected to Switch B will, as long as the link to Switch A is up, not be used. – Per von Zweigbergk Feb 21 '13 at 12:26
  • It looks like a rather uncommon configuration for iSCSI storage controllers. And I cannot see the rationale behind this configuration as it seems inferior to the more simpler solutions having two active interfaces. Who is selling that? – the-wabbit Feb 21 '13 at 12:44
  • The rationale would be to keep iSCSI traffic on a single switch. But if you can use iSCSI multipathing on the same physical NICs that are also used for other IP traffic in a team to the Hyper-V Virtual Switch, this would also be an acceptable design. – Per von Zweigbergk Feb 21 '13 at 12:54
  • If you have a rationale which is not obvious from the setup, it is always a good idea to include it into the question as it not only likely to yield more specific answers, but also might help future visitors determine if the described problem is similar to theirs. – the-wabbit Feb 21 '13 at 13:25

2 Answers


You should not set up teaming for connections carrying iSCSI traffic, as it is unsupported (although it will arguably work, with some limitations).

http://technet.microsoft.com/en-us/library/ee338480(v=ws.10).aspx

Installing and Configuring Microsoft iSCSI Initiator

[...]

Applies To: Windows Server 2008 R2, Windows Server 2012

[...]

Use Microsoft MultiPath IO (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices.

[...]

  • Configure additional paths for high availability. Use MPIO or multiple connections per session (MCS) with additional network adapters in the server. This creates additional connections to the storage array in Microsoft iSCSI Initiator through redundant Ethernet switch fabrics.

Using MC/S would be the easiest route, as long as your target supports it. It would need two active interfaces with different IP addresses on the target side.
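A minimal sketch of the initiator side using the Microsoft iSCSI Initiator cmdlets on Server 2012. All addresses and the failover policy choice are placeholders/assumptions, not values from the question:

```powershell
# Hypothetical addresses: adjust to your target portals and local NICs.
# Register one target portal per path (one reachable via each switch).
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.50 -InitiatorPortalAddress 192.168.10.11
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.50 -InitiatorPortalAddress 192.168.20.11

# Connect one session per path, marked multipath-capable so that MPIO
# (rather than NIC teaming) handles failover between them.
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true `
    -TargetPortalAddress 192.168.10.50 -InitiatorPortalAddress 192.168.10.11
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true `
    -TargetPortalAddress 192.168.20.50 -InitiatorPortalAddress 192.168.20.11

# Optionally set the global MPIO default policy to Fail Over Only,
# so only one path carries traffic at a time:
#   mpclaim.exe -L -M 1
```

With a Fail Over Only (or "round-robin with subset") policy, the path over Switch A can be left active and the path over Switch B standby, which matches the active/standby behaviour asked for in the question.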

the-wabbit
  • In that case, as an alternative, can the same physical adapters be used both for iSCSI traffic using MCS as well as members of a team carrying VLAN traffic to the Hyper-V virtual switch? – Per von Zweigbergk Feb 21 '13 at 12:42
  • I noticed from reading the document that the reason Microsoft do not support teaming with iSCSI is because teaming is a "non-Microsoft" product. I was under the impression that in Windows Server 2012 teaming was built in to the OS and as such very much a Microsoft product. – Per von Zweigbergk Feb 21 '13 at 12:46
  • @PervonZweigbergk Technically, yes. In the docs you often will find that this kind of configuration is discouraged due to possible network contention issues. This being said, a lot of people are running exactly this kind of config. – the-wabbit Feb 21 '13 at 12:46
  • I'm aware of the network contention issues. That's exactly what I'm trying to address by (1) keeping the iSCSI traffic on a single switch under normal operating conditions and thus not loading the stacking links and (2) under normal operating conditions passing iSCSI traffic over one NIC and all other traffic on another NIC. This is what I'm trying to figure out if it's even possible to do with Hyper-V. I know it is possible with VMware. (Also, could you please edit your comment to make it clear what you're saying "yes" to?) – Per von Zweigbergk Feb 21 '13 at 12:49
  • @PervonZweigbergk In this case using MC/S with a single target but different sources and a policy of "Round-robin with a subset of paths" should get you there by setting one path to "Active" and the other to "Standby". Note that comments cannot be edited after 5 minutes, but readers should be able to figure out that I could not type a 37-words reply within 2 seconds so [this comment](http://serverfault.com/questions/481050/481067#comment534529_481067) references [the first of your two comments to my answer](http://serverfault.com/questions/481050/481067#comment534523_481067) – the-wabbit Feb 21 '13 at 13:15
  • Oh! Of course, how silly of me, I didn't realise there was posting time information as granular as by second. I was left headscratching myself to figure out what you were responding to. Thank you for your help. I will try this configuration of using iSCSI Multipathing on the same physical interfaces as teaming, since it seems it'll do what I want it to do. – Per von Zweigbergk Feb 21 '13 at 13:56
  • I just got the opportunity to try out this solution. It did not work. It is not possible to have physical interfaces be a member of a team while at the same time running IP directly over them. Did I misunderstand anything in the answer? – Per von Zweigbergk Feb 25 '13 at 13:04
  • @PervonZweigbergk I believe I misunderstood your request. You *can* have an interface in a Hyper-V virtual switch and use it for local traffic like iSCSI. You *cannot* use an adapter which is part of a Server 2012 team for any other purpose. – the-wabbit Feb 26 '13 at 07:21
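The arrangement described in that last comment can be sketched as follows; the switch, adapter, and address names are illustrative only:

```powershell
# Bind the Hyper-V virtual switch to one physical NIC and keep a
# management vNIC on the parent partition for general IP traffic.
New-VMSwitch -Name "vSwitch-LAN" -NetAdapterName "NIC-B" -AllowManagementOS $true

# Leave the second physical NIC outside any team or vSwitch and assign it
# an IP address directly, so the iSCSI initiator can use it for storage.
New-NetIPAddress -InterfaceAlias "NIC-A" -IPAddress 192.168.10.11 -PrefixLength 24
```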

Yes. This has nothing to do with Hyper-V specifically: use NIC Teaming (an OS feature) to make a virtual NIC out of the two physical ones, then use that virtual NIC in Hyper-V. NIC Teaming was added with Server 2012.
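For illustration, a team with one standby member can be created like this (the adapter names are placeholders):

```powershell
# Create a switch-independent team from the two physical NICs.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC-A","NIC-B" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Demote one member to standby for an active/standby arrangement.
Set-NetLbfoTeamMember -Name "NIC-B" -AdministrativeMode Standby
```

Note that the standby setting applies to the team as a whole, not per virtual NIC, so this alone does not reproduce the per-traffic-type failover order that ESXi port groups allow.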

TomTom
  • NIC teaming has been [unsupported for iSCSI with Hyper-V 1 & 2](http://support.microsoft.com/kb/968703). I remember reading that it is still discouraged for Hyper-V 3 since the way it is done it would not increase the available bandwidth and only add to redundancy. Multipath or [MC/S](http://forum.synology.com/wiki/index.php/How_to_use_iSCSI_Targets_on_Windows_computers_with_Multiple_Connections_per_Session) (i.e. a target with several interfaces / individual IP addresses) is what should be done for connecting to iSCSI instead. – the-wabbit Feb 21 '13 at 12:18
  • I'm aware that teaming will not increase the usable bandwidth for iSCSI. That's not what I'm trying to do. – Per von Zweigbergk Feb 21 '13 at 12:24
  • How would you specifically set up the teaming to do what was asked in the question? Can you make two virtual NICs, one of which uses NIC A as active and NIC B as standby, and the other one using NIC B as active and NIC A as standby? – Per von Zweigbergk Feb 21 '13 at 12:25