3

I'm sure this question has been asked somewhere before. In fact, I'm sure I've read about it before too, but I can't find any resources to help me along my way.

What I'm trying to do is deploy a set of Hyper-V servers without having to do anything other than start the process. I can think of ways to configure everything through PowerShell and/or unattend.xml, except for the network adapters. The commands are available, but there's one significant problem:

How do I get Windows to consistently detect the correct adapter to assign each network to?

These are clustered Hyper-V hosts with multiple IP addresses and VLANs, and teamed adapters, but my understanding is that Windows detects adapters in a random order. To script it, I need consistency. I can't assume that Local Area Connection 12 is always port 3 on card 2, for example. The same physical port on each cluster node will be connected to the same VLAN or aggregate.

  • Do I have to go around and collect the MAC address of every port on every adapter and have some kind of lookup table in my scripts (sketched after this list)?
  • Is there an attribute in WMI/registry that I can reference when configuring my adapters and teams?
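
For illustration, a minimal sketch of that lookup-table approach. The MAC addresses and role names are placeholders; every value would need to come from your own inventory of each host:

```powershell
# Minimal sketch of the MAC-based lookup approach.
# The MAC addresses and role names below are placeholders for this host.
$macMap = @{
    '00-15-5D-00-01-01' = 'Management'
    '00-15-5D-00-01-02' = 'iSCSI-A'
    '00-15-5D-00-01-03' = 'iSCSI-B'
    '00-15-5D-00-01-04' = 'Guest-Trunk'
}

foreach ($nic in Get-NetAdapter -Physical) {
    $role = $macMap[$nic.MacAddress]
    if ($role) {
        # Rename the adapter so later teaming/IP configuration can use a predictable name
        Rename-NetAdapter -Name $nic.Name -NewName $role
    }
}
```
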
john
  • So your hosts are members of the multiple VLANs, and have an IP on each VLAN? Is there a particular reason for that? Do you separate VLANs for, cluster, replication, migration etc? – Drifter104 Jul 10 '15 at 16:20
  • That's correct. Regardless of the cluster-specific networks (CSV, live migration, etc.) being separated there will always be at least four distinct groups: management, iSCSI, cluster, and guest. Some will be in specific VLANs, others will be trunked. I need to identify the correct ones. – john Jul 10 '15 at 16:30
  • If you put them all in trunk ports, or even all but, say, storage/iSCSI, you could use vNICs on the host for each VLAN. This would then be easy to script, because you don't actually care how Windows configures the physical NICs. I do something similar: I have a team for storage and a team for everything else. – Drifter104 Jul 10 '15 at 16:37
  • I see what you're saying. Reduce the complexity. I'm all for it, but it doesn't solve the problem. I still need to distinguish the iSCSI ports from the trunked. – john Jul 10 '15 at 16:42
  • Is there any difference between the cards you use? For example, the onboard card is Intel and the card you add is Broadcom? If there is, then you can do a Get-WmiObject for that and use it in a script. Or if they are all Intel and you have dual and quad cards, that is in the description. – Drifter104 Jul 10 '15 at 16:47
  • For argument's sake, yes. – john Jul 10 '15 at 16:48
  • Something like this should start you off....... Get-WmiObject -Class Win32_NetworkAdapterConfiguration -ComputerName someserver | select description, index | Where-Object {$_.description -like "*intel*"} will get you the index value of all the Intel cards (expanded in the sketch after these comments). – Drifter104 Jul 10 '15 at 17:03
  • Thanks, I'll give it a try. Does the interface index survive an OS rebuild? – john Jul 10 '15 at 17:07
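
A slightly expanded version of that command, which also selects the MAC address so the output can feed the kind of lookup table mentioned above. The "*intel*" filter and someserver are just the examples from the comment:

```powershell
# Expanded version of the command from the comments; also selects the MAC
# address so the output can feed a lookup table.
Get-WmiObject -Class Win32_NetworkAdapterConfiguration -ComputerName someserver |
    Where-Object { $_.Description -like '*intel*' } |
    Select-Object Description, Index, MACAddress |
    Sort-Object Index

# Note: the Index is assigned when the adapter is installed, so it may not
# survive an OS rebuild; the MAC address is the stable identifier.
```
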

1 Answer


Answering my own question, because I stumbled across the answer the other day.

What I was looking for was Consistent Device Naming. Apparently this problem is not specific to Windows devices.

Consistent Network Device Naming is a convention for naming Ethernet adapters in Linux.

It was created around 2009 to replace the old standard ethX which caused problems on multihomed machines because the network interface controllers (NICs) would get named based on the order in which they were found by the kernel as it booted. Adding new interfaces could cause the previously added ones to change names.

https://en.m.wikipedia.org/wiki/Consistent_Network_Device_Naming

This problem does not manifest itself in the same way under Windows; rather, it manifests as I described in my question. Ultimately, Windows uses the name provided by the BIOS rather than the default "Local Area Connection #X", which means the interface names persist between OS builds.

I'm not sure how prevalent hardware support for it is, but recent Dell and HP server generations have this feature in the BIOS. It is also fully supported for physical device deployment in System Center Virtual Machine Manager. Scripts can also be written to configure interfaces, because it's safe to assume the interface name is always the same.
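
As a rough illustration, here is the kind of script that becomes safe once the names are stable. The interface names ("Embedded NIC 1", "Slot 2 Port 1") and the IP addresses are assumptions; the actual strings depend on what your BIOS reports:

```powershell
# Sketch of scripting against Consistent Device Naming. Names and addresses
# below are placeholders; the real interface names come from the BIOS.

# Team the embedded ports for management/guest traffic and put a vSwitch on top.
New-NetLbfoTeam -Name 'GuestTeam' -TeamMembers 'Embedded NIC 1', 'Embedded NIC 2' -Confirm:$false
New-VMSwitch -Name 'Guest' -NetAdapterName 'GuestTeam' -AllowManagementOS $true

# Configure the iSCSI ports on the add-in card directly by name.
New-NetIPAddress -InterfaceAlias 'Slot 2 Port 1' -IPAddress 10.0.10.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias 'Slot 2 Port 2' -IPAddress 10.0.11.11 -PrefixLength 24
```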

For anyone working with Virtual Machine Manager, see http://www.hyper-v.nu/archives/mvaneijk/2013/08/system-center-vmm-2012-r2-bare-metal-deployment-with-converged-fabric-and-network-virtualization-part-1-intro/. The series of posts is really good quality.

john