
I installed Windows Server 2016 Datacenter on a Dell T620, and then I installed the Hyper-V role. Next, I created a NIC team consisting of two physical 1 Gbps network adapters. The team is called LANTeam, and its settings are: Teaming Mode - Switch Independent, Load Balancing Mode - Dynamic, Standby Adapter - None (all adapters active).
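For reference, the PowerShell equivalent of what I did in the GUI would be roughly this (the adapter names 'NIC1' and 'NIC2' are placeholders for my two physical ports):

    # Create the switch-independent team with dynamic load balancing.
    # 'NIC1' and 'NIC2' stand in for the real physical adapter names,
    # which can be looked up with Get-NetAdapter.
    New-NetLbfoTeam -Name "LANTeam" `
                    -TeamMembers "NIC1","NIC2" `
                    -TeamingMode SwitchIndependent `
                    -LoadBalancingAlgorithm Dynamic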

In this server's Network Connections control panel, I see all of my physical NICs, and I also now see one more connection called 'LANTeam'. That is the name of the connection, but the Device Name is 'Microsoft Network Adapter Multiplexor Driver'.

If I double-click on this network connection, it shows a speed of 2.0 Gbps, which makes sense since these are 2 x 1 Gbps connections teamed together.
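The same figure can be checked from PowerShell:

    # The team interface reports the aggregate link speed of its members.
    Get-NetAdapter -Name "LANTeam" | Select-Object Name, InterfaceDescription, LinkSpeed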

Here's where things get a little cloudy for me:

I open Hyper-V Manager and click on Virtual Switch Manager. I create a new Virtual Switch (External) and select 'Microsoft Network Adapter Multiplexor Driver' from the drop-down list.

I name this switch 'LAN vSwitch'
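In PowerShell terms, that step would be roughly:

    # Bind an external virtual switch to the team interface.
    # -AllowManagementOS keeps a host-side vNIC on the switch; drop it if the
    # host should not share this switch with the VMs.
    New-VMSwitch -Name "LAN vSwitch" -NetAdapterName "LANTeam" -AllowManagementOS $true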

Next, I create my first VM. In its Properties window I select 'LAN vSwitch' from the drop-down.
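Scripted, that would be something like the following (the VM name is a placeholder):

    # Attach the VM's default network adapter to the external switch.
    Connect-VMNetworkAdapter -VMName "VM01" -SwitchName "LAN vSwitch"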

When I start this VM (I installed Windows Server 2016 on it), go to Network Connections, and double-click on the one (and only) network adapter (which is just called 'Ethernet'), it shows a speed of only 1.0 Gbps.

Why not 2.0 Gbps? My goal is to create a few VMs, all having a 2.0 Gbps Ethernet connection.


1 Answer


The discrepancy you're seeing is only in the reported speed of the NIC. Some background first:

Windows is lying a bit when it tells you that the teamed NICs are running at 2 Gbps, as that's not really how teaming or bonding works. With teaming, you can load balance discrete connections across the two NICs, but a single connection can only saturate a single NIC. Teaming only becomes effective when dealing with multiple network endpoints, so it's generally best established on the VM host. Establishing bonds or teams within VMs rather than on the host can have strange consequences on various platforms, and you should avoid doing that if you at all can. In general, put the bond in the place that's going to see the largest number of connections, and that's usually as close to a network trunk as you can get.

Back to the speed reporting issue within your VM - it's not lying. You have 1 Gbps NICs plugged into your host and joined to the vSwitch that's providing networking for your instances. This reduces the reported speed of that entire vSwitch to 1 Gbps, and this is a known flow control limitation of Hyper-V. The vSwitch ignores the reported capacity of the bond, as that is immaterial to flow control. You can still push a total of 2 Gbps from the host, just not to any one VM.

If you still want host-system-bus-speed networking between VMs, you can create an "empty" vSwitch that doesn't connect to any physical NIC, only to each VM and the host. This can be useful if you have a lot of inter-VM east-west traffic.
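A minimal sketch of that, assuming a switch named "Internal-East-West" and illustrative VM names:

    # An internal switch has no physical uplink; it connects the VMs and the
    # host's management OS to each other over the VMBus.
    New-VMSwitch -Name "Internal-East-West" -SwitchType Internal

    # Give each VM a second adapter on the internal switch, alongside its
    # existing adapter on 'LAN vSwitch' (VM names are placeholders).
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "Internal-East-West"
    Add-VMNetworkAdapter -VMName "VM02" -SwitchName "Internal-East-West"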

Building on the example above, you could get around this issue almost entirely by terminating layer 2 at the hypervisor. You could join all of your VMs to that empty vSwitch, enabling bus-speed communication between them. Once that is established, you could use the Hyper-V host as a gateway, routing the layer 3 traffic from the fully virtual vSwitch to a layer 3 addressed team on the host. This introduces a few network complexities, such as the need to port forward and to use NAT. However, Hyper-V has very friendly controls for this.
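A rough sketch of that approach using the built-in NAT support in Server 2016; the switch name, subnet, addresses, and port mapping below are all illustrative:

    # Create an internal switch and give the host's vNIC on it an address;
    # that address becomes the default gateway for the VMs on the switch.
    New-VMSwitch -Name "NATSwitch" -SwitchType Internal
    New-NetIPAddress -IPAddress 192.168.100.1 -PrefixLength 24 `
                     -InterfaceAlias "vEthernet (NATSwitch)"

    # NAT the VM subnet out through the host's teamed connection.
    New-NetNat -Name "VMNat" -InternalIPInterfaceAddressPrefix "192.168.100.0/24"

    # Example port forward: expose a VM's RDP port on the host.
    # The VM would be addressed inside 192.168.100.0/24 with 192.168.100.1
    # as its gateway.
    Add-NetNatStaticMapping -NatName "VMNat" -Protocol TCP `
                            -ExternalIPAddress 0.0.0.0 -ExternalPort 33389 `
                            -InternalIPAddress 192.168.100.10 -InternalPort 3389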
