My ESXi hosts are connected to two switches as shown in [figure 1](https://i.stack.imgur.com/5NrPg.png).

When I configure the load-balancing method of the trunks to "IP hash", the virtual hosts' MAC addresses flap between the port-channel groups:

21-07-2017 09:00:45     Warning (4)     SW_MATM-4-MACFLAP_NOTIF   Host 0050.5688.1141 in vlan 60 is flapping between port Po1 and port Po3

If the trunks are configured to load-balance by src-MAC, the virtual host no longer flaps, but I lose the benefits of the IP-hash load-balancing method. Does anyone know how large the performance loss caused by the MAC flapping is?
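The two load-balancing variants can be set on the vSwitch roughly like this via esxcli (a sketch; vSwitch0 is a placeholder for the actual vSwitch name, and the same policies can also be chosen in the vSphere client):

```
# "Route based on IP hash" – the setting that goes with the flapping between Po1 and Po3
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash

# "Route based on source MAC hash" – no flapping, but also no per-flow distribution
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=mac

# check the currently active policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```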

Backup-Question: Is there a "supported" way to connect two switches to two esxi-hosts (VMware ESXi 6.5) without VDS?


2 Answers


The answer to the second question is simple: no, there is no supported way to connect one ESXi host to two switches; see KB1001938:

  • ESXi/ESX host only supports NIC teaming on a single physical switch or stacked switches.
  • VMware supports only one Etherchannel bond per Virtual Standard Switch (vSS).

The answer to the first question is trickier: it depends. In my tests, the VM was flapping only a few times, so there was absolutely no increase in the CPU usage of the switch. But as described here, this can become an issue when the MAC table changes quickly.
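If you want to keep an eye on this on the switch itself, something along these lines works on Cisco IOS (the MAC address is the one from the log message in the question):

```
! where the address is currently learned and whether it keeps moving
show mac address-table address 0050.5688.1141

! overall size and churn of the MAC table
show mac address-table count

! check whether the flapping shows up as control-plane load
show processes cpu sorted
```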

derhelge

With an ESXi standard vSwitch, don't configure any bonding/trunking on the physical switch(es). Regardless of which load-balancing method you choose, any given connection only uses a single uplink port. When a link goes down, the vNIC moves to another uplink according to the failover scheme you configured.
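On the vSwitch that means sticking with the default teaming policy ("Route based on originating virtual port ID") and letting the team handle link failures; a minimal esxcli sketch, assuming the vSwitch is vSwitch0 and the uplinks are vmnic0 and vmnic1:

```
# default, non-EtherChannel teaming policy
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid

# both uplinks active; the vSwitch moves vNICs on link failure
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1
```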

A vSwitch never forwards a frame received on one physical uplink back out another, so it cannot create a switching loop. This makes spanning tree, trunking/bonding and more elaborate features unnecessary; they potentially do more harm than good.

If you do need more bandwidth for a guest VM, just add more vNICs to it and the load will be distributed over all uplink ports and physical switches, depending on the load-balancing scheme.
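A hypothetical PowerCLI sketch of adding such extra vNICs (the VM name "fw01", the port group names and the adapter type are placeholders):

```
# add two more vmxnet3 adapters to the guest; each can land on a different uplink/switch
$vm = Get-VM -Name "fw01"
New-NetworkAdapter -VM $vm -NetworkName "PG-Uplink-A" -Type Vmxnet3 -StartConnected
New-NetworkAdapter -VM $vm -NetworkName "PG-Uplink-B" -Type Vmxnet3 -StartConnected
```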

Zac67
  • No bonding/trunking at all? Wouldn't traffic from the physical switch (or beyond) towards the VM then be limited to the bandwidth of a single port? – derhelge Aug 22 '17 at 06:52
  • It simply works better without. You don't need it and it interferes with controlling the flows. Link bonding only increases the bandwidth for multiple connections anyway, never for a single flow. Do you need to increase ingress or egress bandwidth or both directions (from the VM/host POV)? What does the scenario look like (hosts to physical switches layout; large number of clients, low number of partners, ....)? – Zac67 Aug 22 '17 at 10:30
  • As hinted in [figure1](https://i.stack.imgur.com/5NrPg.png), the physical switches are used as routers. Both have a 1G uplink to an ISP. The VMs on the ESXi hosts are virtual firewalls with a few hundred clients. Therefore it would be great to achieve more than 1G for the overall flows. – derhelge Aug 22 '17 at 13:55
  • I'd use four (or two) vNICs per VM, attach them to both switches (either with dedicated pNICs (for four) or teamed pNICs (for two)) and have the clients connect round-robin. On the switches, configure ports without bonding. If you use dedicated pNICs, add port groups with a single active NIC and the other one as fail-over and attach each vNIC to one of these port groups. Have you read [this](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/virtual_networking_concepts.pdf)? Aged, but still a good primer. – Zac67 Aug 22 '17 at 17:53
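A sketch of the port-group layout from the last comment, via esxcli (the port group and uplink names are assumptions):

```
# port group A: vmnic0 active, vmnic1 as fail-over
esxcli network vswitch standard portgroup policy failover set --portgroup-name=PG-A --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# port group B: the mirror image
esxcli network vswitch standard portgroup policy failover set --portgroup-name=PG-B --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```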