
So, a little info about my setup:

  • Running Proxmox 5.3-11 with latest updates
  • Running 2 Windows 2012 guests (we'll call them winhost1 and winhost2) with virtio NICs and latest spice-guest-tools (with the netsh modifications described here)

I'm using iperf between the 2 Windows guests to gauge the bandwidth between them. Going from winhost1 to winhost2, and vice versa, I see a maximum bandwidth of ~350 Mbps. However, when I iperf from either winhost1 or winhost2 to the Proxmox host, I get 2+ Gbps. Going the other direction in either case yields similar results.
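For reference, the tests were run along these lines (a sketch, assuming iperf3; 10.0.0.102 is a placeholder for winhost2's address, substitute the real IPs):

```shell
# On winhost2 (placeholder IP 10.0.0.102): start the server side
iperf3 -s

# On winhost1: run a 30-second TCP test toward winhost2
iperf3 -c 10.0.0.102 -t 30

# Same pair, reversed direction without swapping roles
# (-R makes the server end send)
iperf3 -c 10.0.0.102 -t 30 -R
```

The same client commands were pointed at the Proxmox host's IP for the VM-to-host numbers.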

The bridge that the winhost1 and winhost2 virtual NICs are enslaved to (vmbr0) was created in the Proxmox GUI with a balance-alb bond as the "Bridge port". Below is the output of bridge link show vmbr0:

9: bond1 state UP : <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 100
26: tap106i1 state UNKNOWN : <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 100
27: tap102i1 state UNKNOWN : <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 100
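For completeness, the host-side network config the GUI generates for this layout looks roughly like the following /etc/network/interfaces fragment (a sketch only; eno1/eno2 are placeholder physical NIC names and the addresses are illustrative):

```
auto bond1
iface bond1 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode balance-alb

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
```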

Is there anything I can do on either the host or the VMs to address this issue? I assume it's related to the host kernel or bridge configuration in some way, since I see the desired bandwidth between the VMs and the host, just not between the VMs attached to the same host bridge.

Thank you in advance

maff1989