
We have two datacentres on either side of the Atlantic, connected by a 10Gb VPLS link with 80ms latency and better than 0.0000001% packet loss.

When moving VMs between datastores on either end of the link we are seeing extremely slow speeds, e.g. 15MB/s.

We have confirmed the underlying performance of the storage arrays, and throughput and packet loss tests confirm that all the networks involved are running at 10Gb. Local data transfers within the same vCenter are very quick. We have also run iperf between VMs in each DC.
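
For reference, this is roughly how we separate a windowing problem from a capacity problem with iperf: compare a single stream at the default window against progressively larger windows. A rough sketch, assuming iperf3 is installed on both ends with `iperf3 -s` running on the far-end VM (the hostname below is a placeholder):

```python
# Sketch: sweep iperf3 socket-buffer sizes against a remote server to see
# whether throughput scales with window size (the classic long-fat-network
# symptom). Assumes `iperf3 -s` is already listening on the far end; the
# hostname is a placeholder, not a real host.
import subprocess

REMOTE = "vm.remote-dc.example"  # placeholder for the far-end test VM

for window in ["128K", "1M", "8M", "32M"]:
    print(f"--- window {window} ---")
    subprocess.run([
        "iperf3",
        "-c", REMOTE,  # client mode, connect to the far-end server
        "-w", window,  # requested socket buffer / TCP window size
        "-t", "10",    # run each test for 10 seconds
    ], check=True)
```

If throughput climbs in step with the window size, the link itself is fine and the bottleneck is the sender's effective TCP window.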

I assume this is due to a TCP windowing issue or something similar to how SMB/CIFS struggles with high latency links.
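
A quick bandwidth-delay product calculation using the numbers above is consistent with that theory; a minimal sketch:

```python
# Back-of-the-envelope bandwidth-delay product check for this link.
# All numbers come straight from the question: 10 Gb/s, 80 ms RTT,
# and the observed ~15 MB/s migration rate.

link_bps = 10 * 10**9  # 10 Gb/s link
rtt_s = 0.080          # 80 ms round-trip time

# Window needed to keep a single TCP stream full on this link:
bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 2**20:.0f} MiB")  # ~95 MiB

# Conversely, the effective window implied by the observed throughput:
observed_Bps = 15 * 2**20  # ~15 MB/s as seen during migrations
implied_window = observed_Bps * rtt_s
print(f"Implied window: {implied_window / 2**20:.2f} MiB")  # ~1.2 MiB
```

The observed ~15 MB/s at 80ms RTT works out to an effective window of roughly 1.2 MiB, in the ballpark of a typical default socket buffer, while saturating the link would need a window near 100 MB.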

Is there any configuration within ESXi or vCenter to optimise this, such as specifying larger buffers or larger window sizes?

We are running vSphere 6.5 Enterprise Plus with vCenters in Enhanced Linked Mode. These are separate clusters and do have a stretched VLAN.

ZZ9
  • Is the traffic routed via a router that does DPI? If so, please deactivate it for that traffic; it puts a high load on the router CPU and can affect throughput. – yagmoth555 Apr 18 '18 at 16:03
  • There is no router performance bottleneck; our routers/firewalls are capable of 150Gbps and show no CPU or ASIC load during tests, and DPI is disabled anyway. This is not a network capacity issue. We have run iperf over the link at 10Gbps as well to confirm. – ZZ9 Apr 18 '18 at 18:00

0 Answers