
I've got a setup as depicted in the picture below. The problem is catastrophically slow throughput when traffic is routed.

  • All WireGuard interfaces have MTU = 1420.
  • (1) and (2) are Debian servers; (3) is a Windows machine.
  • The only extra configuration done on (2) is net.ipv4.ip_forward = 1 (see the sketch after this list).
  • CPU and RAM usage on all machines is about 5%.
  • Bandwidth is measured with iperf3.
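
For reference, this is a minimal sketch of the forwarding setup described above; the interface name wg0 is an assumption, not taken from the actual configs:

```
# Persistently enable IPv4 forwarding on (2) - the only routing-related
# change mentioned in the question.
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-wg-forward.conf
sysctl --system

# Confirm the stated 1420-byte MTU on the (assumed) wg0 interface.
ip link show wg0 | grep -o 'mtu [0-9]*'
```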

[image: network topology]

Update

  • #2 is doing the routing between #1 and #3; #1 and #3 do not have a direct connection. #1, #2 and #3 are all WireGuard peers (see the sketch after this list).
  • I've updated the picture to show the bandwidth in each direction. How is it possible that the #2 -> #1 upload speed via WireGuard is so slow?
  • #1 is in Warsaw, #2 is in Ashburn (Hetzner), #3 is a VM running Windows 11.
  • I've tried Ubuntu for #2. Nothing changed ¯\_(ツ)_/¯
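
For context, here is a minimal sketch of what a hub setup on (2) could look like; the 10.0.0.0/24 overlay addresses, the wg0 interface name and the placeholder keys are assumptions, not the actual configuration:

```
# Hypothetical hub configuration on (2): a single WireGuard interface with
# both spokes as peers. Keys and addresses are placeholders.
ip link add wg0 type wireguard
ip link set wg0 mtu 1420
ip address add 10.0.0.2/24 dev wg0
wg set wg0 listen-port 51820 private-key /etc/wireguard/private.key
wg set wg0 peer <peer1-public-key> allowed-ips 10.0.0.1/32   # (1), Warsaw
wg set wg0 peer <peer3-public-key> allowed-ips 10.0.0.3/32   # (3), Windows VM
ip link set wg0 up
```

On (1) and (3), the hub peer's AllowedIPs has to cover the other spoke's address (e.g. the whole 10.0.0.0/24), otherwise traffic for it would never be sent into the tunnel toward (2).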

[image: download/upload speed between each peer]

Update 2

Info requested on the WireGuard IRC channel:

  • It could be related to reordering problems (WG is multi-core so on multi-queue NICs ...). You could test replacing (3) with Linux or else somehow disabling multi-queue NIC features on both (1) and (2) to see if things improve. – A.B Sep 19 '22 at 07:37
  • On both (1) and (2) multi-queue is disabled. `ethtool -l eth0` reports `Combined: 1` (RX/TX/Other: n/a) for both the pre-set maximums and the current hardware settings. – Ignatella Sep 19 '22 at 07:50
  • Just to cover all bases, what about testing with Linux as (3)? And also lowering the MTU even further below 1420? Anyway, I don't have other ideas. – A.B Sep 19 '22 at 08:22
  • @A.B There is an update on the issue. I've tested it with Linux hosts: all three hosts (1–3) are Linux machines with up to 200 Mbps without VPN. VPN speed between peers is around 100 Mbps, but VPN routing speed is still incredibly low :( – Ignatella Sep 25 '22 at 11:01
  • Furthermore, there is a related topic on Reddit: https://www.reddit.com/r/WireGuard/comments/xhv2kw/slow_routing/ – Ignatella Sep 25 '22 at 11:01
  • Are you using IPv6? Try disabling it everywhere. – gapsf Sep 25 '22 at 17:57
  • Can't figure out from your description: do you have two tunnels, one between 1 and 2 and a second between 2 and 3? Or just one tunnel between 1 and 2? – gapsf Sep 25 '22 at 18:06
  • Did you see fragmented packets on 1 and 3? – gapsf Sep 25 '22 at 18:09
  • Try a GRE tunnel between 1 and 2 instead of WireGuard. – gapsf Sep 25 '22 at 18:11
  • What is the RTT between 1 and 2, 2 and 3, and 1 and 3? – gapsf Sep 25 '22 at 18:19
  • iperf's default UDP datagram size is 1470 bytes. Your MTU is 1420, so that causes fragmentation; lower the iperf datagram size to 1400 [see the sketch after these comments]. – gapsf Sep 25 '22 at 18:23
  • What MTU does iperf report after the test? – gapsf Sep 25 '22 at 18:35
  • On (2), clean up the nftables ruleset so it is empty, like on (1). – gapsf Sep 25 '22 at 18:38
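
Putting the fragmentation and firewall suggestions from the comments into commands, a minimal sketch; the peer address 10.0.0.1 is a placeholder, not the real overlay address:

```
# UDP test with a 1400-byte payload so that payload + UDP/IP headers stay
# below the 1420-byte WireGuard MTU and avoid fragmentation.
iperf3 -c 10.0.0.1 -u -b 200M -l 1400

# TCP comparison run with a clamped MSS.
iperf3 -c 10.0.0.1 -M 1300

# Check whether a 1400-byte packet fits without fragmentation
# (1372 bytes of payload + 8 ICMP + 20 IP = 1400).
ping -M do -s 1372 -c 4 10.0.0.1

# On (2): inspect and, if it is safe to do so, flush the nftables ruleset,
# as suggested in the last comment.
nft list ruleset
nft flush ruleset
```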

0 Answers