
We have a problem where connections from multiple client networks, through a WireGuard tunnel, to a Samba share on a server are slow, but oddly it only affects Windows 10, and only uploads.

A Linux host can upload at up to 120 MB/s, while Windows 10 only manages 10-50 MB/s (it varies across our different networks). It's not limited to SMB; I get the exact same test results with iperf (UDP and TCP).

Out of curiosity I tested whether Windows 11 is also affected, and it is not! What could this be, and how can I fix it?

Melkor333

2 Answers


The experimental kernel driver they added in the 0.4.8 release of the Windows client broke the Windows upload speed. Just run an older version until they fix it.

https://download.wireguard.com/windows-client/wireguard-amd64-0.4.7.msi

  • Welcome to Server Fault! Your answer suggests a workable solution to the question is available via another website. The Stack Exchange family of Q&A websites [generally frowns on this type of answer](https://meta.stackexchange.com/questions/8231/are-answers-that-just-contain-links-elsewhere-really-good-answers). Please read [How do I write a good answer?](http://serverfault.com/help/how-to-answer) and consider revising your answer to include the steps required to resolve the issue. And don't forget to take the [site tour](http://serverfault.com/tour). – Paul Dec 05 '21 at 15:15
  • @Paul but his answer doesn't say "look here for the answer" - the answer directly states that a version of the software is broken and you should use an older version. The link is just a convenience and is clearly a link to a file, not another website. – fabspro Aug 20 '22 at 03:31

It seems to be the same, or at least a similar, problem as the one described by Dropbox (https://dropbox.tech/infrastructure/boosting-dropbox-upload-speed). As far as I understand (please correct me!), when the Linux gateway uses NIC multi-queue with WireGuard, a lot of packet reordering happens, and apparently Windows 10 doesn't handle that well. The reordering somehow causes Windows 10 to throttle its sending rate, waiting for an ACK after almost every data packet instead of sending multiple packets and accepting selective ACKs.

I sadly forgot to take screenshots of the Wireshark sessions I analysed, but it was clearly visible that when downloading, the Windows host usually received around 10-20 TCP data packets before sending an ACK, whereas when uploading, there was a TCP ACK for every single data packet sent.
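As a toy illustration of that asymmetry (with made-up packet summaries, not the original capture), counting data packets per ACK in a one-line-per-packet export makes the pattern obvious:

```shell
# Hypothetical per-packet summary: "data" = TCP segment with payload,
# "ack" = pure ACK from the receiver. A healthy transfer shows many
# data lines per ack; the broken Windows 10 upload is roughly 1:1.
printf 'data\ndata\ndata\ndata\nack\ndata\nack\n' | awk '
  /^data/ { d++ }
  /^ack/  { print d " data packet(s) acknowledged"; d = 0 }'
```

In the broken upload case, almost every ACK line would report just one data packet.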

The fix is to disable multi-queueing on the Linux host:

ethtool -L PHYSICAL_LOCAL_INTERFACE combined 1
ethtool -L PHYSICAL_NETWORK_INTERFACE combined 1

To check whether the setting was applied, one can use

ethtool -l INTERFACENAME
Channel parameters for INTERFACENAME:
Pre-set maximums:
RX:             0
TX:             0
Other:          1
Combined:       63
Current hardware settings:
RX:             0
TX:             0
Other:          1
Combined:       1

The last Combined line, under "Current hardware settings", should be 1. The ethtool command above only applies until the next reboot; to make the change persistent, use your distribution's tools. On Debian with ifupdown it could look something like this:

cat /etc/network/interfaces
auto INTERFACE
iface INTERFACE inet static
    address IPADDR
    netmask NETMASK
    gateway GATEWAY
    # This is the relevant line
    post-up ethtool -L INTERFACE combined 1
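On hosts managed by systemd, the same can be done declaratively with a .link file instead; a sketch, assuming the interface is named eth0 and a systemd version whose systemd.link(5) supports the CombinedChannels= option:

```ini
; /etc/systemd/network/10-disable-multiqueue.link
; Applied by systemd-udevd when the device appears.
[Match]
OriginalName=eth0

[Link]
CombinedChannels=1
```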

This may create a bottleneck if the gateway doesn't have a strong CPU. We use AMD EPYC 7262 8-core processors and get the full 1 Gbit/s in both directions at ~70% usage of a single core.
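For a scripted sanity check (e.g. from monitoring), awk can pick out just the current Combined value. The snippet below runs against a pasted copy of the sample output; in practice you would pipe in `ethtool -l INTERFACE` instead of printf:

```shell
# awk keeps the last "Combined:" value seen, which is the one under
# "Current hardware settings:" in ethtool's output.
printf '%s\n' \
  'Pre-set maximums:' \
  'Combined:       63' \
  'Current hardware settings:' \
  'Combined:       1' |
  awk '/Combined:/ { v = $2 } END { print v }'
# prints: 1
```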

Melkor333