I was tinkering with socat and tried to use it to create a TUN device for tunneling between two Debian Stretch servers. However, throughput seemed very low: compared with running iperf against a plain TCP/TCP-LISTEN relay on localhost, the TUN setup has almost four orders of magnitude less throughput.
Here is a "minimal working example" to show how throughput is affected.
socat with TUN device
Server side:
# socat
socat TUN:10.10.0.2/16,iff-up TCP4-LISTEN:54321,bind=192.168.1.2,fork
# iperf service
iperf -s -p 15001 -B 10.10.0.2
Client side:
# socat
socat TUN:10.10.0.1/16,iff-up TCP4:192.168.1.2:54321
# iperf
iperf -c 10.10.0.2 -p 15001 -t 30
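(For reference: before benchmarking, the tunnel itself can be sanity-checked on the client roughly as follows; the interface name tun0 is an assumption, since socat simply takes the next free tunN device.)
# check that the TUN interface came up with the expected address
ip addr show tun0
# verify basic connectivity through the tunnel
ping -c 3 10.10.0.2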
socat with TCP/TCP-LISTEN
Server side:
# socat
socat TCP4-LISTEN:12345,bind=192.168.1.2,fork TCP4:127.0.0.1:15001
# iperf service
iperf -s -p 15001 -B 127.0.0.1
Client side:
# socat
socat TCP4-LISTEN:54321,bind=127.0.0.1,fork TCP4:192.168.1.2:12345
# iperf
iperf -c 127.0.0.1 -p 54321 -t 30
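(As a rough sanity check for this relay chain, the listeners can be confirmed with ss from iproute2, assuming it is available.)
# on the server: socat relay and iperf service
ss -tlnp | grep -E ':12345|:15001'
# on the client: local socat listener
ss -tlnp | grep ':54321'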
Results
TUN device:
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-39.7 sec 640 KBytes 132 Kbits/sec
TCP/TCP-LISTEN:
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-30.0 sec 3.30 GBytes 944 Mbits/sec
If you want to reproduce these results with the commands above, you need to run the socat commands and the iperf server in the background or as daemons; I simply used screen sessions.
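(For example, detached screen sessions for the server side could look roughly like this; the session names are arbitrary.)
# server side, one detached screen session per command
screen -dmS socat-tun socat TUN:10.10.0.2/16,iff-up TCP4-LISTEN:54321,bind=192.168.1.2,fork
screen -dmS iperf-srv iperf -s -p 15001 -B 10.10.0.2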
So, while I expected throughput to suffer to some degree, it seems strange to me that it degrades from the assumed gigabit (both servers are on the same switch) to a mere 100 kbit/s. A quick glance at atop shows no significant bottleneck, so it is not simply CPU-capped or running out of RAM.
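(For completeness, per-process CPU during a run could also be watched roughly like this; pidstat is part of the sysstat package and is assumed to be installed.)
# sample CPU usage of all socat processes once per second
pidstat -u -p $(pgrep -d, socat) 1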
Why is throughput that low? Is there a logic error on my part? Is it a problem in the kernel, a poor implementation in socat, or am I using iperf incorrectly?
Are there any parameters or settings (kernel, socat, anything) that would improve this? Anything else I could check? And, most importantly, is there a way to use the TUN device that gives me useful throughput?