I have a TCP server built with Python's socket module. The application I'm building is time-sensitive and the integrity of the data matters, which is why we use TCP. The available bandwidth is very low.
There is a client that requests data from the server every 50 ms. In response, the client receives either an OK message (when the server doesn't have the data) or the actual requested data.
Whenever the client makes a request, it sends a 5-byte frame (not counting the 40 bytes of IP and TCP headers). The server responds with either a 5-byte frame (in most cases) or a frame of more than 70 bytes (roughly once per second).
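For context, the client side of the exchange looks roughly like this (a simplified sketch; the frame contents and handle_data are placeholders, not my actual code):

import socket
import time

REQUEST = b"\x01REQ\x00"  # placeholder for the 5-byte request frame

def poll_loop(sock: socket.socket) -> None:
    # Send a 5-byte request every 50 ms and read whatever the server replies.
    while True:
        sock.sendall(REQUEST)
        reply = sock.recv(4096)  # either the 5-byte OK or a >70-byte data frame
        if len(reply) > 5:
            handle_data(reply)   # placeholder for the real processing
        time.sleep(0.05)         # 50 ms polling interval

def handle_data(frame: bytes) -> None:
    print(f"received {len(frame)} bytes")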
On both sides the sockets are set like this:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # server only; this line is omitted on the client
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8192)  # shrink the send buffer to 8 KiB
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle's algorithm
sock.settimeout(0.5)  # 500 ms timeout on blocking socket calls
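Because TCP is a byte stream and recv() may return fewer bytes than requested, the frames are read in a loop until the full length has arrived; a minimal sketch of that pattern (recv_exact is just an illustrative name, not my exact code):

import socket

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # Read exactly n bytes; recv() may legally return any smaller chunk.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf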
Everything runs fine on the local network (no lag at all), but whenever I connect to the server over its public IP (I'm port-forwarding), it lags badly. The lag can reach 15 seconds, at which point the request times out, which is far too long. Most of the time the RTT stays at 200-210 ms. In Wireshark I can see lots of (spurious) retransmissions and duplicate ACKs.
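For what it's worth, this is roughly how I measure the per-request round-trip time (a simplified sketch; timed_request is an illustrative name):

import time

def timed_request(sock, request: bytes) -> float:
    # Returns the request->response latency in milliseconds.
    # Raises socket.timeout if the reply takes longer than the 0.5 s timeout set above.
    start = time.monotonic()
    sock.sendall(request)
    sock.recv(4096)
    return (time.monotonic() - start) * 1000.0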
What can I do? I've already disabled Nagle's algorithm (TCP_NODELAY), but with no success so far.