
I have a TCP client on CentOS 7 and a TCP listener on Windows 2012 R2. Through Wireshark, Sysinternals Procmon and the ss -bitmonz command, I observed that the TCP client's wscale is 7 (scale factor 128) while the TCP listener's wscale is 8 (scale factor 256):

skmem:(r0,rb367360,t0,tb46080,f110,w49042,o0,bl0) ts sack cubic wscale:8,7 rto:251 rtt:50.27/20.789 ato:49 mss:1388 cwnd:10 ssthresh:8 send 2.2Mbps lastsnd:43 lastrcv:43 lastack:43 pacing_rate 4.4Mbps unacked:10 rcv_space:29200
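For reference, my understanding of that ss output is that wscale:8,7 reports both directions of the same socket: the first value is the scale the peer (the Windows listener) advertised and the second is the scale this CentOS host advertised, each one a power of two (2^8 = 256, 2^7 = 128). A quick way to double-check on the client side (192.0.2.10 is only a placeholder for the Windows server's address):

    # Per-connection TCP details (wscale, rtt, cwnd, ...) for sockets talking to the server
    ss -tmoin dst 192.0.2.10

    # Confirm window scaling is enabled at all (1 = on, the CentOS 7 default)
    sysctl net.ipv4.tcp_window_scaling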

[Wireshark capture of the client/server TCP conversation showing the advertised window sizes]

Based on the above, the TCP communication does not seem tuned/aligned between the client and the server (listener). Please point out what tweaks need to be performed to make the client and server agree on the window scale. By the way, if I use WinSCP to transfer data, the wscale is 7,7 (no mismatch). Currently TCP on both OSs is pure default, with no tweaks made, and I prefer to tweak CentOS 7 and keep Windows 2012 R2 at its defaults, since the server accepts connections from 80 clients and is in production.
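For completeness, the CentOS 7 knobs that (as far as I understand) influence the advertised window scale are the socket buffer limits, since the kernel picks the wscale at connection setup from the maximum receive buffer size. This is a minimal sketch of what I am considering trying on the client only; the file name and the 16 MB values are just examples, not recommendations:

    # /etc/sysctl.d/90-tcp-tuning.conf   (hypothetical file name, example values)
    net.ipv4.tcp_window_scaling = 1           # window scaling on (already the default)
    net.core.rmem_max = 16777216              # max socket receive buffer, bytes
    net.core.wmem_max = 16777216              # max socket send buffer, bytes
    net.ipv4.tcp_rmem = 4096 87380 16777216   # min / default / max TCP receive buffer
    net.ipv4.tcp_wmem = 4096 65536 16777216   # min / default / max TCP send buffer

Applied with sysctl -p /etc/sysctl.d/90-tcp-tuning.conf (or sysctl --system). Raising the maximum receive buffer should generally make the kernel advertise a larger wscale; lowering it should do the opposite.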

Please share references covering TCP tuning where the client is CentOS 7 and the server is Windows 2012 R2.

Jawad Al Shaikh
    There is nothing that requires the TCP Window Scale value to be the same on both sides. Each side of a TCP connection can use different values. TCP Window Scaling factors are typically controlled by the size of the receive buffers on the host. – Mark Riddell Mar 28 '17 at 10:40
  • @MarkoPolo did you look at the win=... between the client and the server in the Wireshark image? The client keeps using its own window size, and the server's win is bigger but the client keeps ignoring it? Or do you want to tell me there are no issues in the above TCP Wireshark capture image? – Jawad Al Shaikh Mar 28 '17 at 12:27
  • I'm not entirely sure what you are asking here. The Receive Window (RWIN) is used to tell the device at the other end how much data it can send at once before it has to stop and wait for a response (ACK). This prevents the device being overloaded with packets which it cannot buffer. Both the client and server are free to use different values here. – Mark Riddell Mar 28 '17 at 14:29
  • In your example, your two TCP hosts appear to be <1ms away so large window sizes are not going to be required due to the [Bandwidth Delay Product](https://en.wikipedia.org/wiki/Bandwidth-delay_product). So to answer your question, no, I cannot see any issue with the RWIN being used on either client or server. I can see signs of potential packet loss, but that's a separate issue. – Mark Riddell Mar 28 '17 at 14:29
  • Did you ever find the solution to this issue? I'm hitting the same bug and jumped to the same conclusion as you regarding window scaling. – Dave Snigier Jun 13 '17 at 12:05
  • @DaveSnigier the issue is produced by ZeroMQ, and the network is 3G/4G. Overall, 3G/4G traffic is buggy compared to LAN traffic (e.g., it's normal to have dup ACKs, a failure rate of 1%, etc.). ZeroMQ caused this bug because the different versions have incompatible code between Windows and Linux. Even though I still receive data, I will soon update the ZeroMQ versions used, or maybe I will use another library. Mainly, this issue is creating a traffic jam which leads to many messages being sent multiple times... hope this helps. – Jawad Al Shaikh Jun 13 '17 at 12:33
  • Thanks! I solved my issue as well. Turns out it was a bug in the firewall I was using which processed the packets in parallel without respect for the ordering. Good info on ZeroMQ. I've been looking into using that on another project, but clearly need to understand a bit more about its internals so I don't run into issues like that. – Dave Snigier Jun 17 '17 at 16:42
  • @DaveSnigier great to hear that. Yes, we should always verify the software aspect before concluding it's a H/W or OS issue! A better alternative to ZeroMQ is nanomsg (but I haven't used it yet). Otherwise, make sure you use stable releases of ZeroMQ on Windows & *NIX. – Jawad Al Shaikh Jun 18 '17 at 16:32

0 Answers