Is setsockopt() with SO_SNDBUF and SO_RCVBUF required for an application to use the maximum TCP buffer limits?
I have a 1 Gbps network with a 100 ms delay between my hosts, and I am trying to push data at full speed between them using my custom client/server C programs.
When I used the default host settings, with a small value of 124928 for wmem_max and rmem_max and the same value in the third column of tcp_rmem and tcp_wmem, I got poor throughput of ~80 Mbps.
After raising the max values from 124928 to 125829120 as shown below, I got much better throughput: the full 1 Gbps.
My current settings on the source and target hosts are:
[root@~]# cat /proc/sys/net/core/wmem_default
124928
[root@~]# cat /proc/sys/net/core/rmem_default
124928
[root@~]# cat /proc/sys/net/core/wmem_max
125829120
[root@~]# cat /proc/sys/net/core/rmem_max
125829120
[root@~]# cat /proc/sys/net/ipv4/tcp_rmem
10240 87380 125829120
[root@~]# cat /proc/sys/net/ipv4/tcp_wmem
10240 87380 125829120
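For reference, the same values can be written with sysctl -w instead of editing /proc directly; the numbers below simply mirror the listing above (run as root):

```shell
# Equivalent sysctl commands for the /proc values shown above.
sysctl -w net.core.wmem_max=125829120
sysctl -w net.core.rmem_max=125829120
sysctl -w net.ipv4.tcp_wmem="10240 87380 125829120"
sysctl -w net.ipv4.tcp_rmem="10240 87380 125829120"
```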
I did not use setsockopt() in my programs to take advantage of the maximum limits in the sysctl settings. My understanding of setsockopt() and the sysctl memory settings was that, by default, every program gets the buffer sizes defined in wmem_default/rmem_default, and can only grow up to the *_max values via a setsockopt() call. But I am confused that I could hit the maximum limits even without calling setsockopt(). Does TCP automatically tune buffers up to the max settings dynamically, even for sockets that were never configured with setsockopt() to use large buffers?
From the tcpdump output I confirmed that the amount of unacknowledged data (seq - ack) at my source was consistently hovering around 50 MB, which confirms that the TCP window grew well beyond the default values in my sysctl settings.