
I'm running a load test targeting 30,000 TPS (transactions per second) with Gatling, and I'm facing the following errors:

> i.n.c.u.Errors$NativeIoException: newSocketStream(..) failed: Too many open files    99128 (99.43%)
> j.n.ConnectException: connect(..) failed: Cannot assign requested address              570 ( 0.57%)

It seems I'm running out of TCP ports on my load-test VM.

I tried tuning the kernel configuration in /etc/sysctl.conf:

net.ipv4.tcp_max_syn_backlog = 40000
net.core.somaxconn = 40000
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_mem  = 134217728 134217728 134217728
net.ipv4.tcp_rmem = 4096 277750 134217728
net.ipv4.tcp_wmem = 4096 277750 134217728
net.core.netdev_max_backlog = 300000
net.ipv4.ip_local_port_range = 1025 65535

I also configured ulimit -n 65k, but no luck; I'm still stuck on TCP connection issues.
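
(For reference, these settings are typically applied and verified along the following lines - a sketch, where "loaduser" is just a placeholder for the account that runs Gatling.)

# reload /etc/sysctl.conf and confirm a couple of the values took effect
sudo sysctl -p
sysctl net.ipv4.ip_local_port_range net.ipv4.tcp_fin_timeout

# persistent per-user open-file limit, set in /etc/security/limits.conf
# (takes effect on the next login session):
#   loaduser  soft  nofile  65535
#   loaduser  hard  nofile  65535

# per-shell override just before launching Gatling, then verify
ulimit -n 65535
ulimit -n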

Could anyone please advise how I can reuse TCP ports more quickly?

Reference :

ran out of tcp udp ports [closed] - https://serverfault.com › questions › ran-out-of-tcp-udp...

Debugger
  • Have you confirmed that you haven't actually hit the 65k limit? – hardillb Jun 30 '22 at 20:13
  • As hardillb hinted at, the error messages seem to suggest you may be running out of open files rather than TCP ports. Does the application you're testing open multiple files per connection? If it is in fact TCP port exhaustion, have you considered having the application listen on multiple IP addresses? (The 64K limit on ports is per IP address.) – Brandon Xavier Jul 01 '22 at 03:26
  • @Brandon... Thanks for the quick turnaround. Yes, my application may open multiple files. I tried increasing ulimit -n further, but I couldn't increase it beyond 999999, not sure why; I'm getting a permission-denied error when configuring more than 1 million. – Debugger Jul 01 '22 at 04:24
  • Also, is there any way to quickly check how many open files or TCP ports we are running out of, via a command or kernel logs? – Debugger Jul 01 '22 at 04:25
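
(Regarding the last comment, a few commonly used checks - a sketch, where <pid> stands for the process ID of the Gatling JVM.)

# system-wide file-descriptor usage: allocated, free, maximum
cat /proc/sys/fs/file-nr

# descriptors held by one process versus that process's own limit
ls /proc/<pid>/fd | wc -l
grep "open files" /proc/<pid>/limits

# overall socket summary, including TIME-WAIT counts
ss -s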

1 Answer


"Too many open files" is an obvious story, so let's leave it outside the scope of this answer - I don't feel myself that talented to be able to add something new to a case that was explained a couple of thousand times.

But "Cannot assign requested address" is a whole new case. It usually indicates that your client has actually hit the port limit on unique IP-port quadruplet (ipsrc-srcport-ipdst-dstport, and it sould be unique across the network stack given, merely to be able to distinguish one connection from the other). Since you're load testing some service from your only client IP, last two members (ipdst-dstport) are pinned and so is the first, meaning only the second member is variative. Theoretically it can vary from 0 to 65535, but in reality client ports are chosen from the net.ipv4.ip_local_port_range sysctl oid, which out of the box has value of "32768 60999", so less than half ports are available.

Possible workarounds:

  1. Set net.ipv4.ip_local_port_range to "1024 65535". This will increase the available client port range (roughly doubling it in your case).
  2. Use multiple IPs, and thus multiple FIBs, to start new connections to the one given server IP. This will give you another ~64K connections for each new IP used (a sketch of both workarounds follows below).
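
(A minimal sketch of both workarounds on the load-generator host, where eth0 and 192.168.1.50/24 are placeholders for your interface and a spare address.)

# 1. widen the ephemeral port range right away (persist it in /etc/sysctl.conf as above)
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"

# 2. add a secondary source IP; each additional address buys another ~64K client ports
sudo ip addr add 192.168.1.50/24 dev eth0

The load generator then has to be told to spread connections across the extra source addresses; if I remember correctly, Gatling's HTTP protocol configuration offers a localAddress option for that, but check the documentation of your version.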

As for reusing TCP ports/connections - long story short: don't. Closed/finished connections should stay in the TIME-WAIT state for some time, otherwise bad things can happen (such as, but not limited to, reordered or delayed TCP packets from an "older" TCP connection on that client port disrupting or even closing the new connection). Linux even once had a sysctl oid (net.ipv4.tcp_tw_recycle) that allowed aggressive TCP connection reuse, but the negative effects of using it were so severe that it was removed from the kernel once and for all.

drookie