We are facing an issue where the TCP listen backlog exceeds the default value (100) on our MQ server (v7.5) running on Linux (Red Hat) during bursts of connection requests. ListenerBacklog is configured as 100 in qm.ini, which is the default listener backlog (maximum outstanding connection requests) for Linux. Whenever a connection burst exceeds the TCP backlog, the queue manager stops functioning and resumes only after the queue manager/server is restarted.
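For reference, this is roughly how the backlog is currently set in our qm.ini (TCP stanza, value matching the Linux default):

TCP:
   ListenerBacklog=100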
So we are looking at whether there are Linux kernel attributes related to socket tuning that can improve the TCP backlog at the network layer without harming the queue manager. Would increasing the values below in /etc/sysctl.conf help resolve this issue or improve the performance of the queue manager?
net.ipv4.tcp_max_syn_backlog = 4096
net.core.somaxconn = 1024
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
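If it matters, this is roughly how we would plan to apply and verify these settings (assuming the standard sysctl, ss and netstat tools are available on the server; this is only a sketch, we have not run it yet):

# reload settings from /etc/sysctl.conf
sysctl -p

# confirm the new values are in effect
sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog

# for LISTEN sockets the Send-Q column shows the effective backlog of each listener
ss -lnt

# look for listen-queue overflows / dropped SYNs
netstat -s | grep -i listen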