My app uses libpcap to capture UDP packets from various sources. Occasionally the app has to do some heavy computational work (ballpark ~2 seconds), and during that time it is not reading from libpcap. This is leading to packet drops, but I'm struggling to understand why: the kernel is not dropping the packets (see the netstat output below) and I think the socket buffers should be large enough.
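The read loop is essentially this shape (a simplified sketch, not my exact code — heavy_work() stands in for the real computation):

```c
#include <pcap/pcap.h>

extern void heavy_work(void);   /* placeholder for the ~2 s computation */

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                      const u_char *bytes)
{
    /* parse/queue the UDP payload here */
    (void)user; (void)hdr; (void)bytes;
}

void capture_loop(pcap_t *handle)
{
    for (;;) {
        /* drain whatever libpcap has buffered so far */
        pcap_dispatch(handle, -1, on_packet, NULL);

        /* nothing reads from the handle while this runs */
        heavy_work();
    }
}
```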
The interface is receiving ~1000 packets per second and the max packet size is ~600 bytes, so call it 1 MB/s (an overestimate; it's really ~0.6 MB/s). Even a full 2-second stall should therefore only need ~1.2 MB of buffering.
I've set the libpcap buffer size to 4 MiB (passed to libpcap in bytes) and the snapshot length to ~600 bytes.
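In case it helps, the handle setup boils down to something like this (simplified sketch; error handling trimmed, and the read timeout value is incidental):

```c
#include <pcap/pcap.h>
#include <stdio.h>

pcap_t *open_capture(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_create("eno2", errbuf);
    if (!handle)
        return NULL;

    pcap_set_snaplen(handle, 600);                  /* max packet is ~600 bytes */
    pcap_set_buffer_size(handle, 4 * 1024 * 1024);  /* 4 MiB, passed in bytes   */
    pcap_set_timeout(handle, 100);                  /* read timeout, ms         */

    if (pcap_activate(handle) != 0) {
        fprintf(stderr, "activate: %s\n", pcap_geterr(handle));
        pcap_close(handle);
        return NULL;
    }
    return handle;
}
```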
I've also modified some OS params (CentOS 7):
/proc/sys/net/core/netdev_max_backlog: 10000
/proc/sys/net/core/optmem_max: 8388608
/proc/sys/net/core/rmem_default: 8388608
/proc/sys/net/core/rmem_max: 8388608
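(If it's relevant: I believe the receive buffer the kernel actually granted to the capture socket can be checked with getsockopt on the pcap file descriptor, sketched below — though I'm not sure SO_RCVBUF is even the limiting buffer when libpcap uses a memory-mapped capture, where pcap_set_buffer_size controls the ring.)

```c
#include <pcap/pcap.h>
#include <sys/socket.h>
#include <stdio.h>

/* Print the effective receive-buffer size for an activated pcap handle. */
void print_rcvbuf(pcap_t *handle)
{
    int fd = pcap_fileno(handle);   /* underlying packet socket */
    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);

    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
        printf("SO_RCVBUF: %d bytes\n", rcvbuf);
}
```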
Result from netstat -i:
```
Iface  MTU   RX-OK      RX-ERR  RX-DRP  RX-OVR
eno2   1500  558672786  0       0       0
```
I've tried making all of these values even larger, but the number of dropped packets doesn't appear to decrease.
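In case the measurement matters: libpcap's own counters can be read with pcap_stats(); ps_drop counts packets dropped because there was no room in the buffer (i.e. they weren't being read fast enough), which is the failure mode I suspect here:

```c
#include <pcap/pcap.h>
#include <stdio.h>

void report_drops(pcap_t *handle)
{
    struct pcap_stat st;

    if (pcap_stats(handle, &st) == 0) {
        printf("received:               %u\n", st.ps_recv);
        printf("dropped (buffer full):  %u\n", st.ps_drop);
        printf("dropped (interface):    %u\n", st.ps_ifdrop);
    }
}
```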
Something which might be worth mentioning... I also have tcpdump running on the same box, capturing the same UDP packets (they're multicast). That does raise some questions for me about when packets get removed from which buffers (or at which layer of the stack), since each capture presumably has its own buffer.
I'm lost here. Any help would be greatly appreciated.
Thanks.