
I have a Linux computer which must receive streaming data from several devices (up to 30 or so) for long periods of time.

This computer is connected to a Fast Ethernet (100 Mbps) local area network to which other devices are also connected, so it is not dedicated to this task.

As the streaming data is received via RTP (UDP), I have noticed that some of the packets are lost: maybe due to the switches/routers, maybe due to the Linux OS itself.

In my tests I would like to tune the Linux UDP buffers so that they can handle the high incoming data rate, which can sometimes exceed 50 Mbps.

Is this possible? Which parameters are the most critical in this case? I have set several parameters in /etc/sysctl.conf, but they may be wrong... so any help is highly appreciated.

I attach my current version of this file below. The computer has 2 GB of RAM.
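For reference, my understanding is that raising net.core.rmem_max only lifts the ceiling; the receiving application still has to request the larger buffer via SO_RCVBUF. A minimal sketch of what I mean (the port and buffer size are illustrative, not my actual application code):

import socket

RTP_PORT = 5004                      # illustrative port, not the real one
REQUESTED_RCVBUF = 8 * 1024 * 1024   # 8 MB, capped by net.core.rmem_max

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Ask for a large receive buffer; the kernel silently caps the request
# at net.core.rmem_max (and doubles it internally for bookkeeping).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED_RCVBUF)

# Read back what was actually granted (Linux reports the doubled value).
print("receive buffer granted: %d bytes"
      % sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

sock.bind(("0.0.0.0", RTP_PORT))

while True:
    data, addr = sock.recvfrom(2048)  # RTP packets here are well under 2 KB
    # ... hand the datagram to the RTP/stream handler ...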

Thanks in advance,

#The maximum socket receive buffer size which may be set by using the SO_RCVBUF socket option: 8 MB
net.core.rmem_max = 8388608
#The maximum socket send buffer size which may be set by using the SO_SNDBUF socket option: 512 KB
net.core.wmem_max = 524288

#The default setting in bytes of the socket receive buffer: 4 MB 
net.core.rmem_default = 4194304
#The default setting in bytes of the socket send buffer: 256 KB
net.core.wmem_default = 262144

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 40960

# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144

# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384

# Disable TCP slow start on idle connections
net.ipv4.tcp_slow_start_after_idle = 0

# If your servers talk UDP, also up these limits
#This is a vector of three integers governing the number of pages allowed for queueing by all UDP sockets. man 7 udp
net.ipv4.udp_mem = 131072   196608  262144
#Minimal size, in bytes, of receive buffers used by UDP sockets in moderation.
net.ipv4.udp_rmem_min = 1073741824

# Increase number of incoming connections backlog
#net.core.netdev_max_backlog=2048
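For completeness, one way I can check whether the kernel itself is dropping datagrams (as opposed to the network) is to watch the UDP counters in /proc/net/snmp after loading this file with sysctl -p: InErrors / RcvbufErrors increasing during a test would point at full socket buffers, while counters staying at zero would mean the packets are lost before they reach this host. A small sketch for reading them (illustration only, not part of the streaming application):

#!/usr/bin/env python
# Print the system-wide UDP counters from /proc/net/snmp.
# InErrors / RcvbufErrors growing while streaming means the kernel is
# dropping datagrams locally (full socket buffers).

def read_udp_counters(path="/proc/net/snmp"):
    with open(path) as f:
        udp_lines = [line.split() for line in f if line.startswith("Udp:")]
    # The first "Udp:" line holds the field names, the second one the values.
    names, values = udp_lines[0][1:], udp_lines[1][1:]
    return dict(zip(names, (int(v) for v in values)))

if __name__ == "__main__":
    counters = read_udp_counters()
    for name in ("InDatagrams", "NoPorts", "InErrors", "RcvbufErrors"):
        print("%-13s %d" % (name, counters.get(name, 0)))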
  • You are aware that you work under the wrong assumption that 100 Mbit is a lot of data for a computer to handle? I would be surprised if any optimization would be useful for such a low data throughput. This is 2015 – most networks today are a lot faster and no tuning is needed. – TomTom Apr 06 '15 at 06:55
  • 2
    Spend $10 on a gigabit NIC. – Michael Hampton Apr 06 '15 at 06:58
  • Hello, I forgot to mention that the device is connected to the network through an M12-D connector, which imposes the 100 Mbps limit. The NIC could switch to Gigabit Ethernet and the application could work correctly, but in this case it is impossible. – aloplop85 Apr 06 '15 at 07:13
  • The buffers are very unlikely to be a significant part of the issue. It's much more likely to be other parts of the network and the software. – David Schwartz Apr 06 '15 at 09:33
  • Is it maybe UDP itself which introduces inherent losses above 50 Mbps? Above this rate the losses increase... – aloplop85 Apr 06 '15 at 12:50
  • Increasing the buffers may help a little. I have made several tests, including some port mirroring. Above 37 Mbps some packets can be lost... I think it is due to some kind of collisions which I have not been able to detect, either in the switch or in the final device, with tcpdump and gulp. Any other ideas? – aloplop85 Apr 14 '15 at 06:25

0 Answers