
I am trying to reduce packet manipulation to a minimum in order to improve the efficiency of a specific program I am working on, but I am struggling with the time it takes to send a message through a UDP socket using sendto/recvfrom. I am using two very basic processes (applications): one sends, the other receives.
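
For reference, here is a minimal sketch of the kind of sender/receiver pair I mean (the port number, payload size and pacing below are just placeholders, not my actual values):

    /* Minimal sketch of the sender/receiver pair described above.
     * The port number and message size are placeholders, not real values
     * from my setup. Each function runs in its own process. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define PORT     5000   /* hypothetical port */
    #define MSG_SIZE 64     /* hypothetical payload size */

    /* Receiver process: bind once, then block in recvfrom(). */
    void receiver(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(PORT);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        char buf[MSG_SIZE];
        for (;;) {
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
            if (n < 0)
                break;
            /* ... process the datagram ... */
        }
        close(fd);
    }

    /* Sender process: send datagrams at whatever rate is under test. */
    void sender(const char *dest_ip)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(PORT);
        inet_pton(AF_INET, dest_ip, &addr.sin_addr);

        char buf[MSG_SIZE] = { 0 };
        for (;;) {
            sendto(fd, buf, sizeof(buf), 0,
                   (struct sockaddr *)&addr, sizeof(addr));
            usleep(1000);   /* pacing chosen to hit the desired bit rate */
        }
        close(fd);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            sender(argv[1]);   /* ./udp <receiver-ip> to send */
        else
            receiver();        /* no arguments to receive */
        return 0;
    }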

I would like to understand how Linux works internally when these function calls are used...

Here are my observations when sending packets at different rates:

  • 10 Kbps: the message takes about 28 us to go from one application to the other
  • 400 Kbps: about 25 us
  • 4 Mbps: about 20 us
  • 40 Mbps: about 18 us

When using different CPUs the absolute times differ, but the trend is consistent with these observations. There must be some setting that lets the socket queue be read faster depending on the traffic flow on the socket... how can that be controlled?

When a node acts purely as a forwarding node, going in and out takes about 8 us with a 400 Kbps flow, and I want to get as close to that value as I can. 25 us is not acceptable and is deemed too slow (it is obviously far less than the delay between packets anyway... but the point is to eventually be able to process a much larger number of packets, so this time needs to be shortened!). Is there anything faster than sendto/recvfrom? I must use two different applications (processes); I know I cannot use a single monolithic block, so the information has to be passed over a socket.
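
For illustration, here is a rough sketch of how the per-packet one-way time can be instrumented, assuming both processes run on the same host so that CLOCK_MONOTONIC timestamps from the two processes are directly comparable (the payload layout is just an example, not exactly how the numbers above were obtained):

    /* Sketch of one way to measure the per-datagram one-way time,
     * assuming sender and receiver share a clock (same host), so
     * CLOCK_MONOTONIC readings from both processes are comparable. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>

    /* Sender side: put a CLOCK_MONOTONIC timestamp in the payload. */
    void send_stamped(int fd, const struct sockaddr_in *dst)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;

        char buf[64] = { 0 };           /* payload size is a placeholder */
        memcpy(buf, &ns, sizeof(ns));   /* first 8 bytes = send time */
        sendto(fd, buf, sizeof(buf), 0,
               (const struct sockaddr *)dst, sizeof(*dst));
    }

    /* Receiver side: subtract the embedded timestamp from "now". */
    void recv_stamped(int fd)
    {
        char buf[64];
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n < (ssize_t)sizeof(uint64_t))
            return;

        uint64_t sent_ns;
        memcpy(&sent_ns, buf, sizeof(sent_ns));

        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        uint64_t now_ns = (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;

        printf("one-way time: %.1f us\n", (now_ns - sent_ns) / 1000.0);
    }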

  • Are you sending the maximum native packet size? What do you mean by your different "bps" speeds? Does that represent packet density on the network? 18-30 microseconds actually sounds pretty good (not sure how you measured it like that). It takes time for the operating system to deliver the data to the network chip. – BitBank Feb 06 '12 at 18:17
  • Actually, I am sending basic messages using iperf with different flow rates (bps = bits per second). I am only interested in the packet making it to the other application, but with a higher flow rate I could see that the time it took to go from one application to the other was lower... so there must be some kernel control variable that allows reads or writes to happen faster. I have no idea how to explain this variance, which, by the way, cannot be explained by distribution variability, since the values I showed are means over a very large sample of packets. – pam_sim Feb 06 '12 at 21:49
  • 18-30 us might sound good... but I am trying to trim the processing time as much as possible. Something must be missing; otherwise, we should see the same time regardless of the flow rate, since the delay between packets is far greater than those 18-30 us. – pam_sim Feb 06 '12 at 21:49

0 Answers