
I'm trying to send data over UDP point-to-point (looping back to a second NIC), but I am losing data. Here is the code (from: sending/receiving file UDP in python):

----- sender.py ------

#!/usr/bin/env python

from socket import *
import sys

s = socket(AF_INET, SOCK_DGRAM)
host = sys.argv[1]
port = 9999
buf = 1024
addr = (host, port)

file_name = sys.argv[2]

# first datagram carries the file name
s.sendto(file_name, addr)

f = open(file_name, "rb")
data = f.read(buf)
while data:
    # sendto returns the number of bytes sent; on success read the next chunk
    if s.sendto(data, addr):
        data = f.read(buf)
f.close()
s.close()

----- receiver.py -----

#!/usr/bin/env python

from socket import *
import sys

host = "192.0.0.2"  # second NIC
port = 9999
buf = 1024

s = socket(AF_INET, SOCK_DGRAM)
s.bind((host, port))

# first datagram carries the file name
data, addr = s.recvfrom(buf)
file_name = data.strip()
print "Received File:", file_name
f = open(file_name, "wb")

# treat 2 seconds of silence as the end of the transfer
s.settimeout(2)
try:
    while True:
        data, addr = s.recvfrom(buf)
        f.write(data)
except timeout:
    f.close()
    s.close()
    print "File Downloaded"

I create my file with dd:

dd if=/dev/urandom of=test_file bs=1024 count=100

I found that when count is over approximately 100, transfers start to fail. I tried different bs values all the way down to 32 and it still fails; when I change bs I change buf in the code to match. I repeat the test by creating a new file and running the command in a loop. With different combinations, sometimes it fails on every transfer, and sometimes it fails in a pattern (e.g. every 5th transfer).

I found that if I add a 0.0005-second delay in the sender's while loop, it works fine and I can send any amount of data. If I bring the delay down to 0.0001 seconds, it fails again. I'm getting approximately 1.5 MB/sec.
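The 0.0005-second delay amounts to pacing the sender so datagrams don't arrive faster than the receiver can drain its socket buffer. A minimal sketch of that workaround (Python 3 syntax; `send_file_paced` is a hypothetical name, not from the code above):

```python
import socket
import time

def send_file_paced(path, addr, buf=1024, delay=0.0005):
    """Send a file over UDP, sleeping `delay` seconds between datagrams
    so the receiver's socket buffer is not overrun. UDP still gives no
    delivery guarantee; this only reduces the drop rate."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(path, "rb") as f:
        chunk = f.read(buf)
        while chunk:
            s.sendto(chunk, addr)
            time.sleep(delay)  # pace the sender
            chunk = f.read(buf)
    s.close()
```

At 1024-byte datagrams, a 0.0005-second gap caps the rate at roughly 2 MB/s, which is consistent with the ~1.5 MB/s observed.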

I'd appreciate any recommendations to improve my performance. I suspect there is a receive buffer that is overflowing because I'm not reading it fast enough.
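One way to test the receive-buffer theory directly is to request a larger per-socket buffer before binding. A sketch (Python 3 syntax); note the kernel silently clamps the request to `net.core.rmem_max`, and on Linux `getsockopt` reports roughly double the effective value because of bookkeeping overhead:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request a 25 MB receive buffer; the kernel clamps the request to
# net.core.rmem_max, so read back what was actually granted.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 25 * 1024 * 1024)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("receive buffer granted:", granted, "bytes")
```

If `granted` is far below the request, the per-socket buffer is being capped by the OS limit rather than honoured.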

eng3
  • UDP has no error checking or automatic retries. If you try to send too quickly, you can run out of socket buffers and packets will be discarded. – Barmar Jul 11 '19 at 20:24
  • It could be happening at the sending or receiving end. – Barmar Jul 11 '19 at 20:25
  • The `bs` used by `dd` is just the number of bytes to write in each write to the output file and has no effect on the final file; `count` is just the number of those writes to perform. Are you constrained to using `UDP` and not `TCP`? Communication using `UDP` is not guaranteed to be without errors. – John Anderson Jul 11 '19 at 20:25
  • I think you are right. Your operating system drops frames because your `udp` buffer is full. Use `tcp` if you want to avoid this from happening. If you want to improve performance, I would try to buffer the read data into memory and write it to disk in bigger chunks (100 MiB or so). Increasing the `udp` buffer size at the same time should also help. – Ente Jul 11 '19 at 20:31
  • I must use UDP because this will eventually be a one-way link. Is the UDP buffer the OS UDP buffer, or are there other buffers? I see, I could buffer the data in memory and write it out (perhaps in another thread). Hopefully Python itself is not too slow. – eng3 Jul 11 '19 at 21:59
  • I tried writing everything to a queue first, but it was actually slower than writing to a file (I am testing in /tmp, which is a ramdisk). Is there a different way I should be buffering that would be faster? Perhaps I should use C code instead. Anyway, increasing the OS receive buffer to 25 MB got it to work sending 100 MB files. I had to increase the buffer to nearly 1 GB for 1 GB files. – eng3 Jul 12 '19 at 15:21
  • Yes, I meant the operating system's UDP buffer size. `net.core.rmem_max` and `net.core.rmem_default` are the parameters you want to tweak under Linux. In addition, I would increase the `buf` variable on the receiver side. – Ente Jul 13 '19 at 22:01
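The kernel parameters named in the comment above are adjusted with `sysctl`; a sketch under Linux (the 25 MB figure is the one reported in this thread, not a general recommendation):

```shell
# Inspect the current receive-buffer ceiling and default
sysctl net.core.rmem_max net.core.rmem_default

# Raise the ceiling so SO_RCVBUF requests up to 25 MB are honoured;
# add the setting to /etc/sysctl.conf to make it persist across reboots
sudo sysctl -w net.core.rmem_max=26214400
```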

0 Answers