
I have the requirement of receiving a stream of UDP packets at a fixed rate of 1000 packets per second. They are composed of 28 bytes of payload, where the first four bytes (UInt32) are the packet sequence number. The sender is an embedded device on the same local network, and addresses and ports are mutually known. I am not expected to receive every packet, but the host should be designed so that it doesn't add losses beyond those intrinsic to the UDP protocol itself.

I have only limited previous experience with UDP and Sockets in general (just casual streaming from sensor app on android phone to PC, for a small demo project).

I am not sure of the most sensible way to receive all the packets at a fast rate. I imagine some sort of loop, or else some sort of self-retriggering receive operation. I have studied the difference between synchronous and asynchronous receives, and also the Task-based Asynchronous Pattern (TAP), but I don't feel very confident yet. It looks like a simple task, but I still find it hard to understand.

So the questions are:

  1. Is the UdpClient class suitable for a scenario like this, or am I better off going with the Socket class? (or another one, by the way)

  2. Should I use synchronous or asynchronous receiving? If I use synchronous, I am afraid of losing packets, and if I use asynchronous, how should I clock/throttle the receive operations so that I don't start them at too high a rate?

  3. I thought about a tight loop testing for UdpClient.Available like below. Would it be a good design choice?

while (running)
{
    if (udpSocket.Available > 0)
    {
        // do something (what?)
    }
}
heltonbiker
    Your issue is not with UdpClient or Socket but with the UDP protocol itself. UdpClient uses a Socket internally anyway. A UDP packet is never guaranteed to reach its destination. Synchronous versus asynchronous fetching of packets also has nothing to do with losing packets. With an unreliable protocol such as UDP, you should try to receive as fast as you can, because your receiving the packets has no effect on the remote side (the sender). UdpClient.Available is OK, but it blocks the thread and therefore should only be used in a background thread. – Oguz Ozgul Nov 04 '15 at 13:23
  • By the way, if you still want to limit the number of incoming packets (which you can't; you can only limit the number of packets you read, in that sense), you can have a stopwatch to keep the time and an integer value to keep track of bytes received. When it reaches 28000 (28 x 1000), wait for the rest of the second (1000 - stopwatch.Elapsed.TotalMilliseconds) using Thread.Sleep, and then restart the stopwatch. – Oguz Ozgul Nov 04 '15 at 13:29
  • @OguzOzgul is correct. UDP is a best-effort, fire-and-forget protocol. A host has no expectation of even receiving a UDP segment. To add any sort of synchronization to UDP, your upper-layer protocols must handle that. You can certainly send UDP this way, but there is no synchronization, or even an expectation that the UDP segment will be received by the other end, and the receiver has no expectation that a UDP segment is coming. You can use UDP as the transport layer protocol, but you will need to code anything beyond that. – Ron Maupin Nov 04 '15 at 15:22
  • @RonMaupin I can accept that the host has no expectation that a _specific_ packet is coming, but I think it would not be fair to say there is no expectation of receiving packets when there is a blocking `Receive()` method in place! I know well that I will never receive 100% of packets (or, if that happens, it's just out of luck), but the motivation of my question is to design my host so as to keep lost/unreceived packets to a minimum, and not because of design deficiencies on the host side. – heltonbiker Nov 04 '15 at 15:31
  • If you use TCP, the host has every expectation of receiving TCP segments (not packets as those are layer-3) because TCP is synchronized. UDP doesn't know or care if a segment is coming its way. It doesn't synchronize with the other end, so it has no expectation that any segment is coming. Using UDP, it is completely dependent on upper layer protocols to add any functionality like you want. – Ron Maupin Nov 04 '15 at 15:36
  • @RonMaupin I edited the title and the first paragraph. Now I am clarifying that the packets are just _sent_ at a fixed rate, and received as best-effort. Now the question is: what is the best effort I can make to receive as many packets as the infrastructure conditions allow? – heltonbiker Nov 04 '15 at 15:41
  • The entire IP stack is designed to handle unreliable networks. As has been pointed out, there's no guarantee that packets will arrive at the destination, and further, no guarantee that they'll arrive in a timely manner or that they'll arrive in the order that they were sent. I would have to question using IP at all for time-sensitive issues, but that's beyond the scope of your question. – Duston Nov 04 '15 at 16:13

1 Answer


Just read from your UdpClient as fast as you can (or as fast as is convenient for your application) using UdpClient.Receive(). The Receive() operation will block (wait) if there is no incoming packet, so calling Receive() three times in a row will always return three UDP packets.

Generally, when receiving UDP packets you do not have to worry about the sending rate; only the sender needs to worry about that. So the corresponding advice for the sender is the opposite: do not send a lot of UDP packets as fast as you can.
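For the sender side, pacing might look like the following sketch. The address, port, and packet count are illustrative placeholders, not values from the question; sleeping toward a stopwatch-anchored deadline avoids cumulative drift:

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;
using System.Threading;

// Sends 100 packets at ~1 ms intervals (1000 pkt/s); destination is a placeholder.
using var sender = new UdpClient();
var clock = Stopwatch.StartNew();
for (uint seq = 0; seq < 100; seq++)
{
    var payload = new byte[28];
    BitConverter.GetBytes(seq).CopyTo(payload, 0); // first 4 bytes: sequence number
    sender.Send(payload, payload.Length, "127.0.0.1", 11000);

    // Wait until the next 1 ms slot, measured from the start of the stream.
    int wait = (int)((seq + 1) - clock.ElapsedMilliseconds);
    if (wait > 0) Thread.Sleep(wait);
}
Console.WriteLine($"sent 100 packets in {clock.ElapsedMilliseconds} ms");
```

Note that Thread.Sleep granularity is coarse on some systems, so at 1 ms intervals the actual pacing will be approximate; for an embedded sender this loop would typically be driven by a hardware timer instead.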

So my answers would be:

  1. Yes, use UdpClient.

  2. Use synchronous receiving. Call UdpClient.Receive() in a loop running in your receive thread, or in the main loop of your program, whatever you like.

  3. You do not need to check for available data. UdpClient.Receive() will block until there is data available.

  4. Do not be afraid of losing packets when receiving UDP: you will almost never lose UDP packets on the receiving host because you did something wrong on the receiving side. UDP packets mostly get dropped by network components like routers somewhere on the network path, which cannot send (forward) UDP packets as fast as they receive them.
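Putting points 1–3 together, a minimal sketch of the threaded synchronous receive could look like this. The port is arbitrary, the loopback sender merely stands in for the embedded device so the sketch is self-contained, and BitConverter assumes the device sends the sequence number in the host's byte order:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class UdpReceiverDemo
{
    const int Port = 11000; // placeholder; use the port your device sends to

    static void Main()
    {
        using var client = new UdpClient(Port); // socket is bound here, so
                                                // packets buffer even before Receive()
        var receiveThread = new Thread(() =>
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            for (int i = 0; i < 3; i++)         // in a real host: while (running)
            {
                byte[] datagram = client.Receive(ref remote); // blocks until a packet arrives
                uint seq = BitConverter.ToUInt32(datagram, 0); // first 4 bytes = sequence number
                Console.WriteLine($"seq={seq} len={datagram.Length}");
            }
        });
        receiveThread.Start();

        // Loopback sender standing in for the embedded device.
        using var sender = new UdpClient();
        for (uint seq = 0; seq < 3; seq++)
        {
            var payload = new byte[28];
            BitConverter.GetBytes(seq).CopyTo(payload, 0);
            sender.Send(payload, payload.Length, "127.0.0.1", Port);
            Thread.Sleep(1); // ~the 1 ms gap of a 1000 pkt/s stream
        }
        receiveThread.Join();
    }
}
```

The key point is that the receive thread spends its idle time blocked inside Receive(), not spinning on Available, so it is ready the instant a datagram arrives.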

Once the packets arrive at your machine, your OS does a fair amount of buffering (for example 128k on Linux), so even if your application is unresponsive for say 1 second it will not lose any UDP packets.
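That OS-level buffer can also be enlarged explicitly if you expect the application to stall for longer. A sketch (the port and the 1 MiB figure are arbitrary, and the OS may clamp the request, e.g. via net.core.rmem_max on Linux, so the read-back value can differ from what was asked for):

```csharp
using System;
using System.Net.Sockets;

using var client = new UdpClient(11000);   // placeholder port
client.Client.ReceiveBufferSize = 1 << 20; // request ~1 MiB of OS-level buffering
Console.WriteLine($"effective receive buffer: {client.Client.ReceiveBufferSize} bytes");
```

At 28 bytes x 1000 packets/s the stream is only 28 kB/s, so even the default buffer gives the application on the order of seconds of slack.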

Johannes Overmann
  • Wow, that was a reassuring answer! I happen to have already solved the issue (didn't have time to come back and give some feedback), but I solved it with a mashup of lots of other answers, and the final form is exactly the way you suggested (emphasis on the _receive thread_ part). And once I realized at last that packets don't just "fly away" in the ether, but instead have several layers of buffering, the threaded synchronous "pattern" is the one and only way to go! Thank you very much! – heltonbiker Nov 10 '15 at 12:37