
I have a server and a client application where the client sends a bunch of packets to the server. The protocol used is UDP. The client application spawns a new thread to send the packets in a loop. The server application also spawns a new thread to wait for packets in a loop.

Both of these applications need to keep the UI updated with the transfer progress. How to properly keep the UI updated has already been solved in this question. Basically, both the server and the client raise an event (code below) on each loop iteration, and each application uses it to keep its UI updated with the progress. Something like this:

private void EVENTHANDLER_UpdateTransferProgress(long transferedBytes) {
    // Called on every loop iteration; the UI timer below reads these counters.
    receivedBytesCount += transferedBytes;
    packetCount++;
}

A timer in each application will keep the UI updated with the latest info from receivedBytesCount and packetCount.
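
For reference, a minimal sketch of what that could look like, assuming a WinForms UI; the form, timer and label names here are placeholders, not the actual code:

using System;
using System.Windows.Forms;

public partial class TransferForm : Form {
    // Incremented by the event handler shown above (from the worker thread).
    private long receivedBytesCount;
    private long packetCount;

    // A System.Windows.Forms.Timer ticks on the UI thread, so the labels
    // (bytesLabel/packetsLabel, assumed to come from the designer) can be
    // updated directly without Invoke.
    private void uiTimer_Tick(object sender, EventArgs e) {
        bytesLabel.Text = receivedBytesCount + " bytes";
        packetsLabel.Text = packetCount + " packets";
    }
}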

The client application has no problems at all: everything seems to work as expected and the UI is updated properly every time a packet is sent. The server is the problematic one...

When the transfer is complete, receivedBytesCount and packetCount do not match the total number of bytes nor the number of packets the client sent. Each packet is 512 bytes in size, by the way. The server application counts a packet as received right after the call to Socket.ReceiveFrom() returns, and for some reason it seems that not all the packets that should arrive actually do.
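
For context, the server's receive loop looks roughly like this. This is a simplified sketch rather than the actual code; the port number and variable names are placeholders:

using System.Net;
using System.Net.Sockets;

// Runs on the server's worker thread.
var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
serverSocket.Bind(new IPEndPoint(IPAddress.Any, 11000)); // placeholder port

var buffer = new byte[512];
EndPoint remote = new IPEndPoint(IPAddress.Any, 0);

while (true) {
    // Blocks until a datagram arrives; returns the number of bytes read.
    int received = serverSocket.ReceiveFrom(buffer, ref remote);

    // Counting happens right here; in the real code this is raised as the
    // event whose handler is shown above.
    EVENTHANDLER_UpdateTransferProgress(received);
}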

I know that I'm using UDP, which doesn't guarantee that packets actually arrive at the destination and performs no retransmission, so some packet loss is to be expected. But my question is: since I'm testing this locally, with both the server and the client on the same machine, why exactly is this happening?

If I put a Thread.Sleep(1) (which seems to translate to a ~15 ms pause) in the client's sending loop, the server receives all the packets. Since I'm doing this locally, the client is sending packets so fast (without the Sleep() call) that the server can't keep up. Is this the problem, or does it lie somewhere else?
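
For completeness, the client's sending loop is roughly the following; again a simplified sketch with placeholder names, showing where the Sleep(1) workaround goes:

using System.Net;
using System.Net.Sockets;
using System.Threading;

// Runs on the client's worker thread.
var clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
var server = new IPEndPoint(IPAddress.Loopback, 11000); // placeholder endpoint
var packet = new byte[512];
int packetTotal = 10000; // placeholder packet count

for (int i = 0; i < packetTotal; i++) {
    clientSocket.SendTo(packet, server);

    // In the real code this is raised as the progress event shown above.
    EVENTHANDLER_UpdateTransferProgress(packet.Length);

    // Workaround from the question: without this pause the loop outruns the
    // receiver; Sleep(1) rounds up to the system timer resolution (~15 ms).
    Thread.Sleep(1);
}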

rfgamaral
  • I think you can use Wireshark to check if the packets are lost – llj098 Apr 10 '12 at 12:53
  • I would try setting `ReceiveBufferSize` to a greater value such as 256K (default is 8K); see the sketch after these comments – L.B Apr 10 '12 at 12:54
  • @L.B I cannot do that, they must be 512 bytes. But I'm not trying to **solve** the packet loss, I'm trying to **understand** it. – rfgamaral Apr 10 '12 at 12:58
  • Setting `ReceiveBufferSize` wouldn't change your program's logic. You can continue to send or receive packets of 512 bytes; it is an option of UdpClient (or of the socket) – L.B Apr 10 '12 at 13:05
  • @L.B Sorry, I misunderstood it... That seems to have fixed it. But the documentation says "Consider increasing the buffer size if you are transferring large files, or you are using a high bandwidth, high latency connection" and since I'm doing this locally, it makes sense to increase it. But if I'm doing this across the network (wide or local), should I reduce it a bit or is it fine to leave it at 256K? – rfgamaral Apr 10 '12 at 13:22
  • Sorry, no idea. Best way would be to test and see. – L.B Apr 10 '12 at 13:25
  • @L.B Actually, changing the buffer size didn't seem to entirely fix the "problem". After a few more tests, I realized the packet loss happens less frequently. Most of the time, the server receives all the packets, but sometimes it doesn't. If I increase the buffer to something larger, it might "fix" the problem. But my initial question remains. I'm looking for a technical explanation why the packets are being lost... – rfgamaral Apr 10 '12 at 13:47
  • Is it because I'm using UDP and the transfer is so fast that the server can't cope with that speed and doesn't catch all the packets? – rfgamaral Apr 10 '12 at 13:53
  • I guess so. I had the same problem some time ago and increased the buffer size till it *seemed* to solve the problem, which was 256K in my case. I know it is not a real answer but at least a solution. A harder way would be adding sequence numbers and retransmits to your protocol. – L.B Apr 10 '12 at 14:03
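
For reference, a minimal sketch of the change L.B suggests above, reusing the hypothetical serverSocket from the receive-loop sketch; 256K is simply the value mentioned in the comments, and it only hints to the OS how much to queue, it does not change the 512-byte datagram size:

// Ask the OS for a larger receive buffer before entering the receive loop;
// datagrams are still 512 bytes, this only affects how many can be queued.
serverSocket.ReceiveBufferSize = 256 * 1024;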

1 Answer


'If I put a Thread.Sleep(1) (which seems to translate to a ~15 ms pause) in the client's sending loop, the server receives all the packets'

The socket buffers are getting full and the stack is discarding messages. UDP has no flow control, so if you try to send a huge number of datagrams in a tight loop, some will be discarded.

Use your sleep() loop (ugh!), implement some form of flow control on top of UDP, implement some form of non-network flow control (e.g. using async calls, buffer pools and inter-thread comms), or use a different protocol with flow control built in.
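
As a very rough illustration of the "flow control on top of UDP" option (not code from this answer; the window size and the one-byte ACK are made up, and the variable names reuse the hypothetical sketches from the question), the sender could pause after every N datagrams until the receiver acknowledges them:

// Made-up stop-and-go scheme: the client pauses after every WINDOW datagrams
// until the server sends a 1-byte ACK, so the receive buffer never overflows.
const int WINDOW = 32;
var ack = new byte[1];

// Client side: send a window of packets, then wait for the ACK.
for (int i = 0; i < packetTotal; i++) {
    clientSocket.SendTo(packet, server);
    if ((i + 1) % WINDOW == 0) {
        EndPoint from = new IPEndPoint(IPAddress.Any, 0);
        clientSocket.ReceiveFrom(ack, ref from); // blocks until the server ACKs
    }
}

// Server side: acknowledge after every WINDOW datagrams received.
int count = 0;
while (true) {
    int received = serverSocket.ReceiveFrom(buffer, ref remote);
    EVENTHANDLER_UpdateTransferProgress(received);
    if (++count % WINDOW == 0) {
        serverSocket.SendTo(ack, remote); // 1-byte ACK back to the client
    }
}

A real implementation would also need timeouts and retransmissions for lost datagrams and ACKs, along the lines of the sequence numbers L.B mentions in the comments.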

If you shovel stuff at the network stack faster than it can digest it, you should not be surprised if it throws up occasionally.

Martin James
  • Flow control is something I'm also supposed to do, but this question was not about solving the packet loss problem. Sorry if I wasn't clear on that. But your first paragraph (not counting the quote) answered my question. :) – rfgamaral Apr 10 '12 at 18:08