My understanding of UDP is that it is a best-effort protocol: the data is simply sent and may or may not arrive. If I am sending data from A to B using UDP over the Internet, and the first link runs at 100 Mbps but the last at 10 Mbps, why don't I simply lose 90% of the data in practice?

In other words, how is flow control handled when using UDP?

Simd

1 Answer


While UDP is a best-effort protocol, Internet routers often have large buffers to absorb spikes in bandwidth usage without packet loss.

However, if you constantly push 100 Mb/s over a slower link, you will lose packets, possibly without even noticing. UDP has no ACK mechanism that could be used to track packet loss, so your PC will keep trying to send packets at 100 Mb/s. The only devices that can detect the loss are the router/PC in front of the slower path, whose buffers fill much faster than the slow link can drain them, causing congestion and dropped packets.
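As a rough illustration, here is a minimal Python sketch of such a naive sender (the receiver address and port are hypothetical placeholders). Note that sendto() returns as soon as the local kernel accepts the datagram, so nothing in the API tells the application that a slower link downstream is discarding most of the traffic:

```python
# Naive "blast" UDP sender sketch (receiver address is hypothetical).
import socket

DEST = ("192.0.2.1", 9000)   # hypothetical receiver (TEST-NET-1 address)
PAYLOAD = b"x" * 1200        # keep below a typical MTU to avoid fragmentation

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(100_000):
    sock.sendto(PAYLOAD, DEST)   # fire and forget: no ACKs, no backpressure
```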

shodanshok
  • Thank you. In practice, does this mean that small amounts of data sent over UDP will not get lost, even if the links run at different speeds, because of the buffering? But if I tried to stream a video, say, it would just fail? If so, how does anyone stream anything large over UDP over the Internet? – Simd Apr 01 '15 at 10:21
  • @dorothy If you did it in a naive way, by blasting as fast as possible, then yes, it would be hopeless. However, nothing prevents you from developing your own flow-control algorithm in your application (e.g. estimating the bandwidth as you go; see the pacing sketch after these comments). – richardb Apr 01 '15 at 10:27
  • @richardb That is very interesting. Are there standard protocols for application-level flow control? – Simd Apr 01 '15 at 10:28
  • @dorothy Many video streaming services embedded into browsers actually use TCP or RTSP (a specialized streaming protocol). Moreover, multicasting is heavily used by sites such as YouTube and similar. – shodanshok Apr 01 '15 at 10:32
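On the question of standard protocols for application-level flow control: RTP paired with RTCP receiver reports, TCP-Friendly Rate Control (TFRC, RFC 5348), and DCCP (RFC 4340) all provide rate or congestion control for UDP-style traffic, and QUIC builds full congestion control on top of UDP. As a rough sketch of the idea richardb describes, the following Python example paces UDP sends to an assumed bottleneck rate; the receiver address and the hard-coded 10 Mb/s target are hypothetical stand-ins for what a real protocol would learn from receiver feedback.

```python
# Pacing sketch over UDP (assumptions: the receiver address and the
# 10 Mb/s target rate are hypothetical; a real protocol would derive
# the rate from receiver feedback, e.g. RTCP reports or TFRC/RFC 5348).
import socket
import time

DEST = ("192.0.2.1", 9000)          # hypothetical receiver (TEST-NET-1)
PAYLOAD = b"x" * 1200               # keep below a typical MTU
RATE_BPS = 10_000_000 / 8           # assumed bottleneck: 10 Mb/s, in bytes/sec
INTERVAL = len(PAYLOAD) / RATE_BPS  # seconds between packets at that rate

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
next_send = time.monotonic()

for _ in range(100_000):
    delay = next_send - time.monotonic()
    if delay > 0:
        time.sleep(delay)           # pace: never exceed the target rate
    sock.sendto(PAYLOAD, DEST)
    next_send += INTERVAL           # schedule the next packet
```

A real implementation would adapt RATE_BPS up or down from observed loss or delay rather than fixing it in advance; the pacing loop itself stays the same.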