My understanding of UDP is that it is a best-effort protocol: data is simply sent and may or may not arrive. If I am sending data from A to B over the Internet using UDP, and the first link runs at 100 Mbps while the last runs at 10 Mbps, why don't I simply lose 90% of the data in practice?
In other words, how is flow control handled when using UDP?
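
To make the question concrete, here is a minimal Python sketch of what I mean by "simply sent": a sender that hands datagrams to the stack as fast as it can (the destination address and port are placeholders). Nothing in this code, or in UDP itself, appears to throttle the sender to match a slower downstream link:

```python
import socket
import time

# Placeholder destination; stands in for host B.
DEST = ("198.51.100.10", 9999)
PAYLOAD = b"x" * 1400  # stay under a typical 1500-byte Ethernet MTU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Blast datagrams for one second, as fast as the local stack
# accepts them. There are no ACKs, no window, no pacing: UDP
# gives the sender no feedback about the 10 Mbps bottleneck.
sent = 0
start = time.time()
while time.time() - start < 1.0:
    sock.sendto(PAYLOAD, DEST)
    sent += 1

mbits = sent * len(PAYLOAD) * 8 / 1e6
print(f"Handed {sent} datagrams ({mbits:.0f} Mbit) to the stack in 1 s")
```

If the sender can hand far more than 10 Mbit per second to the stack this way, naively I would expect the bottleneck link to drop most of it.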