Consider the following network:
Access Point <--> Ethernet (100 Mbit/s) <--> Router 1
Router 1 <--> 50 Mbit/s Point to Point <--> Router 2
Router 2 <--> Ethernet (100 Mbit/s) <--> Server 1 and Server 2
Let Host1 upload a large file via FTP to Server1, and let Host2 send UDP packets to Server2. Now suppose a number of additional clients also use the first Ethernet connection. As the number of these clients grows, the UDP packet loss rate increases, which means the UDP throughput has probably decreased, and I wonder why. If the bottleneck (the 50 Mbit/s link) were congested, shouldn't the FTP sender (Host1) reduce its throughput? UDP doesn't react to congestion, so why would the UDP throughput decrease?
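
To make my reasoning concrete, here is a rough toy simulation (my own sketch, not part of the setup above) of just the 50 Mbit/s bottleneck link: one TCP-like flow that halves its rate on loss and probes upward otherwise, plus one constant-rate UDP-like flow. The rate values are hypothetical, and the Ethernet segments and the extra clients are deliberately left out; the point is only to show why I expect the adaptive FTP flow to yield at this link, so that the UDP flow should see little sustained loss there.

```python
# Toy fluid-level sketch of the 50 Mbit/s Router1 <-> Router2 link.
# One adaptive (TCP-like) flow and one fixed-rate (UDP-like) flow share it.

BOTTLENECK = 50.0   # Mbit/s, the Router1 <-> Router2 link
UDP_RATE   = 10.0   # Mbit/s, assumed constant-bit-rate sender (hypothetical value)

tcp_rate = 45.0     # Mbit/s, FTP flow's current sending rate (hypothetical start)

for step in range(20):
    offered = tcp_rate + UDP_RATE
    if offered > BOTTLENECK:
        # Link overloaded: both flows lose a proportional share of their traffic.
        loss_fraction = (offered - BOTTLENECK) / offered
        udp_loss = UDP_RATE * loss_fraction
        # The TCP-like flow reacts to loss (crude multiplicative decrease) ...
        tcp_rate /= 2.0
    else:
        udp_loss = 0.0
        # ... and probes for more bandwidth when there is none (additive increase).
        tcp_rate += 1.0
    print(f"step {step:2d}: tcp={tcp_rate:5.1f} Mbit/s, udp loss={udp_loss:4.1f} Mbit/s")
```

Running this, the TCP-like flow oscillates below the bottleneck capacity and the UDP-like flow only loses packets in brief bursts, which is exactly why the growing, sustained UDP loss in my scenario surprises me.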