
Consider the following network:

Access Point <--> Ethernet (100 Mbit/s)      <--> Router 1
Router 1     <--> Point-to-point (50 Mbit/s) <--> Router 2
Router 2     <--> Ethernet (100 Mbit/s)      <--> Server 1 and Server 2

Let Host1 upload a large file via FTP to Server1, and let Host2 transmit UDP packets to Server2. Now suppose a number of additional clients also happen to use the first Ethernet segment. As the number of these clients grows, the UDP packet loss rate increases, which means the UDP throughput has probably decreased. I wonder why. If the bottleneck (the 50 Mbit/s link) were congested, shouldn't the FTP client (Host1) reduce its throughput? UDP doesn't care about congestion, so why would the UDP throughput decrease?
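
For reference, a minimal sketch (not from the original post) of how the loss rate Host2 observes could be measured: the sender numbers each datagram and the receiver counts the gaps. The port, datagram count, payload size and pacing are placeholder assumptions.

    # Minimal UDP loss-measurement sketch. Host2 would run sender(), Server2
    # would run receiver(); port, count, payload size and pacing are made up.
    import socket
    import struct
    import time

    PORT = 9999              # hypothetical port
    COUNT = 10_000           # datagrams to send
    PAYLOAD = b"x" * 1000    # ~1 kB of filler per datagram

    def sender(dst_ip: str) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for seq in range(COUNT):
            # 4-byte sequence number in front of the filler payload
            sock.sendto(struct.pack("!I", seq) + PAYLOAD, (dst_ip, PORT))
            time.sleep(0.001)  # crude pacing; no congestion control at all

    def receiver() -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        sock.settimeout(5.0)   # stop once the sender has gone quiet
        received, highest = 0, -1
        try:
            while True:
                data, _ = sock.recvfrom(2048)
                received += 1
                highest = max(highest, struct.unpack("!I", data[:4])[0])
        except socket.timeout:
            pass
        if highest >= 0:
            loss = 1 - received / (highest + 1)
            print(f"received {received}/{highest + 1} datagrams, loss ~{loss:.1%}")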

user503842
  • You can know there's no congestion by ensuring that nothing else tries to use the link at the same time. Otherwise congestion is guaranteed. So this question doesn't make much sense. – Michael Hampton Aug 03 '18 at 21:31
  • @Michael Hampton: I have edited the question quite a bit. I hope it is not too confusing now. – user503842 Aug 03 '18 at 22:04
  • I assume that Host1 and Host2 are using the Access Point. Am I correct? Also, when you say "using the Ethernet", do you know what kind of traffic it is (unicast versus multicast)? I am thinking that losing UDP packets is common in noisy and/or congested wireless networks, depending on the 802.11 media access in use. – Pablo Aug 03 '18 at 22:31

1 Answer


If you have an 8 Mbps link on your server, and the setup is really as simple as Client--100Mbps--Router--8Mbps--Server, there definitely will be network congestion ...
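
As a rough illustration of that point (a toy model with made-up numbers, not taken from the answer itself): a single FIFO queue in front of the slower link is shared by a TCP-like flow that backs off on loss and a constant-rate UDP flow. Once the combined offered load from the extra senders exceeds the link capacity, the queue overflows and the drops hit both flows, so the UDP loss rate climbs even though only TCP reacts.

    # Toy fluid model of a shared bottleneck: a TCP-like flow (additive
    # increase, multiplicative decrease) and a fixed-rate UDP flow share one
    # FIFO queue. All numbers are arbitrary; the point is the trend.
    CAPACITY = 50.0     # Mbit/s drained by the bottleneck link per step
    QUEUE_LIMIT = 5.0   # Mbit of router buffer before drops start
    UDP_RATE = 10.0     # Mbit/s offered by the UDP sender, never adjusted
    STEPS = 200         # one-second time steps

    def udp_loss(background: float) -> float:
        """Fraction of UDP traffic dropped for a given background load (Mbit/s)."""
        tcp_rate, queue, udp_sent, udp_lost = 1.0, 0.0, 0.0, 0.0
        for _ in range(STEPS):
            offered = tcp_rate + UDP_RATE + background
            queue = max(queue + offered - CAPACITY, 0.0)
            if queue > QUEUE_LIMIT:
                dropped = queue - QUEUE_LIMIT
                queue = QUEUE_LIMIT
                # the FIFO drops traffic in proportion to each flow's share
                udp_lost += dropped * UDP_RATE / offered
                tcp_rate = max(tcp_rate / 2, 1.0)   # TCP backs off on loss
            else:
                tcp_rate += 1.0                     # TCP probes for more bandwidth
            udp_sent += UDP_RATE
        return udp_lost / udp_sent

    for bg in (0, 20, 40, 60):
        print(f"background {bg:>2} Mbit/s -> UDP loss ~{udp_loss(bg):.1%}")

Running it shows the UDP loss fraction growing with the background load: TCP's back-off frees some capacity, but it cannot keep the queue from overflowing once the other senders alone approach or exceed the link rate.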

silmaril
  • And can you tell me why? – user503842 Aug 03 '18 at 22:04
  • Well ... because the links aren't of the same bandwidth; it doesn't take anything more than that. Except if you are able to throttle the FTP upload on the client side AND there is only one client and only this traffic on the server link. – silmaril Aug 05 '18 at 14:38
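
For completeness, a rough sketch (not from the thread) of what throttling the FTP upload on the client side could look like with Python's ftplib; the host, credentials, file name and target rate are all placeholders.

    # Client-side FTP upload throttling sketch: sleep after each block that
    # storbinary() sends so the upload stays below a target rate.
    # Host, credentials, file name and target rate are hypothetical.
    import ftplib
    import time

    TARGET_MBPS = 20   # keep the upload well under the 50 Mbit/s bottleneck
    BLOCK = 8192       # bytes per block handed to storbinary()
    DELAY = BLOCK * 8 / (TARGET_MBPS * 1_000_000)   # seconds per block at that rate

    def throttled_upload(filename: str) -> None:
        ftp = ftplib.FTP("server1.example")          # placeholder host
        ftp.login("user", "password")                # placeholder credentials
        with open(filename, "rb") as f:
            # storbinary calls the callback after each block is sent
            ftp.storbinary(f"STOR {filename}", f, blocksize=BLOCK,
                           callback=lambda _block: time.sleep(DELAY))
        ftp.quit()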