
A bit of a big-picture question here, but I was wondering: what is the logic behind not just having incredibly large network-related buffers?

One would think that getting as much of the data as close to the target as quickly as possible, whether the target is an external machine (TCP buffers, etc.) or internal to the machine (ring buffers, etc.), would be ideal?

anonymous-one

1 Answer


You're missing an important point: the effect that traffic for one flow/connection has on other flows. If you only ever had one flow of traffic on your link, this might work fine.

But all those packets from a big transfer rushing toward the bottleneck link (say, the one near your home) have to contend with all the other traffic trying to get through that bottleneck.

And here's where Bufferbloat rears its ugly head. Traffic that needs to be responsive (gaming, VoIP, FaceTime/conferencing, DNS lookups, etc.) gets buried behind all those packets from the big flow. Its latency/lag can grow to several seconds as it waits its turn to be sent through the slower ISP link.
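
To put rough numbers on that, here's a minimal back-of-the-envelope sketch: in a plain FIFO buffer, every byte already queued must be transmitted before a newly arrived packet, so the added delay is simply queued bytes ÷ link rate. The buffer sizes and the 10 Mbit/s uplink below are illustrative assumptions, not measurements:

```python
# Queueing delay behind a FIFO buffer: queued bits / link rate.
# The link speed and buffer sizes are assumed for illustration only.
LINK_RATE_BPS = 10_000_000  # assumed 10 Mbit/s uplink

for buffer_bytes in (64_000, 1_000_000, 8_000_000):
    delay_s = buffer_bytes * 8 / LINK_RATE_BPS
    print(f"{buffer_bytes / 1e6:5.2f} MB queued -> "
          f"{delay_s * 1000:6.0f} ms added latency for every packet behind it")
```

One megabyte of queued bulk data on that uplink already adds 800 ms; eight megabytes adds over six seconds of delay, which is exactly the multi-second lag described above. Making the buffer bigger makes this worse, not better.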

You need an intelligent router to sort out which packet to send next, and which senders to slow down. Smart Queue Management (SQM) algorithms such as fq_codel and cake manage all this. There's much more information at: What Can I Do About Bufferbloat?
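
For intuition on why those algorithms help, here's a toy Python sketch of the flow-queueing idea behind fq_codel and cake. It is not the real algorithm (no CoDel drop logic, no byte-based deficit rounds), just per-flow queues served round-robin, with made-up flow names:

```python
import collections

def fifo_wait(packets, flow):
    """Packets sent before the first `flow` packet in one shared FIFO."""
    return next(i for i, f in enumerate(packets) if f == flow)

def fq_wait(packets, flow):
    """Packets sent before the first `flow` packet when each flow gets
    its own queue and the queues are served round-robin."""
    queues = collections.defaultdict(collections.deque)
    for i, f in enumerate(packets):
        queues[f].append(i)
    sent = 0
    while any(queues.values()):
        for q in queues.values():
            if not q:
                continue
            i = q.popleft()
            if packets[i] == flow:
                return sent
            sent += 1

# 500 bulk-transfer packets arrive just ahead of one VoIP packet.
arrivals = ["bulk"] * 500 + ["voip"]
print("FIFO: voip waits behind", fifo_wait(arrivals, "voip"), "packets")
print("FQ:   voip waits behind", fq_wait(arrivals, "voip"), "packets")
```

In the shared FIFO the VoIP packet waits behind all 500 bulk packets; with per-flow queues it goes out on the first round-robin pass. That flow isolation is the core of what fq_codel and cake buy you, independent of how large the buffer is.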