
AFAIK, most reliable transfer protocols like TCP tend to use some signal (typically packet loss) to estimate the bandwidth of the bottleneck link.

In my use case, however, the bottleneck bandwidth is a known, fixed 100 Mbps, and the link is not shared with any other devices.

However, this link has a very high packet loss rate: 7% to be exact, often concentrated in short bursts. This means TCP frequently assumes packets were lost due to congestion and cuts my transfer rate, when in reality it's just regular packet corruption.

As a result, I can't even use a tenth of my 100 Mbps bottleneck. It doesn't improve much even with protocols like KCP, which reacts to packet loss far less aggressively than TCP.
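For a sense of the ceiling involved: the classic Mathis et al. formula bounds loss-based (Reno-style) TCP throughput at roughly (MSS/RTT) · 1.22/√p. A quick sketch, where the MSS and RTT are assumed values (only the 7% loss rate is from my setup):

```python
# Rough upper bound on loss-based TCP throughput (Mathis et al., 1997):
#   throughput <= (MSS / RTT) * (C / sqrt(p)),  with C ~ 1.22 for Reno-style TCP
import math

mss_bytes = 1460    # typical Ethernet MSS (assumption)
rtt_s = 0.05        # assumed 50 ms round-trip time
loss = 0.07         # the 7% loss rate on my link

throughput_bps = (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss))
print(f"~{throughput_bps / 1e6:.2f} Mbit/s")  # ~1.08 Mbit/s
```

With those assumptions the bound works out to about 1 Mbps on a 100 Mbps link, which matches what I'm seeing.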

Is there any protocol that doesn't try to "guess" the bottleneck from packet loss? That feature is useless in my case and severely limits my bandwidth.

1 Answer


The sending rate in TCP is controlled by the sender's congestion control algorithm, and multiple congestion control algorithms have been developed for TCP. You should look at the different options and try them in your use case.
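For instance, on Linux you can select the algorithm per socket with the `TCP_CONGESTION` socket option, without touching the system-wide default. A minimal sketch (Linux-specific; assumes the named algorithm is available in your kernel, e.g. the `tcp_bbr` module is loaded):

```python
# Sketch: pick a congestion control algorithm for one socket on Linux.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")

# Confirm which algorithm the kernel actually applied
algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print(algo.strip(b"\x00").decode())  # "bbr" if available

sock.connect(("example.com", 80))  # placeholder peer, just for illustration
```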

Also, TCP has many tunable parameters that can help in your situation.
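One example of such a tunable: raising the socket buffer limits so the window can ride out bursts of retransmissions. On Linux these live under `/proc/sys`; the sketch below is illustrative (the values are placeholders, not recommendations, and writing them requires root):

```python
# Sketch: inspect and raise TCP buffer limits via procfs (Linux, run as root).
from pathlib import Path

SYSCTLS = {
    # min / default / max socket buffer sizes in bytes (illustrative values)
    "net/ipv4/tcp_rmem": "4096 131072 16777216",
    "net/ipv4/tcp_wmem": "4096 131072 16777216",
}

for name, value in SYSCTLS.items():
    path = Path("/proc/sys") / name
    print(f"{name}: current = {path.read_text().strip()}")
    path.write_text(value)  # requires root privileges
```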

For example, the BBR algorithm developed by Google might solve your issue: it builds a model of the bottleneck bandwidth and round-trip time instead of treating packet loss as a congestion signal. You can read more details about it in this Medium article.
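If you want to try BBR system-wide rather than per socket, something like the following should work on Linux 4.9+ (a sketch, run as root; equivalent to `modprobe tcp_bbr` followed by `sysctl -w net.ipv4.tcp_congestion_control=bbr`):

```python
# Sketch: switch the system-wide default congestion control to BBR (Linux 4.9+).
import subprocess
from pathlib import Path

subprocess.run(["modprobe", "tcp_bbr"], check=True)

avail = Path("/proc/sys/net/ipv4/tcp_available_congestion_control").read_text()
if "bbr" in avail.split():
    Path("/proc/sys/net/ipv4/tcp_congestion_control").write_text("bbr")
    print("default congestion control set to bbr")
```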

Tero Kilkanen