AFAIK, most reliable transfer protocols like TCP tend to use some signal (usually packet loss) to estimate the capacity of the bottleneck link.
In my use case, however, the bottleneck bandwidth is known to be 100Mbps, and the link is not shared with any other devices.
But this link has a very high packet loss rate: about 7%, often concentrated in short bursts. TCP therefore keeps assuming my packets were lost to congestion and cuts the transfer rate, when in reality it's just ordinary packet corruption.
As a result, I can't even use 1/10 of my 100Mbps bottleneck. It doesn't improve much even with protocols like KCP, which backs off far less aggressively than TCP in response to packet loss.
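To put a rough number on it: the classic Mathis approximation says loss-limited TCP throughput is about MSS / (RTT · √p). Here is a quick sanity check, where the 50 ms RTT is just an assumed value (my post doesn't pin it down) and the 7% loss rate is the one described above:

```python
import math

# Mathis et al. approximation for loss-limited TCP throughput:
#   throughput <= (MSS / RTT) * (1 / sqrt(p))
MSS_BYTES = 1460        # typical Ethernet MSS
RTT_SECONDS = 0.05      # assumed 50 ms round-trip time (illustrative)
LOSS_RATE = 0.07        # ~7% packet loss, as described above

throughput_bps = (MSS_BYTES * 8 / RTT_SECONDS) / math.sqrt(LOSS_RATE)
print(f"Loss-limited TCP throughput: {throughput_bps / 1e6:.2f} Mbps")
# ~0.88 Mbps -- nowhere near the 100 Mbps link capacity
```

So with this kind of loss rate, loss-based congestion control caps me at around 1 Mbps regardless of how big the pipe actually is, which matches what I'm seeing.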
Is there any protocol that doesn't try to "guess" the bottleneck capacity from packet loss? That mechanism is useless in my case and severely limits my bandwidth.