
Headline: Is there a problem with sending/receiving data to/from a server across multiple connections/sessions?

Background: One of the problems I see running apps coast-to-coast is that maximum throughput drops off dramatically. In the office, I can use ~90% of a line to move data; on a 10 Mbps coast-to-coast connection/session I'll only reach ~1.65 Mbps because of loss and latency. If apps were architected for parallel transfer across multiple connections/sessions it would be a different story: I could reach n x 1.65 Mbps, where "n" is the number of connections. But all too often, the apps I see seem to use only one connection.
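
For concreteness, below is a minimal sketch of the kind of parallelism I mean, assuming the server supports HTTP Range requests; the URL, connection count, and helper names are just illustrative placeholders. A single TCP connection is roughly limited to window size divided by round-trip time (a 64 KB window over a ~300 ms coast-to-coast RTT works out to around 1.7 Mbps), so opening n connections lets each one hit that limit independently.

    # Minimal sketch of parallel transfer over multiple connections, assuming the
    # server supports HTTP Range requests. The URL and connection count below are
    # hypothetical placeholders.
    import concurrent.futures
    import urllib.request

    URL = "https://example.com/large-file.bin"  # hypothetical endpoint
    CONNECTIONS = 4                              # the "n" in n x 1.65 Mbps

    def content_length(url):
        # HEAD request to learn the total object size so it can be split into ranges.
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            return int(resp.headers["Content-Length"])

    def fetch_range(url, start, end):
        # Each worker opens its own TCP connection and pulls one byte range, so the
        # per-connection window/RTT limit applies to each range independently.
        req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return start, resp.read()

    def parallel_download(url, connections=CONNECTIONS):
        size = content_length(url)
        chunk = size // connections
        ranges = [(i * chunk,
                   size - 1 if i == connections - 1 else (i + 1) * chunk - 1)
                  for i in range(connections)]
        parts = {}
        with concurrent.futures.ThreadPoolExecutor(max_workers=connections) as pool:
            for start, data in pool.map(lambda r: fetch_range(url, *r), ranges):
                parts[start] = data
        # Reassemble the ranges in order.
        return b"".join(parts[s] for s in sorted(parts))

    if __name__ == "__main__":
        data = parallel_download(URL)
        print(f"downloaded {len(data)} bytes over {CONNECTIONS} connections")

Download accelerators and tools like GridFTP take essentially this approach; the main costs are extra connection state on the server and reassembly logic on the client.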

So I'm wondering, why don't more apps work across more than one connection? Is it bad practice? Difficult to implement? Resource intensive? etc. etc.

Cœur

1 Answer


This is how the internet works - most companies can't have multiple dedicated routes to each client and have to depend on shared infrastructure to deliver network packets. Note that each packet may take a different route between the endpoints, so in a sense you already have "multiple connections" on the general internet.

If you have enough resources you can lay your own network and connect each customer with multiple dedicated links. The cost is prohibitive in most cases, so this approach is mostly seen in data-center-to-data-center communication, where there is enough traffic to justify the cost of dedicated routes (still more likely rented than laid by the companies themselves), or in links to stock exchanges.

Alexei Levenkov