
Note: I am asking about outbound concurrent connection limits, not inbound, which are already well covered in existing questions.

Modern browsers typically open a large number of simultaneous connections to a server, to take advantage of the fact that TCP shares bandwidth roughly fairly between connections (so more connections means a larger share). Of course, this doesn't result in fair sharing between users, and some servers have started penalizing hosts that open too many connections. The limit can be configured client-side (e.g. IE's MaxConnectionsPerServer registry value, Firefox's network.http.max-connections-per-server preference), but the method differs for each browser and version, and many users aren't able to adjust it themselves. So we turn to a Squid transparent HTTP proxy for central management of HTTP downloads.
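For reference, the IE knob mentioned above is a per-user registry value (the value 4 here is purely illustrative, not a recommendation):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"MaxConnectionsPerServer"=dword:00000004
"MaxConnectionsPer1_0Server"=dword:00000004
```

The Firefox equivalent is set via about:config or a user.js line such as `user_pref("network.http.max-connections-per-server", 4);`. Exactly because each browser needs its own tweak like this, a central proxy-side limit is more attractive.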

How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time to each server, delaying (but not dropping) the others.
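To make the desired behavior concrete, here is a minimal sketch (in Python, purely illustrative — Squid has no such directive, and all names here are made up) of the queueing semantics I'm after: a per-server semaphore that lets N requests through at a time and makes the rest wait rather than fail:

```python
import asyncio
from collections import defaultdict

N = 2  # hypothetical per-server outbound concurrency limit

class PerServerLimiter:
    """Forward at most `limit` requests per origin server at once."""
    def __init__(self, limit):
        self._sems = defaultdict(lambda: asyncio.Semaphore(limit))

    async def fetch(self, host, work):
        # Requests beyond the limit wait here: delayed, never dropped.
        async with self._sems[host]:
            return await work()

async def demo():
    limiter = PerServerLimiter(N)
    active, peak = 0, 0

    async def simulated_request():
        nonlocal active, peak
        async def work():
            nonlocal active, peak
            active += 1
            peak = max(peak, active)
            await asyncio.sleep(0.01)  # stand-in for talking to the origin
            active -= 1
        await limiter.fetch("example.com", work)

    # Ten concurrent client requests to the same server...
    await asyncio.gather(*(simulated_request() for _ in range(10)))
    # ...but the origin never sees more than N of them at once.
    return peak

peak_concurrency = asyncio.run(demo())
print(peak_concurrency)
```

All ten requests complete; the origin server only ever sees N in flight. That is the queue-and-delay behavior I'd like Squid to apply per destination (ideally keyed on source address as well).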

Ben Voigt

1 Answer


That would require some kind of request queue and inter-process communication, which would make request handling much slower. I'm not aware of any proxy that supports this.

Please note that most users never change the number of simultaneous connections in their browsers, so in practice this is rarely a problem.

FINESEC
  • The default settings in the browser are too high already. The fact that most users don't change the number of simultaneous connections to a more reasonable level is the problem. – Ben Voigt Nov 26 '12 at 15:18
  • Furthermore, I'm fairly certain that Squid already has interprocess communications, as it both limits simultaneous connections from a single client and shares in-memory cache across multiple clients. – Ben Voigt Nov 26 '12 at 15:19
  • Yes, but requests are handled in a FIFO manner (a request comes in, it is handled, move to the next request). What you're trying to achieve would require a request queue and synchronization between all the processes handling requests. Typically a thread would need to wait until it can handle a request, or else return an error. While it could be done, it would require changing some Squid code and would introduce latency. – FINESEC Nov 26 '12 at 19:05
  • Dropping connections introduces a lot more latency than a queue, however. – Ben Voigt Nov 27 '12 at 03:24