7

I am planning to add more web application servers to support an increasing number of clients, deploying HAProxy and Keepalived for load balancing and high availability.

My server usage has the following characteristics:

  1. Jobs are not CPU intensive. Messages are JSON text of fewer than 100 characters.
  2. Users send messages to the server through Client device Y, usually 4-5 messages per day.
  3. Client devices X keep waiting for messages from the server. If a message is available at the server, Client device X must be able to get it within 2 seconds; otherwise the message is outdated.

For these reasons:

  1. Client devices X use long-polling HTTP connections in order to be responsive. Each connection lasts for 5 seconds and then reconnects (a rough sketch of this follows the list).
  2. Client devices X and Client devices Y are connected to the same server, so X and Y can exchange messages easily.
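Roughly, each poll is handled like the sketch below (Servlet 3.0 async API as shipped with Tomcat 7; the deviceId parameter and the in-memory registry are illustrative placeholders, not my actual code):

    // Minimal long-polling sketch. A poll is parked for up to 5 seconds without
    // blocking a Tomcat worker thread; when Client device Y posts a message,
    // deliver() pushes it to the waiting Client device X and completes the request.
    import java.io.IOException;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/poll", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {

        // Hypothetical registry: one pending poll per Client device X.
        private static final ConcurrentHashMap<String, AsyncContext> WAITING =
                new ConcurrentHashMap<String, AsyncContext>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(5000); // the 5-second poll window
            WAITING.put(req.getParameter("deviceId"), ctx);
            // If nothing arrives, the request times out (an AsyncListener can turn
            // that into an empty 204) and the client simply reconnects.
        }

        // Hypothetical hook called when Client device Y posts a message.
        public static void deliver(String deviceId, String json) throws IOException {
            AsyncContext ctx = WAITING.remove(deviceId);
            if (ctx != null) {
                ctx.getResponse().setContentType("application/json");
                ctx.getResponse().getWriter().write(json);
                ctx.complete(); // respond well inside the 2-second freshness window
            }
        }
    }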

Question

If there are over 60,000 Client devices X connecting to the server, my load balancer or router will run out of TCP ports. What is the best way to scale up for, say, 20,000 users?

My server runs on Ubuntu Server, using Tomcat and Java Servlets.

Mickey
  • More than 60,000 clients will not cause your load balancer to run out of ports. Why do you think it will? – sciurus Dec 24 '13 at 18:44
  • I think you're using the wrong tool for the job. Avoid using HTTP. Set up a long-running socket connection instead that device X connects with. – Matthew Ife Dec 25 '13 at 00:18
  • @sciurus Thanks for your comment. I think HTTP long polling keeps using a TCP port until it disconnects. The total number of TCP ports for an IP address is 65,535 (IPv4). So I think more than 60,000 long-polling clients will use up all the TCP ports. – Mickey Dec 27 '13 at 02:20
  • @MIfe Thanks for your comment. In terms of TCP port usage, may I know the difference between a socket and HTTP? Can you point me in the right direction? – Mickey Dec 27 '13 at 02:22

2 Answers

6

I don't think your 60k clients are the actual problem. You are more likely to have a problem with not enough file descriptors, but that should be easy to fix as part of the OS configuration.

Here's why connections will not be your problem. Each connection is characterised by its source IP address, source port, destination IP address and destination port. Inside the network stack this quadruple is used to match packets to file descriptors (each file descriptor represents a connection). Your server has a fixed destination IP address and destination port (your server is the destination for the clients), but the source IP address and source port are variable. A port is a 16-bit number, therefore the maximum number of connections from one client is 64K. An IPv4 address is a 32-bit number, which gives you 4,294,967,296 possible source addresses. Doing some basic maths, your server could have 64K * 4,294,967,296 connections mapped to a single destination IP and port.
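You can see this from any client: two sockets to the same destination IP and port still get different source ports, so they are different quadruples. A quick sketch (example.com:80 is just a placeholder destination):

    // Two connections to the same destination IP and port: the OS assigns each
    // socket its own source port, so the quadruples differ and both can coexist.
    import java.net.Socket;

    public class QuadrupleDemo {
        public static void main(String[] args) throws Exception {
            Socket a = new Socket("example.com", 80);
            Socket b = new Socket("example.com", 80);
            System.out.println("a: " + a.getLocalAddress() + ":" + a.getLocalPort());
            System.out.println("b: " + b.getLocalAddress() + ":" + b.getLocalPort());
            a.close();
            b.close();
        }
    }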

This is why you are more likely to have a problem with the maximum number of open file descriptors than with the number of clients.
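To check how many file descriptors your Tomcat JVM is allowed and is currently using, something like this works on Oracle/OpenJDK on Linux (com.sun.management.UnixOperatingSystemMXBean is JDK-specific, not part of the standard API):

    // Prints the current and maximum file descriptor counts for this JVM.
    // If the maximum is too low for your expected number of concurrent long polls,
    // raise the "nofile" limit for the Tomcat user in the OS configuration.
    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdCheck {
        public static void main(String[] args) {
            UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            System.out.println("open file descriptors: " + os.getOpenFileDescriptorCount());
            System.out.println("max file descriptors:  " + os.getMaxFileDescriptorCount());
        }
    }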

markovuksanovic
  • I want to verify my understanding with an example. Say my server accepts client TCP connections on port 80 and keeps replying using TCP port 50001; can my server reuse TCP port 50001 to reply to another TCP socket connection? My understanding is that a server can only reuse a TCP port after the socket closes and TIME_WAIT expires. Therefore, if long polling uses up 65,535 TCP ports, my server cannot establish more. Please correct me if my concept is wrong. – Mickey Dec 30 '13 at 03:59
  • 1
    A socket is defined by a quadruple [source IP, source port, destination IP, destination port]. Let's say your server's IP address is 176.156.53.54 and it uses port 5001. You have one client using IP address 201.1.54.32 and port 12000, and another client using IP address 195.32.12.54 and port 11000. For each client that connects to your server a socket is created. For the first client the socket is defined by [176.156.53.54, 5001, 201.1.54.32, 12000]; the other client gets the socket [176.156.53.54, 5001, 195.32.12.54, 11000]. These are different sockets and each has its own file descriptor. – markovuksanovic Dec 30 '13 at 05:02
  • 1
    Think of a TCP connection as being defined by the [source IP, source port, destination IP, destination port] quadruple, not a single IP and port pair. An (IP, port) pair is just the "address" of your application. The connection is between a source and a destination. – markovuksanovic Dec 30 '13 at 05:45
1

The simplest approach might be to implement load balancing at the DNS level.

That means: have a round-robin DNS entry that balances across 2, 3, or more physical load balancers.
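For example, if the balanced hostname has several A records, clients resolve it to different addresses and spread across the load balancers. A quick way to inspect the records from Java (lb.example.com is a placeholder name):

    // Lists every A record behind a round-robin DNS name; each client picks one,
    // so traffic is spread across the load balancers behind those addresses.
    import java.net.InetAddress;

    public class RoundRobinLookup {
        public static void main(String[] args) throws Exception {
            for (InetAddress addr : InetAddress.getAllByName("lb.example.com")) {
                System.out.println(addr.getHostAddress());
            }
        }
    }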

Tom
  • I need to apply for multiple fixed IP addresses in order to do that. Can Client device X and Client device Y, forwarded from different load balancers, go to the same application server? X and Y belong to the same user. – Mickey Dec 24 '13 at 04:07