
I'm building an application that will start small but could grow, so I may as well think about scalability from the start.

I'm going to use a message queue, partly for scalability but also to process some tasks in the background.

I'll be heavily using Twitter/Facebook APIs too (although caching as best as can to reduce number of calls).

It's a PHP application built on a heavyweight framework, and I expect the CPU, RAM and database to get some use.

The message queue is a service accessed from an external API.

Many requests could involve API calls plus message queue calls, and now I'm wondering whether I'll end up hitting the network allowances. Still, I'm thinking that putting data on a message queue is better than leaning on the DB so heavily.

Is there a limit in Linux on the NUMBER of connections? I'm guessing that if I stay within the server's bandwidth allowance (probably 500 Mb/s) I should be fine, but surely the number of individual connections also needs to be considered.

I can fiddle with settings and increase the server size either now or in the future, but I'm wondering whether making 3 external calls for every web request is too many, and whether I should design the application differently from the start.

user2143356

1 Answer


If you're really thinking about scalability, I'd suggest using a non-blocking I/O model like the one Node.js uses.

But if you're already set on a big PHP framework, I'm not the best person to advise you; still, 3 external calls for every request doesn't seem like a good idea to me.

However, if you're still interested in the Linux maximum number of connections:

https://stackoverflow.com/questions/410616/increasing-the-maximum-number-of-tcp-ip-connections-in-linux
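For a quick sanity check, the relevant kernel knobs can be inspected directly. The values in the commented-out lines are illustrative assumptions, not recommendations; tune them to your workload:

```shell
# Ephemeral (local) port range: caps concurrent outbound connections
# to a single destination IP:port, since each needs a distinct local port.
cat /proc/sys/net/ipv4/ip_local_port_range

# Listen backlog ceiling for incoming connections (net.core.somaxconn).
cat /proc/sys/net/core/somaxconn

# Illustrative values only; persist changes in /etc/sysctl.conf
# rather than running these ad hoc (requires root):
# sysctl -w net.ipv4.ip_local_port_range="15000 65000"
# sysctl -w net.core.somaxconn=1024
```

Note that the ephemeral port range only limits connections per destination address and port, so it matters most when hammering a single external API endpoint.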

And for the maximum number of open files:

http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
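The open-file limits are usually the ones that bite first, since every TCP connection consumes a file descriptor. A minimal sketch of checking them (the 65535 figure is an illustrative assumption):

```shell
# Per-process soft limit on open file descriptors; each socket to your
# message queue or the Twitter/Facebook APIs counts against it.
ulimit -n

# System-wide ceiling on open file handles.
cat /proc/sys/fs/file-max

# Illustrative: raise the soft limit for the current shell session.
# Persistent per-user changes go in /etc/security/limits.conf.
# ulimit -n 65535
```

In practice, PHP-FPM workers each have their own descriptor limit, so the per-process soft limit matters more than the system-wide ceiling for a setup like yours.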

Regards,

Pablo Recalde