Our application is growing fast, so we now need to scale the infrastructure so the whole website doesn't slow down, regardless of how many users we have.

The backend system is based on many curl calls that take 1-10 seconds each, and they need to be executed in parallel.

We currently have a VPS with 4 GB of RAM and 4 cores, but we sometimes get 503 errors when all 50 channels start in parallel (we set up 50 cron jobs), each of which executes 5 curl calls.
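
For reference, this is a minimal sketch of how one channel's curl calls could run concurrently in a single PHP process with `curl_multi`, instead of one after the other; the URLs and timeout are placeholders, not our real endpoints or settings:

```php
<?php
// Placeholder endpoints standing in for one channel's 5 calls.
$urls = [
    'https://example.com/feed/1',
    'https://example.com/feed/2',
    'https://example.com/feed/3',
];

$multi = curl_multi_init();
$handles = [];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the body instead of printing it
    curl_setopt($ch, CURLOPT_TIMEOUT, 15);          // cap each call; they normally take 1-10 s
    curl_multi_add_handle($multi, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers concurrently from one process.
do {
    $status = curl_multi_exec($multi, $running);
    if ($running) {
        curl_multi_select($multi); // wait for network activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

$responses = [];
foreach ($handles as $url => $ch) {
    $responses[$url] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);
```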

These numbers are going to grow fast, so we need to find a solution that keeps the end user from waiting more than a minute to get their results.

Do we need a dedicated server, or would that not make much difference? Or should we set up AWS Lambda functions with an SNS queue? Amazon would seem to be the best solution, because however many SNS messages we put in the queue, they get processed immediately once the trigger is enabled, but Lambda currently doesn't support PHP, so we would have to rewrite all the code in Python.
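
For context, this is roughly how we picture the SNS side, as a sketch using the AWS SDK for PHP (the region, topic ARN and payload fields are placeholders): each cron channel would just publish one message per curl job, and a Lambda function subscribed to the topic, written in Python or another supported runtime, would perform the actual HTTP call.

```php
<?php
require 'vendor/autoload.php'; // AWS SDK for PHP installed via Composer

use Aws\Sns\SnsClient;

// Credentials are assumed to come from the default provider chain
// (environment variables or an instance role).
$sns = new SnsClient([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Publish one message per curl job; the subscribed Lambda function
// (Python, since PHP isn't a supported runtime) does the HTTP call.
$sns->publish([
    'TopicArn' => 'arn:aws:sns:us-east-1:123456789012:curl-jobs',
    'Message'  => json_encode([
        'channel' => 7,
        'url'     => 'https://example.com/feed/7',
    ]),
]);
```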

How would you manage this situation to improve the performance?

  • scale out, add load balancers, look at this site for ideas... http://nickcraver.com/blog/2016/02/17/stack-overflow-the-architecture-2016-edition/ – Jacob Evans Feb 20 '17 at 23:31
  • Have you tried spinning off the curls onto another VPS and updating your DB remotely (replication or through an API or directly)? Tried optimising your requests? – Andy Verhoef Feb 21 '17 at 00:20

0 Answers