
I'm trying to create a rather unusual (imo) configuration where I have:

  • nginx
  • php-fastcgi
  • mysql
  • 1000 separate WordPress installs (with WP Super Cache). Each WP install corresponds to a separate subdomain.

Furthermore, I have 1000 cron jobs being called every hour that in turn call a WP plugin (using wget) which retrieves data from an API and posts it to the respective blog.

This is all being run on a virtual server with 1024MB of RAM, 4 shared processors, etc. The server is not doing well, especially during the times that the cron jobs are being executed. Nginx constantly throws 504 errors and the site has a significant lag.

  1. Am I crazy for having 1000 individual WP installs? Should I be using WP-MU, and would that help significantly? (Certain plugin restrictions make me prefer separate installs, but I could switch if need be.)

  2. Instead of having 1000 unique cron jobs, should I be calling a single bash script that then makes the 1000 HTTP requests I need? Could those requests be made concurrently instead of sequentially?

  3. Any other suggestions for optimization? Should I be proxying to Apache instead of using nginx alone? Any advice would be appreciated.

Thanks in advance

Update: Thanks a lot for your responses. I'm going to switch to WP-MU and redo the cron jobs. I'm currently using spawn-fcgi but will switch to php-fpm. Appreciate the advice.

GTE

2 Answers


1) 1000 unique installs will remove any benefit that caching can provide you. Ideally you would run one install (which would be cached in memory via APC or similar), with a bunch of different databases. WP-SuperCache might help with your caching issues though, as it should render everything to static HTML files. Basically, you want to remove as much dynamic content as you can from each page.
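To get the most out of Super Cache's static files, a common nginx pattern is to serve the pre-rendered HTML straight from disk and only fall back to PHP on a cache miss. A minimal sketch (the cache path below is Super Cache's default layout and may differ on your setup; a production config would also skip the cache for logged-in cookies and non-GET requests):

```
location / {
    # Serve the pre-rendered page straight from disk when it exists;
    # only cache misses fall through to WordPress/PHP.
    try_files /wp-content/cache/supercache/$host$request_uri/index.html
              $uri $uri/ /index.php?$args;
}
```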

2) Running all 1000 jobs sequentially is probably a bad idea (are you sure 1000 requests can complete within the hour?). At the same time, running all 1000 jobs in parallel is also a bad idea (can your web server withstand 1000 reqs/sec?). I'd suggest something in the middle: perhaps start 10 processes, each of which handles 100 blogs with a random delay between requests.
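A rough sketch of that middle ground in bash. `blogs.txt` is a hypothetical file listing the 1000 cron URLs, one per line; the script splits it into 10 chunks and runs one background worker per chunk:

```shell
#!/bin/bash
# Sketch: fan 1000 cron URLs out over 10 workers instead of 1000 cron jobs.
# Assumes blogs.txt (hypothetical name) holds one URL per line.
URL_LIST=blogs.txt
WORKERS=10

if [ -r "$URL_LIST" ]; then
    # Split the list into one chunk per worker (GNU coreutils split).
    split -n l/"$WORKERS" "$URL_LIST" /tmp/blogchunk.

    for chunk in /tmp/blogchunk.*; do
        (
            while IFS= read -r url; do
                wget -q -O /dev/null "$url"
                sleep $((RANDOM % 10))   # random pause so requests spread out
            done < "$chunk"
        ) &                              # each worker runs in the background
    done
    wait                                 # block until all workers finish
    rm -f /tmp/blogchunk.*
fi
```

A single hourly crontab entry pointing at this script then replaces the 1000 individual entries.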

3) If you can use pure nginx + php-fastcgi, stick with it. Apache is not going to help you in any way.

Are you using php-fastcgi (e.g. spawn-fcgi) or php-fpm? php-fpm would be my suggestion, as you can configure it to spawn more processes when load is higher.
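For reference, the dynamic spawning lives in the php-fpm pool config. A minimal sketch (the path varies by distro, and these values are illustrative rather than tuned for a 1 GB box):

```
; e.g. /etc/php-fpm.d/www.conf
[www]
listen = 127.0.0.1:9000

; "dynamic" lets php-fpm grow and shrink the worker pool with load
pm = dynamic
pm.max_children = 8       ; hard cap - keep this low with 1024MB of RAM
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4
```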

devicenull

All good advice there from devicenull. I'm in a similar position for a client, with many installs of the same software. Xcache (similar to APC) greatly improved performance, but I had to give 4 GB of memory to the cache.

With WP Super Cache, the vast majority of your web requests should be for pure HTML/images, which are very fast. Are you sure it's not your 1000 cron jobs that are causing the general site delays? I would think they'd take most of the hour to execute!

Rafiq Maniar