
I am running a heavy, real-time updating website. The resources needed per user are quite high; I'll give you an example.

Setup

Every visit
The application is PHP/MySQL, so on every visit both static and dynamic content are loaded.
Resources: Apache, PHP, MySQL

Every second (anything longer than a second would simply be too slow)
The website needs to be updated in real time, so every second there is an ajax call that updates the page, roughly like the sketch below.
Resources: jQuery, Apache, PHP, MySQL
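For concreteness, the current client-side loop looks roughly like this (a sketch: the /update.php URL and the updatePage() callback are placeholders for the real endpoint and update code):

```javascript
// Poll once per second: one ajax round trip per tick.
// '/update.php' and updatePage() are placeholder names.
setInterval(function () {
  $.ajax({
    url: '/update.php',
    dataType: 'json',
    success: function (data) {
      updatePage(data); // apply the fresh data to the DOM
    }
  });
}, 1000);
```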

Average spending for a single user (one minute on the site and 3 pages visited: 3 page loads plus 60 ajax polls makes about 63 hits)

  • Apache: +/- 63 requests / responses serving static and dynamic content (img, css, js, html)
  • PHP: +/- 63 requests / responses
  • MySQL: +/- 63 requests / responses
  • jQuery: +/- 60 requests / responses

Optimization

I want to optimize this process, but I suspect that in the end it might come out much the same.

Before implementing and testing (which will take weeks) I wanted to get some second opinions from you guys.

Every visit
I want to start off by putting nginx in front as a proxy to deliver the static content.
Resources:

  • Dynamic: Apache, PHP, MySQL
  • Static: nginx

This will take a lot of load off Apache; a minimal config sketch follows.
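A minimal sketch of that split, assuming Apache is moved to 127.0.0.1:8080 and the static files live under /var/www/static (both are assumptions, adjust to your layout):

```nginx
server {
    listen 80;

    # nginx serves static assets straight from disk
    location ~* \.(png|jpe?g|gif|ico|css|js)$ {
        root    /var/www/static;   # assumed path
        expires 30d;               # let browsers cache repeat visits
    }

    # everything else (the PHP pages) is proxied through to Apache
    location / {
        proxy_pass       http://127.0.0.1:8080;  # assumed Apache port
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```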

Every second
For the script that runs every second I want to set up Node.js (server-side javascript) with nginx in front.
I want to set it up so that jQuery makes a request once a minute, and node.js streams the data to the client every second (see the sketch below). Resources: jQuery, nginx, node.js, MySQL
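A rough sketch of the node.js side, using only the built-in http module; fetchUpdate() is a placeholder for the actual MySQL query:

```javascript
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  var ticks = 0;
  var timer = setInterval(function () {
    res.write(JSON.stringify(fetchUpdate()) + '\n'); // placeholder query
    if (++ticks >= 60) {      // after one minute, end the response;
      clearInterval(timer);   // the client then reconnects
      res.end();
    }
  }, 1000);
  req.on('close', function () { clearInterval(timer); }); // client went away
}).listen(8000);
```

On the client, jQuery's $.ajax fires its success callback only once, when the whole response is in, so the per-second chunks have to be read off a raw XMLHttpRequest (again a sketch; updatePage() is a placeholder):

```javascript
function openStream() {
  var xhr = new XMLHttpRequest(), seen = 0;
  xhr.open('GET', '/stream', true);
  xhr.onreadystatechange = function () {
    // readyState 3 = loading: grab whatever arrived since the last tick
    if (xhr.readyState >= 3 && xhr.responseText.length > seen) {
      updatePage(xhr.responseText.slice(seen)); // placeholder update code
      seen = xhr.responseText.length;
    }
    if (xhr.readyState === 4) openStream(); // minute is over, reconnect
  };
  xhr.send();
}
openStream();
```

One caveat: nginx buffers proxied responses by default, so the streaming location would need proxy_buffering off for the chunks to actually reach the browser every second.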

Average spending for a single user (spending one minute and visiting 3 pages)

  • nginx: 4 requests / responses serving mostly static content (img, css, js)
  • Apache: 3 requests / responses, only the pages
  • PHP: 3 requests / responses, only the pages
  • node.js: 1 request / 60 responses
  • jQuery: 1 request / 60 responses
  • MySQL: 63 requests / responses

Questions

As you can see, in the optimized setup the load from Apache and PHP is lifted and placed on nginx and node.js, which are known for their light footprint and good performance.

But I have my doubts, because there are still two extra programs loaded into memory, and they consume CPU.

So is it better to have fewer programs doing the job, or more? Before I spend a lot of time setting this all up, I would like to know whether it will be worth the while.

Saif Bechan

2 Answers


Note that multiple copies of the same executable or shared library have their code segments shared. That means that even if you have 50 PHP executions going on, the php executable was only read off disk once and appears in memory only once. Each execution has its own data segment, but that will have to happen regardless of how you spread the work across multiple programs.

So the answer, as with so many optimization questions, is "it depends." The easiest way to find out for your specifics is to run a couple of benchmark tests with the configurations you are considering under moderate to heavy load, and compare the total CPU/memory/disk/network loads.
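For example, ApacheBench can drive such a test against each stack in turn (the URL is a placeholder):

```
ab -n 10000 -c 100 http://example.com/page.php
```

Run it against the current setup and against the nginx/node.js setup while watching top or vmstat on the server, and compare the numbers.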

mpez0

Here are a couple of tips to help you out.

  1. Check your URLs with tools.pingdom.com. This will give you an idea of load times and of what is taking the longest.
  2. Take a second look at http://www.webpagetest.org/. This will give you roughly the same info as Pingdom, but also lets you gauge where you can improve performance even more.
  3. Move all of your static content (images, css, javascript, etc.) to a CDN. This will greatly help you out.
  4. If you use ajax libraries like jQuery, Scriptaculous, etc., I recommend offloading those to Google with http://code.google.com/apis/ajaxlibs/ (see the snippet after this list). This also helped me out greatly.
  5. If you haven't done so already, I'd recommend isolating MySQL from your web front end on a separate machine for performance.
  6. Lastly, in my experience with apache2, I use the worker MPM; being thread-based rather than process-based, it is easier on memory than spawning a process per request, and that has helped me some.
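For tip 4, offloading jQuery is a one-line change. A sketch, assuming jQuery 1.4.2 (substitute whatever version you actually use):

```html
<!-- load jQuery from Google's CDN instead of your own box -->
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
```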
sublimegeek