
This question bothers me, and I do not think I am going to find the answer myself, so I thought it might be best to look for help.

When I do:

root@server1:~$ ab -n 20 -c 20 http://www.testserver.com/

Duration is 4 seconds.

When I do (at the same time):

root@server1:~$ ab -n 10 -c 10 http://www.testserver.com/
root@server2:~$ ab -n 10 -c 10 http://www.testserver.com/

The combined duration is 2.5 seconds.

I'd think the test server doesn't care where the requests come from, and I assume that the test server, server1, and server2 aren't hitting their bandwidth caps (the page isn't that heavy).

What is causing this? The answer will probably be really stupid, but I'll be happy regardless.

Aeolun

2 Answers


There are some types of load where increased concurrency leads to a performance drop. The first that comes to mind is sequential HDD reads: you get the best overall throughput from a single thread reading a large file.
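A toy model of why that happens, counting how often the disk head has to reposition (this is an illustrative sketch, not a disk benchmark; the chunk size and file size are made up):

```python
# Toy model: count disk-head repositionings ("seeks") for one
# sequential reader vs two readers whose requests interleave.
CHUNK = 4096
FILE_SIZE = 64 * CHUNK

def count_seeks(access_pattern):
    """access_pattern: iterable of byte offsets requested, in order."""
    seeks, head = 0, 0
    for offset in access_pattern:
        if offset != head:
            seeks += 1          # head must move before reading
        head = offset + CHUNK   # reading advances the head
    return seeks

# One thread: strictly forward through the whole file.
one_reader = range(0, FILE_SIZE, CHUNK)

# Two threads, each scanning half the file, requests interleaved.
half = FILE_SIZE // 2
two_readers = [off for pair in zip(range(0, half, CHUNK),
                                   range(half, FILE_SIZE, CHUNK))
               for off in pair]

print(count_seeks(one_reader))   # → 0: never repositions
print(count_seeks(two_readers))  # → 63: repositions on almost every read
```

The single reader never seeks; the interleaved pair pays a seek for nearly every chunk, which is where the concurrent slowdown on spinning disks comes from.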

You need to investigate your server load and find the bottleneck.

Also, 10 requests are far too few to draw any conclusions. Proper testing requires you to monitor the system while it runs and identify its warm-up period, after which the load factors stabilize. After the warm-up you can run the actual tests, then analyze the results statistically to be sure they are valid.
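A minimal sketch of that statistical check, assuming you have recorded the total duration of several repeated `ab` runs (the numbers below are made up):

```python
import statistics

# Hypothetical total durations (seconds) from repeated ab runs.
warmup_runs = [6.1, 4.9]   # discarded: caches and worker pools still cold
measured_runs = [4.0, 4.2, 3.9, 4.1, 4.0, 3.8, 4.1, 4.2]

mean = statistics.mean(measured_runs)
stdev = statistics.stdev(measured_runs)

# Rule of thumb: if the standard deviation is a large fraction of the
# mean, the runs are too noisy to compare against another setup.
noisy = stdev / mean > 0.10

print(f"mean={mean:.2f}s stdev={stdev:.2f}s noisy={noisy}")
```

Only once the spread between runs is small relative to the mean does a difference like 4 s vs 2.5 s actually tell you something.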

DukeLion

When you run one process, the whole CPU works for just that one process (OK, and some for the kernel). When you run two processes, the kernel must switch between them, and on every switch it has to save some state to RAM and later restore it, which causes some overhead. So: more processes, more overhead.

It also depends on how many Apache workers are currently running. If there are 10 running, they just have to serve the page. If you start 20 parallel connections, 10 more workers have to be started, which uses resources (or the extra requests have to wait).
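For Apache's prefork MPM, that worker pool is controlled by directives like these (illustrative values only, not a tuning recommendation):

```apache
# Hypothetical prefork MPM settings; tune for your own hardware.
StartServers       10   # workers launched at startup
MinSpareServers    10   # keep at least this many idle workers ready
MaxSpareServers    20
MaxClients         150  # hard cap on concurrent requests
```

With 10 spare workers already running, 10 concurrent requests can be served immediately, while 20 force Apache to fork extra children first, which could account for part of the difference you measured.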

mulaz
  • The kernel is smart enough not to switch between processes so often that this switching causes appreciable overhead. – David Schwartz May 20 '12 at 21:48
  • A situation where increasing concurrency reduces overall performance is an exception. More often, performance increases with concurrency on a logarithmic scale. – DukeLion May 21 '12 at 14:17
  • Yes, it increases performance as long as there are free resources. When processes are using 100% of the CPUs, performance starts falling because of the overhead of switching between the (too many) processes. – mulaz May 21 '12 at 14:30