
I have set up a cluster of 3 nodes running a LAMP application, with HAProxy doing the load balancing. Now I would like to optimize and load test the system. For that I am using jmeter-ec2, which spins up 15 AWS t1.micro instances in the Ireland region running a JMeter test against the cluster, which is located in a dedicated data center in Germany.

The problem is that the servers barely break a sweat at a load of about 0.5, while JMeter only reports about 70 tps. Now I am wondering where the bottleneck is and why the system does not serve more tps.

I am looking for help on how to approach this problem in order to tune one service after the other. There is MySQL Galera, Apache, Nginx and Solr running to serve the app, all with default configuration settings. The cluster consists of 3 new bare-metal nodes with 32 GB RAM and quad Xeon CPUs, interconnected via gigabit LAN.

Thank you in advance for any helpful input on how to systematically tune/configure the system.

merlin
  • What AWS instance type did you use for the test? – hookenz Sep 27 '15 at 21:51
  • Is your 3 node LAMP cluster also on AWS or is it local? – hookenz Sep 27 '15 at 21:52
  • If it's local, how far away is it from the AWS cluster? – hookenz Sep 27 '15 at 21:52
  • So many questions I could ask. You need to give more information. The test clients need to be as near as possible to the LAMP cluster, and they need to be able to execute in parallel with multiple threads. – hookenz Sep 27 '15 at 21:54
  • I updated the question with more info. The cluster is bare metal and located in a separate data center, which is not AWS. – merlin Sep 27 '15 at 22:04

2 Answers


Try removing parts of the system to find the bottleneck. 15 test servers sounds like an awful lot! You should be able to get thousands of TPS out of a single test unit. You are not asking HAProxy to wait for a response, are you? I.e. utilising the maxconn and queueing functionality? Like I said, try simplifying, but if you do think it is HAProxy then please post the configuration.
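For example, a quick way to compare the HAProxy path with a single backend is a small stdlib-only Python script along these lines. It is only a sketch: the URL, thread count and request count below are placeholders, not values taken from the question; point TARGET at one backend node directly, then at the HAProxy frontend, and compare.

```python
#!/usr/bin/env python3
# Rough single-machine throughput check using only the standard library.
# TARGET is a placeholder hostname: run once against a backend node directly
# and once against the HAProxy frontend, then compare the req/s figures.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://backend-1.example/"   # placeholder URL
THREADS = 50                           # match the JMeter thread count
REQUESTS = 2000

def fetch(_):
    t0 = time.time()
    try:
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            resp.read()
            return time.time() - t0, resp.status
    except Exception:
        return time.time() - t0, None

start = time.time()
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    results = list(pool.map(fetch, range(REQUESTS)))
elapsed = time.time() - start

ok = sum(1 for _, status in results if status == 200)
avg_ms = sum(t for t, _ in results) / len(results) * 1000
print(f"{ok}/{REQUESTS} OK in {elapsed:.1f}s "
      f"-> {REQUESTS / elapsed:.1f} req/s, avg {avg_ms:.0f} ms")
```

If the numbers are roughly the same with and without HAProxy in the path, the proxy is probably not the bottleneck and you can look at the test setup itself.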

  • The HAProxy conf, like all the other confs, is pretty much default configuration. I suspect that the real problem is the test itself. I am running the test against / and two other URLs with 50 threads. I am wondering how I could generate more load, so I tried it with 15 servers. – merlin Sep 27 '15 at 22:08

To me, the testing clients are quite obviously the problem.

  1. You're using t1.micro instances. They are basically free, and free for a reason. Switch to at least an m3.medium, large or xlarge for testing. You can shut them down when finished.

The Amazon documentation for T1 Micro Instances sums it up well:

"Spiky performance",

"Designed to support 10's of requests per minute"

But for a benchmark, you want to try to overwhelm your server. You want hundreds of requests per second. That's a bit more than these can provide.

  2. The cluster under test is not local to the test clients, which introduces extra latency. Your cluster is 1,600 km away in another country. That isn't going to help either. So make sure you note point 4, and see the latency check sketched after this list.

  3. Make sure your test clients are running multithreaded.

  4. Use EU (Ireland) - eu-west-1, which is a better choice for testing the endpoint.
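To put a number on point 2, a stdlib-only sketch like the following, run from one of the test clients, shows how much wall-clock time each request spends on the wire before the servers do any work at all. The hostname is a placeholder, not taken from the question, and each sample includes a fresh TCP handshake.

```python
#!/usr/bin/env python3
# Quick check of the per-request network latency the test clients pay.
# Each sample times one TCP connection setup plus one HEAD request.
import time
import http.client

HOST = "cluster.example.de"   # placeholder for the HAProxy frontend in Germany
SAMPLES = 10

times_ms = []
for _ in range(SAMPLES):
    conn = http.client.HTTPConnection(HOST, timeout=10)
    t0 = time.time()
    conn.request("HEAD", "/")
    conn.getresponse().read()
    times_ms.append((time.time() - t0) * 1000)
    conn.close()

print(f"min {min(times_ms):.0f} ms, avg {sum(times_ms) / len(times_ms):.0f} ms")
# A synchronous thread that spends e.g. 40 ms per request on the wire can
# never exceed ~25 requests/s on its own, no matter how idle the servers are.
```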

hookenz
  • I changed points 1, 2 and 4. Running on 5 c3.large instances from Frankfurt. Same results unfortunately: [FINAL RESULTS] total count: 20599, overall avg: 729 (ms), overall tps: 68.6 (p/sec), recent tps: 68.0 (p/sec). It appears that the cluster starts strong, with high CPU load, and then drops dramatically, and so does the number of requests. How do I make sure the clients run multithreaded? I assume this is the case. – merlin Sep 28 '15 at 10:13
  • OK, regarding point 3, it appears that may be the default. I've never used JMeter, so I'm not sure if that is the case. But with other tools it isn't. You could also try httpbench and a few others as a comparison. JMeter will give you a result, but it's only from the perspective of one JMeter instance. – hookenz Sep 28 '15 at 21:14
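A back-of-the-envelope check of the figures reported above, assuming the 50 threads mentioned in the comment under the first answer is the total effective concurrency: with synchronous test threads, throughput is capped at roughly threads divided by average response time, and that ceiling matches the reported numbers almost exactly.

```python
# Rough sanity check of the reported results; the 50-thread total concurrency
# is an assumption taken from an earlier comment, not a confirmed figure.
threads = 50            # assumed total concurrent JMeter threads
avg_response_s = 0.729  # "overall avg: 729 (ms)" from the comment above
print(threads / avg_response_s)  # ~68.6 req/s, matching the reported 68.6 tps
```

If that assumption holds, the client-side thread count, not the cluster, is the ceiling, and raising the concurrency (or reducing the per-request time) should raise the tps roughly in proportion until a server-side resource finally saturates.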