I kept the Node.js server on one machine and the MongoDB server on another machine. The request mix is 70% reads and 30% writes. At an offered load of 100 requests per second the observed throughput is 60 req/sec, and at 200 requests per second it is 130 req/sec. CPU and memory usage are the same in both cases. If the application can serve 130 req/sec, why did it not serve 100 req/sec in the first case, given that CPU and memory utilization are the same? Both machines run Ubuntu Server 14.04.
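For reference, a minimal sketch of the kind of setup being described: an Express app on one machine talking to a MongoDB instance on another, with one read route and one write route. The framework, driver usage, route names, and collection are not stated in the question, so everything below is an assumption for illustration only ('mongo-host' is a placeholder for the second machine).

    // Hypothetical sketch of the assumed setup (Express + the official mongodb driver).
    const express = require('express');
    const { MongoClient } = require('mongodb');

    const MONGO_URL = 'mongodb://mongo-host:27017'; // MongoDB on the other machine
    const app = express();
    app.use(express.json());

    async function main() {
      const client = new MongoClient(MONGO_URL);
      await client.connect();
      const items = client.db('test').collection('items');

      // Read path (roughly 70% of the request mix)
      app.get('/items/:id', async (req, res) => {
        res.json(await items.findOne({ _id: req.params.id }));
      });

      // Write path (roughly 30% of the request mix)
      app.post('/items', async (req, res) => {
        const result = await items.insertOne(req.body);
        res.json({ insertedId: result.insertedId });
      });

      app.listen(3000, () => console.log('listening on 3000'));
    }

    main().catch(console.error);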
- How did you measure throughput? Have you ignored the first few seconds of data to allow for latency? – slebetman Jul 14 '15 at 07:04
- I used JMeter. In the first case there are 10 users in the thread group with a loop count of 10; in the second case there are 20 users with the same loop count of 10. I just took the readings and did not ignore the first few seconds. Can you elaborate on how I should analyse this? @slebetman – djsharma Jul 14 '15 at 17:00
1 Answer
Create the user threads in JMeter, set them to loop forever, and run the test for 300 seconds. Then take the readings.
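In other words: in the Thread Group, set the number of threads (users), tick "Loop Forever", enable the scheduler, and set Duration to 300 seconds. As a rough illustration only, this is approximately what the saved .jmx test plan would contain for that configuration; the exact element names and attributes vary by JMeter version, so treat it as a sketch rather than an authoritative snippet:

    <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Users" enabled="true">
      <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
        <boolProp name="LoopController.continue_forever">false</boolProp>
        <stringProp name="LoopController.loops">-1</stringProp>      <!-- -1 means Loop Forever -->
      </elementProp>
      <stringProp name="ThreadGroup.num_threads">20</stringProp>     <!-- concurrent users -->
      <stringProp name="ThreadGroup.ramp_time">1</stringProp>
      <boolProp name="ThreadGroup.scheduler">true</boolProp>         <!-- enable the scheduler -->
      <stringProp name="ThreadGroup.duration">300</stringProp>       <!-- run for 300 seconds -->
    </ThreadGroup>

Running in non-GUI mode (jmeter -n -t plan.jmx -l results.jtl) and reading the throughput from a Summary or Aggregate Report after the run has reached steady state gives more stable numbers than a short fixed loop count.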

Gaurav Ajmera