
We use JMeter for load testing our application. If I understand correctly, JMeter uses trivial logic to calculate request execution time: it takes the difference between the time the request started and the time it ended.

So if my processor is very busy, this time increases, even though the server is running on another machine.

To test this, I wrote a program that spawns a lot of threads and does some calculations in them (a sketch of such a program is shown below). Then I ran the JMeter tests on an idle system and on a system with my program running.
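
A CPU-burner along these lines can be as simple as the following (a minimal sketch in Java; the class name, thread count, and workload are illustrative, not the actual test program):

    // Hypothetical CPU-burner: keeps the cores busy with pointless math.
    public class CpuBurner {
        public static void main(String[] args) {
            // Oversubscribe the CPU so every core stays saturated.
            int threads = Runtime.getRuntime().availableProcessors() * 2;
            for (int i = 0; i < threads; i++) {
                new Thread(() -> {
                    double x = 0;
                    while (true) {
                        x += Math.sqrt(Math.random()); // busy-loop calculation
                    }
                }).start();
            }
        }
    }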

The times in the Aggregate Graph are very different.

Is there a solution for this? Or should we rewrite our tests with another tool (like Gatling)?

TemaTre

1 Answer


As per the JMeter glossary:

Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received.

If you can come up with a better way of measuring the response time, feel free to contribute.
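
To make those start/stop points concrete, this is roughly what that definition means in plain Java (an illustrative sketch only, not JMeter's actual code; the URL is a placeholder):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ElapsedTimeDemo {
        public static void main(String[] args) throws Exception {
            // Clock starts just before the request is sent...
            long start = System.nanoTime();

            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://example.com/").openConnection();
            try (InputStream in = conn.getInputStream()) {
                byte[] buffer = new byte[8192];
                while (in.read(buffer) != -1) {
                    // drain the response fully
                }
            }

            // ...and stops just after the last byte of the response is read.
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Elapsed: " + elapsedMs + " ms");
        }
    }

If the load generator's CPU is saturated, the measuring thread can be scheduled late at either of those two points, which is why the reported times grow even though the server sits on another machine.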

Context switching is an expensive operation for any CPU, so I would recommend not having any other applications running on the machine where JMeter runs, to avoid mutual interference. Moreover, if JMeter cannot send requests fast enough, you may get false-negative results.

I normally try to keep CPU and memory usage below 80% of the maximum available capacity. If I need to scale the test to generate more load and a single machine cannot produce it, I go for Distributed Testing.
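
One way to keep an eye on that 80% threshold on the load generator itself is to sample the OS load average and JVM heap usage (a minimal sketch; the class name, polling interval, and thresholds are assumptions, JVM heap is only a stand-in for overall memory, and getSystemLoadAverage() returns -1 on platforms that don't support it, such as Windows):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public class LoadGeneratorMonitor {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            Runtime rt = Runtime.getRuntime();

            while (true) {
                // 1-minute load average normalized per core (-1 if unsupported)
                double loadPerCore = os.getSystemLoadAverage() / os.getAvailableProcessors();
                // JVM heap usage as a fraction of the configured maximum
                double heapUsed = (double) (rt.totalMemory() - rt.freeMemory()) / rt.maxMemory();

                System.out.printf("load/core: %.2f, heap: %.0f%%%n", loadPerCore, heapUsed * 100);
                if (loadPerCore > 0.8 || heapUsed > 0.8) {
                    System.out.println("WARNING: load generator above 80%, results may be skewed");
                }
                Thread.sleep(5_000);
            }
        }
    }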

Dmitri T