
I'm going to be testing several Python APIs with Locust. The backend runs on Google App Engine with automatic scaling, so determining resource utilization isn't a top priority for me. My goal is only to test API response times under a higher number of concurrent requests and to surface any threading issues.

I need to run the tests for 1 million users. I'm going to run the test distributed, following a staircase ramp-up pattern: ramp to 100k users, hold that steady load for 30 minutes, then move to 200k concurrent users, and so on, as represented below:

[Figure: staircase load pattern — steps of 100k concurrent users, each held for 30 minutes]

So I want to ensure I'm making exactly X requests per sec at any given time. My understanding is that with Locust we can only control total number of users and the hatch rate.

So if I wanted to synchronize the requests in such a way that exactly X requests per second are sent, is there a way to achieve that?

I have gone through the Locust documentation and also some threads, but I haven't found anything that satisfactorily answers my question. I don't want to rely on merely knowing there are X users sending requests; I want to ensure the concurrency level is tested correctly at a specified requests-per-second rate.

I'm hoping my question is detailed enough and not missing any crucial information.

Abhijeet
  • it is not possible – Corey Goldberg May 25 '18 at 15:26
  • Okay, thanks for confirming, Corey. If you happen to have any other suggestions that you know would work for my requirement above, please do let me know. I appreciate the response! – Abhijeet May 26 '18 at 16:42
  • 2
    well.. actually it *is* possible.. it's just not implemented in Locust :) – Corey Goldberg May 27 '18 at 11:40
  • But then generally, how can the test results be concrete without controlling RPS? Meaning, otherwise using a low config VM as load generator would generate low RPS as opposed to a better config VM. Having low RPS would result in different response times than the high RPS. – Abhijeet May 28 '18 at 03:14
  • you control RPS indirectly by adjusting number of user hatched. you just can't set RPS directly to a fixed pace. – Corey Goldberg May 28 '18 at 12:21

1 Answer


Try using `constant_pacing`:

https://docs.locust.io/en/stable/api.html#locust.wait_time.constant_pacing

Returns a function that will track the run time of the tasks, and for each time it’s called it will return a wait time that will try to make the total time between task execution equal to the time specified by the wait_time argument.

Choose a large enough value for the `constant_pacing` argument relative to your target response time: if a task takes longer than the pacing interval, the wait time drops to zero and the pacing is no longer constant.

With this wait-time function and the total number of simulated users, you can achieve a roughly constant RPS: target RPS ≈ number of users / pacing interval.

For more advanced, shaped traffic patterns, try the `tick()` method of a custom load shape:

https://docs.locust.io/en/stable/generating-custom-load-shape.html