I wanted to do some load testing with Locust, in order to find out how my system responds to parallel requests.
Let's say my system gets 10 requests at the same time. My first reflex was to measure the response time for each of those 10 requests. I wrote a simple locustfile.py
to measure that:
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    def on_start(self):
        pass

    def on_stop(self):
        pass

    @task(1)
    def content_query(self):
        self.client.get('/content')

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000  # wait exactly 1 s between tasks
    max_wait = 1000
I used this file and spawned 10 locusts, and got the per-request measurements I wanted.
But then I realized that what I actually want to know is how fast my system replies to ALL of those 10 requests. If each request takes 20 ms to get a reply, I don't know whether:
- The whole thing took 20 ms because each request was treated in parallel
- The whole thing took 200 ms because each request was treated successively
In order to measure this, I had the following idea: I want my system to be under a load of 10 requests at all times for, say, 1 hour, and measure how many requests were handled during that time.
To put it another way, as soon as one of the 10 requests is successful, another request should be executed to take its place.
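To make the behaviour I'm after concrete, here is a plain-Python sketch (outside of Locust) using a thread pool: it keeps exactly N requests in flight and starts a replacement the moment one finishes. `do_request` is a hypothetical placeholder for the real HTTP call against my system.

```python
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def do_request():
    # Placeholder for a real HTTP call, e.g. requests.get(base_url + '/content').
    # Here it just simulates a 20 ms response time.
    time.sleep(0.02)

def constant_concurrency(n_parallel=10, duration=3600.0):
    """Keep exactly n_parallel requests in flight for `duration` seconds
    and return how many requests completed in that time."""
    completed = 0
    deadline = time.monotonic() + duration
    with ThreadPoolExecutor(max_workers=n_parallel) as pool:
        pending = {pool.submit(do_request) for _ in range(n_parallel)}
        while time.monotonic() < deadline:
            # Block until at least one in-flight request finishes...
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            completed += len(done)
            # ...and immediately start one replacement per finished request.
            for _ in done:
                pending.add(pool.submit(do_request))
        wait(pending)  # let the last in-flight requests drain
        completed += len(pending)
    return completed
```

This is exactly the closed-loop load model I want, but expressed outside of Locust; the question is how to express the same thing with Locust itself.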
How can I do that with Locust?
I had the idea of using a request success handler, as described in the Locust documentation:
from locust import HttpLocust, TaskSet, task, events

def my_success_handler(request_type, name, response_time, response_length, **kw):
    print("Success")

events.request_success += my_success_handler
This would allow me to know when one request is successful, but then what? I am not sure whether it is possible to inform a specific locust that its request was successful.
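The most I can see the hook giving me is a global throughput figure: increment a counter on every success and divide by the elapsed wall-clock time. A minimal sketch of that idea (using a stand-in EventHook class so it runs without Locust installed; with Locust you would attach the handler to `events.request_success` instead):

```python
class EventHook:
    """Stand-in mimicking Locust's EventHook: supports `+=` and fire()."""
    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):
        self._handlers.append(handler)
        return self

    def fire(self, **kwargs):
        for handler in self._handlers:
            handler(**kwargs)

# With Locust this object is provided as locust.events.request_success.
request_success = EventHook()

completed = 0

def count_success(request_type, name, response_time, response_length, **kw):
    global completed
    completed += 1

request_success += count_success

# Simulate 10 successful requests firing the event:
for _ in range(10):
    request_success.fire(request_type="GET", name="/content",
                         response_time=20, response_length=128)

# completed is now 10; throughput = completed / wall-clock duration of the run
```

But this only counts successes across all locusts; it still doesn't tell a specific locust that its own request finished, which is what the "replace a finished request immediately" model seems to need.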