
In my testing, both min_wait and max_wait are set to 1 second, and I set users to 100, so I expect the req/s to be close to 100.

I know Locust has to wait for the server to respond before sending the next request. Even so, if the server responds quickly, say in 20ms, the resulting TPS should be close to 100, maybe 92.
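That expectation can be checked with back-of-the-envelope arithmetic (the 20ms response time is the assumed figure from above):

```python
# Each simulated user loops: send request, wait for the response, then
# sleep for the wait time. Throughput is therefore roughly
# users / (wait_time + response_time).
users = 100
wait = 1.0           # min_wait == max_wait == 1000 ms
response = 0.020     # assumed 20 ms server response
print(round(users / (wait + response), 1))  # → 98.0
```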

But in actuality it is 10, as the following screenshot shows:

[screenshot of Locust statistics]

What am I missing?

My code is below:

from locust import HttpLocust, TaskSet, task


class UserBehavior(TaskSet):

    @task(1)
    def list_teacher(self):
        self.client.get("/api/mgr/sq_mgr/?action=list_teacher&pagenum=1&pagesize=100")

    @task(1)
    def list_course(self):
        self.client.get("/api/mgr/sq_mgr/?action=list_course&pagenum=1&pagesize=20")


class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 1000
Jcyrss
  • This is the exact same question as https://stackoverflow.com/q/53737188/10653038 – user372895986472 Dec 13 '18 at 01:47
  • The answer in that post does not convince me. In my testing, you can see from the average response time statistics that my server responds very quickly. But the TPS is still much lower than I expected; even if it isn't exactly 100 TPS, it should be around 90. Why 10 TPS? – Jcyrss Dec 14 '18 at 06:13
  • What's your spawn rate? – Siyu Dec 14 '18 at 07:39
  • Who knows @Jcyrss - it could be a lot of reasons. Most people assume their service is much faster and more scalable than it is (almost every time I see a question like this as a Locustio maintainer, the service ends up being the problem). – user372895986472 Dec 16 '18 at 21:56
  • @Siyu, I tried many hatch rates, from 10 to 100 per second, and the TPS never got close to 100, even after a long time. – Jcyrss Mar 28 '19 at 03:21

1 Answer


I replicated your scenario with a minimal service that answers after a 10ms sleep, and was able to reach 98 req/s.

Name                                                          # reqs      # fails     Avg     Min     Max  |  Median   req/s
--------------------------------------------------------------------------------------------------------------------------------------------
 POST call1                                                      1754     0(0.00%)      20      13      34  |      20   47.90
 POST call2                                                      1826     0(0.00%)      20      13      30  |      20   50.10
--------------------------------------------------------------------------------------------------------------------------------------------
 Total                                                           3580     0(0.00%)                                      98.00

So the parameters are fine.

What could be the reasons for your lower numbers:

  • The service itself is slow to answer.
  • The amount of parallel requests is limited. Maybe you have a thread pool of size 5 on the critical path; that would cap your rps.
  • Network latency is not accounted for. Locust starts the wait only after task completion, so if the service answers in 10ms but you have a 90ms round trip, you get 100ms end to end. I'd bet on this one, especially if you're load testing a server from your local machine.
  • Locust itself might be slow. It's Python, after all. For me it caps out at ~550 rps (pretty low, I'd say), because the IO event loop saturates one core.
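The latency point can be made concrete with the same cycle-time arithmetic (the 90ms round trip is the assumed figure from the bullet above):

```python
# Locust starts a user's wait only after the response arrives, so network
# latency stretches each user's cycle and lowers aggregate throughput.
users, wait = 100, 1.0
server_only = 0.010             # 10 ms service time, no network
with_network = 0.010 + 0.090    # plus an assumed 90 ms round trip
print(round(users / (wait + server_only)))    # → 99
print(round(users / (wait + with_network)))   # → 91
```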
Imaskar