
I want Locust to use all cores on my PC.

I have many Locust classes and I want to use Locust as a library.

Example of my code:

import gevent
from locust.env import Environment
from locust.stats import stats_printer
from locust.log import setup_logging
import time



from locust import HttpUser, TaskSet, task, between


def index(l):
    l.client.get("/")

def stats(l):
    l.client.get("/stats/requests")

class UserTasks(TaskSet):
    # one can specify tasks like this
    tasks = [index, stats]

    # but it might be convenient to use the @task decorator
    @task
    def page404(self):
        self.client.get("/does_not_exist")

class WebsiteUser(HttpUser):
    """
    User class that does requests to the locust web server running on localhost
    """
    host = "http://127.0.0.1:8089"
    wait_time = between(2, 5)
    tasks = [UserTasks]

def worker():
    env2 = Environment(user_classes=[WebsiteUser])
    env2.create_worker_runner(master_host="127.0.0.1", master_port=50013)
    # env2.runner.start(10, hatch_rate=1)
    env2.runner.greenlet.join()

def master():
    env1 = Environment(user_classes=[WebsiteUser])
    env1.create_master_runner(master_bind_host="127.0.0.1", master_bind_port=50013)
    env1.create_web_ui("127.0.0.1", 8089)
    env1.runner.start(20, hatch_rate=4)
    env1.runner.greenlet.join()

import multiprocessing
from multiprocessing import Process
import time


procs = []

proc = Process(target=master)
procs.append(proc)
proc.start()

time.sleep(5)

for i in range(multiprocessing.cpu_count()):
    proc = Process(target=worker)  # instantiating without any argument
    procs.append(proc)
    proc.start()

for process in procs:
    process.join()

This code doesn't work correctly.

(env) ➜  test_locust python main3.py
You are running in distributed mode but have no worker servers connected. Please connect workers prior to swarming.
Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/runners.py", line 532, in client_listener
    client_id, msg = self.server.recv_from_client()
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/rpc/zmqrpc.py", line 44, in recv_from_client
    msg = Message.unserialize(data[1])
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/rpc/protocol.py", line 18, in unserialize
    msg = cls(*msgpack.loads(data, raw=False, strict_map_key=False))
  File "msgpack/_unpacker.pyx", line 161, in msgpack._unpacker.unpackb
TypeError: unpackb() got an unexpected keyword argument 'strict_map_key'
2020-08-13T11:21:10Z <Greenlet at 0x7f8cf300c848: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7f8cf2f531d0>>> failed with TypeError

Unhandled exception in greenlet: <Greenlet at 0x7f8cf300c848: <bound method MasterRunner.client_listener of <locust.runners.MasterRunner object at 0x7f8cf2f531d0>>>
Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 854, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/runners.py", line 532, in client_listener
    client_id, msg = self.server.recv_from_client()
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/rpc/zmqrpc.py", line 44, in recv_from_client
    msg = Message.unserialize(data[1])
  File "/home/alex/projects/performance/env/lib/python3.6/site-packages/locust/rpc/protocol.py", line 18, in unserialize
    msg = cls(*msgpack.loads(data, raw=False, strict_map_key=False))
  File "msgpack/_unpacker.pyx", line 161, in msgpack._unpacker.unpackb
TypeError: unpackb() got an unexpected keyword argument 'strict_map_key'

ACTUAL RESULT: workers do not connect to the master and run users without a master.

EXPECTED RESULT: workers run only with the master.

What is wrong?

  • Thanks for the question! Could you elaborate on what the desired outcome is when running the code, and what happens instead? Please edit the original question if possible. – Nikolay Shebanov Aug 07 '20 at 13:40
  • Great, thanks for the edits! Could you also elaborate on how exactly you run the code, and why using python-multiprocessing is a requirement? If I understand correctly, the preferred way to run multiple Locust workers is to [use a separate OS process](https://docs.locust.io/en/stable/running-locust-distributed.html#example) for each worker in order to fully utilize the CPU. Still, [this question](https://stackoverflow.com/questions/62173274/using-multi-cpu-platforms-with-locust) suggests that your approach is feasible too. – Nikolay Shebanov Aug 13 '20 at 10:05
  • I know how to run Locust in distributed mode using a bash script, but my goal is to work with Locust as a library. – Alexaxndr Lyakhov Aug 13 '20 at 11:17
  • I need to use all CPU cores because I have a high-load performance application. – Alexaxndr Lyakhov Aug 13 '20 at 11:28

3 Answers


You cannot use multiprocessing together with Locust/gevent (or at least it is known to cause issues).

Please spawn separate processes using subprocess or something completely external to Locust. Perhaps you could modify locust-swarm (https://github.com/SvenskaSpel/locust-swarm) so that it can run worker processes on the same machine.
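As a sketch of that approach (the helper names and the default master port here are mine, not part of the answer; it assumes the `locust` executable is on your PATH and your user classes are saved in a locustfile), you can build the worker invocations and launch them with `subprocess` instead of `multiprocessing`:

```python
import multiprocessing
import subprocess


def worker_cmd(locustfile, master_host="127.0.0.1", master_port=5557):
    # CLI invocation for a single worker process.
    return [
        "locust", "-f", locustfile, "--worker",
        "--master-host", master_host,
        "--master-port", str(master_port),
    ]


def spawn_workers(locustfile, count=None):
    # Start one worker subprocess per CPU core (or `count` of them)
    # and return the Popen handles so the caller can wait on them.
    count = count or multiprocessing.cpu_count()
    return [subprocess.Popen(worker_cmd(locustfile)) for _ in range(count)]
```

You would then run the master (in-process via `Environment`, or via the CLI) and call something like `for p in spawn_workers("locustfile.py"): p.wait()`. Because each worker is a fully separate OS process rather than a fork of a gevent-patched interpreter, this avoids the multiprocessing/gevent interaction.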

– Cyberwiz

You can use Locust's built-in master and worker options.

Open a terminal and run the master node for your locust file:

locust -f yourLocustFile.py --master

Then open a new terminal and run these commands to start the workers:

locust -f yourLocustFile.py --worker --master-host=YOUR_HOST_IP &
locust -f yourLocustFile.py --worker --master-host=YOUR_HOST_IP &
locust -f yourLocustFile.py --worker --master-host=YOUR_HOST_IP &

Repeat it for as many processors as you want.
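Instead of repeating the command by hand, a loop over the core count does the same (a sketch; `nproc` is from GNU coreutils and is assumed to be available):

```shell
# Spawn one backgrounded worker per CPU core.
for _ in $(seq "$(nproc)"); do
  locust -f yourLocustFile.py --worker --master-host=YOUR_HOST_IP &
done
```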

Finally, you can use this script to run the Locust load test across multiple processes:

#!/bin/bash

SCRIPT_VERSION=0.0.1
LOCUST_FILE=yourLocustFile.py
MASTER_HOST=YOUR_HOST_IP

echo $SCRIPT_VERSION

locust -f $LOCUST_FILE --master &
MASTER_PID=$!

for i in {1..20}; do
  locust -f $LOCUST_FILE --worker --master-host=$MASTER_HOST &
done

trap 'kill $(jobs -p)' SIGINT SIGTERM EXIT

wait $MASTER_PID
  • You can save the code above in a file with a `.sh` extension and run it. Note that you first need to make the file executable with `chmod +x fileName.sh`, then run it with `./fileName.sh` – Reza Yadegar May 04 '23 at 15:51

I faced the same issue today, and since I didn't find a better option, I've added something like the following:

import subprocess
import sys

import configargparse
from locust import events


@events.init_command_line_parser.add_listener
def add_processes_arguments(parser: configargparse.ArgumentParser):
    processes = parser.add_argument_group("start multiple worker processes")
    processes.add_argument(
        "--processes",
        "-p",
        action="store_true",
        help="start worker processes alongside the master",
        env_var="LOCUST_PROCESSES",
        default=False,
    )


@events.init.add_listener
def on_locust_init(environment, **kwargs):  # pylint: disable=unused-argument
    if (
        environment.parsed_options.processes
        and environment.parsed_options.master
        and environment.parsed_options.expect_workers
    ):
        environment.worker_processes = []
        master_args = [*sys.argv]
        worker_args = [sys.argv[0]]
        if "-f" in master_args:
            i = master_args.index("-f")
            worker_args += [master_args.pop(i), master_args.pop(i)]
        if "--locustfile" in master_args:
            i = master_args.index("--locustfile")
            worker_args += [master_args.pop(i), master_args.pop(i)]
        worker_args += ["--worker"]
        for _ in range(environment.parsed_options.expect_workers):
            p = subprocess.Popen(  # pylint: disable=consider-using-with
                worker_args, start_new_session=True
            )
            environment.worker_processes.append(p)
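
To make the listener's argv handling easier to follow, here is the same split factored into a standalone helper (the function name is mine, for illustration; the logic mirrors the listener above: the locustfile flags move to the workers, everything else stays with the master, and the workers get `--worker` appended):

```python
def split_args(argv):
    # Partition the original command line into (master_args, worker_args).
    master_args = list(argv)
    worker_args = [argv[0]]
    for flag in ("-f", "--locustfile"):
        if flag in master_args:
            i = master_args.index(flag)
            # Pop the flag and its value; they belong to the workers.
            worker_args += [master_args.pop(i), master_args.pop(i)]
    worker_args.append("--worker")
    return master_args, worker_args
```

For example, `split_args(["locust", "-f", "locustfile.py", "--master"])` yields `(["locust", "--master"], ["locust", "-f", "locustfile.py", "--worker"])`.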

You can see the rest of the code here: https://github.com/fruch/hydra-locust/blob/master/common.py#L27

and run it from the command line like this:

locust -f locustfile.py --host 172.17.0.2 --headless --users 1000 -t 1m -r 100 --master --expect-workers 2 --csv=example --processes
– Fruch