I read this post Taurus - API Load Testing. With regard to the provided answer, I have the following question:
For the test I have been running, we have determined it is point (2): specifically, the machine not being able to send requests fast enough. The machine has been allocated 16 GB of memory and 10 CPU cores. I have run various tests with different numbers of users: 1/10/20/50/100/200/400/800. Beyond a certain number of threads, the throughput does not go above 200 hits/s, i.e. it didn't matter if the threads were doubled, the hits/s remained the same. And resource usage was not reaching the allocated limits.
This is the section of the YAML file where the resources are allocated for this test execution:
modules:
  jmeter:
    path: ${JMETER_BIN_PATH}/jmeter
    properties:
      basedir: ${JMETER_HOME}
      output: ${TAURUS_ARTIFACTS_DIR}/output/
    memory-xmx: 15G
    cpu: 8
    detect-plugins: true
What did work was running two instances of the Taurus test concurrently on the same machine. This showed that the server providing the responses was not causing the issue; it was able to cope with more than 400 hits/s (the total from the two tests). However, if the test were to run in distributed mode: there are 5 scripts in total, 3 of which have the following throughput requirements (a sketch of the distributed config I have in mind is below the list):
script 1: 100 hits/s
script 2: 200 hits/s
script 3: 300 hits/s
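My understanding from the Taurus docs is that the JMeter executor can drive remote JMeter servers through a distributed list under the execution item. Something along these lines is what I have in mind (the hostnames and JMX file name are placeholders, and each remote host would need jmeter-server already running):

execution:
- scenario: api_test
  distributed:                    # remote JMeter servers (placeholder hostnames)
  - load-gen-1.example.local
  - load-gen-2.example.local

scenarios:
  api_test:
    script: one_of_the_scripts.jmx   # placeholder path to the JMX file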
I can create separate JMX files for scripts 1 and 2. However, what can I do about script 3? The scripts use a Concurrency Thread Group with a Throughput Shaping Timer. I'm wondering how this will work when it comes to reporting and ensuring the required throughput has been achieved. There is also a requirement to run a stress test of the API server, which will target more than 300 hits/s.
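The only workaround I can think of for script 3 is to parameterize the Throughput Shaping Timer rows with a JMeter property, e.g. ${__P(rps,300)}, and run two identical Taurus instances that each set rps to 150, so that together they produce the 300 hits/s target. A rough sketch of one instance's config (rps is just a property name I made up, script3.jmx is a placeholder, and the passfail criterion is only an approximate per-instance throughput check):

modules:
  jmeter:
    properties:
      rps: 150                # the Shaping Timer reads this via ${__P(rps,300)}; the second instance also uses 150

execution:
- scenario: script_3

scenarios:
  script_3:
    script: script3.jmx       # placeholder

reporting:
- module: passfail
  criteria:
  - hits<140 for 60s, continue as failed   # rough check that this instance sustains roughly 150 hits/s

Reporting is still the open question for me, though: each instance would produce its own console stats and kpi.jtl, so the combined throughput would have to be summed up afterwards, which is exactly the part I'm unsure about.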
Has anyone else faced this issue?