
I want to run a stress test by continuously increasing load until the response times become unacceptable. The condition that I need to check against is that 95% of all the requests take no longer than 1 second.

  1. Can I determine this dynamically at runtime and if yes, how?
  2. How can I stop the test when this condition is achieved?

I looked at the AutoStop Listener plugin, but it does not seem to have what I need for checking this condition.

JustNatural

3 Answers


The easiest option is to go for the Taurus framework, which provides a flexible and powerful Pass/Fail Criteria subsystem.

Example Taurus YAML file:

execution:
- scenario: simple

scenarios:
  simple:
    script: /path/to/your/test.jmx

reporting:
- module: passfail
  criteria:
  - p95.0>1s, stop as failed  

If the 95th percentile of all requests exceeds 1 second, Taurus will stop the test and return a non-zero exit status code, which is a scripting/CI-friendly approach.

It is also possible in JMeter itself with some JSR223 scripting: you could periodically read the .jtl results file, calculate the percentile, check it against the anticipated value, and so on.
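For illustration only, here is a rough sketch of that approach (my own addition, not from the original answer): a JSR223 Sampler that could run periodically in a separate "watchdog" Thread Group, assuming a CSV .jtl with the default header and a placeholder file path:

// hypothetical path - point this at the -l results file of the running test
def jtlFile = new File('/path/to/your/results.jtl')

if (jtlFile.exists()) {
    def lines = jtlFile.readLines()
    // default CSV .jtl: first line is the header, the "elapsed" column holds response times in ms
    def header = lines ? lines[0].split(',') : new String[0]
    int elapsedIndex = header.findIndexOf { it == 'elapsed' }
    if (lines.size() > 1 && elapsedIndex >= 0) {
        // naive comma split - good enough for a sketch, not for labels that contain commas
        def elapsedTimes = lines.drop(1)
                .collect { it.split(',') }
                .findAll { it.size() > elapsedIndex }
                .collect { it[elapsedIndex].toLong() }
                .sort()
        if (!elapsedTimes.isEmpty()) {
            long p95 = elapsedTimes[(int) Math.ceil(elapsedTimes.size() * 0.95) - 1]
            log.info("Current 95th percentile: ${p95} ms over ${elapsedTimes.size()} samples")
            if (p95 > 1000) {
                log.info('95th percentile is above 1 second - stopping the test')
                ctx.getEngine().askThreadsToStop()   // graceful stop of the whole test
            }
        }
    }
}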

Dmitri T
  • Thanks Dmitri! I will definitely have a look at Taurus to see how it works and how it can help me. For the other solution, "periodically read the .jtl results file", can these results still be fetched if the file is still being written to by JMeter during script runtime? – JustNatural Oct 17 '21 at 10:49
  • Yep, why not. The data is flushed into the file periodically; if you want to always have "fresh" results you can set [`jmeter.save.saveservice.autoflush` JMeter Property](https://jmeter.apache.org/usermanual/properties_reference.html#results_file_config) to `true` via [*user.properties* file or `-J` command-line argument](https://jmeter.apache.org/usermanual/properties_reference.html) – Dmitri T Oct 17 '21 at 17:39
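For illustration (my own addition; the test plan and results file names are hypothetical), the property from the comment above can be set either in user.properties or on the command line:

# user.properties
jmeter.save.saveservice.autoflush=true

# or as a command-line override when starting the test in non-GUI mode
jmeter -n -t test.jmx -l results.jtl -Jjmeter.save.saveservice.autoflush=true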

Updated the answer to fix a bug in the code and to improve the logging

This could be another solution.


  1. Initialize the properties: add a setUp Thread Group, add a JSR223 Sampler to it, and then add the following code to initialize the properties.
props.put("total_requests","0")
props.put("total_requests_exceeding_limit","0")
  2. Add a JSR223 Post Processor at the top level.

This will ensure the element is applied to all the samplers in the test plan.

  3. Add the following parameters to the JSR223 Post Processor:
${__P(stop_test_exceeding_percentile_error,true)} ${__P(percentage_limit,95)} ${__P(response_time_limit_in_seconds,1)}

  4. Add the following script to the JSR223 Post Processor.
long rampupTime = 60000                // ramp-up period in milliseconds; should match the Thread Group ramp-up
long requestCountToStartChecking = 50  // minimum number of samples before the percentage check kicks in

long startTimeInMillis = vars.get("TESTSTART.MS").toLong()
long currentTimeInMillis = System.currentTimeMillis()
long currentTestDurationInMillis = currentTimeInMillis - startTimeInMillis

log.info("currentTestDurationInMillis ${currentTestDurationInMillis}")

if (args[0].toBoolean() && currentTestDurationInMillis > rampupTime) {
    def total_requests_exceeding_limit = 0
    def percentageOfRequestExceedingLimit = 0

    int percentage_limit = args[1].toInteger()
    int response_time_limit_in_seconds = args[2].toInteger()

    long response_time_limit_in_milliseconds = response_time_limit_in_seconds * 1000

    // count every sample processed after the ramp-up period
    def totalRequests = props.get("total_requests").toInteger() + 1
    props.put("total_requests", totalRequests.toString())

    if (prev.getTime() > response_time_limit_in_milliseconds) {
        // the current sample exceeded the limit: update the counter and re-check the percentage
        total_requests_exceeding_limit = props.get("total_requests_exceeding_limit").toInteger() + 1
        percentageOfRequestExceedingLimit = ((total_requests_exceeding_limit / totalRequests) * 100).round()

        if (percentageOfRequestExceedingLimit > percentage_limit && totalRequests > requestCountToStartChecking) {
            log.info("Percentage of requests exceeding ${response_time_limit_in_milliseconds} ms has reached ${percentageOfRequestExceedingLimit}%")
            log.info("Stopping the test")
            prev.setStopTest(true)
            log.info("Stopped the test")
        }
        props.put("total_requests_exceeding_limit", total_requests_exceeding_limit.toString())

    } else {
        // the current sample is within the limit: only recalculate the percentage for logging
        total_requests_exceeding_limit = props.get("total_requests_exceeding_limit").toInteger()
        percentageOfRequestExceedingLimit = ((total_requests_exceeding_limit / totalRequests) * 100).round()
    }

    log.info("totalRequests ${totalRequests} total_requests_exceeding_limit ${total_requests_exceeding_limit} percentageOfRequestExceedingLimit ${percentageOfRequestExceedingLimit}")
} else {
    // still inside the ramp-up period (or the check is disabled): exclude this sample from the results
    prev.setIgnore()
}

  1. The script ignores the samplers during the ramp-up period. At the moment, the ramp-up period has to be set in the script (the rampupTime variable).
  2. The response time threshold, the percentage limit, etc. can be configured through the properties/parameters.
  3. The percentage is only checked after a predefined number of requests following the ramp-up period, to avoid stopping the test as soon as the first response time exceeds the configured value.
  4. The script can be improved further to wait for a predefined period with the response percentage exceeding the limit.
  5. Synchronization issues still need to be handled, since multiple threads read and update the same property values. You can use the Inter-Thread Communication plugin to ensure the same values are not read by different threads, or keep the counters thread-safe as sketched below.
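As a minimal illustration of the last point (my own sketch, not part of the original answer), one option is to store java.util.concurrent.atomic.AtomicLong counters in the JMeter properties instead of strings, so increments and reads stay consistent across threads:

import java.util.concurrent.atomic.AtomicLong

// setUp Thread Group / JSR223 Sampler: initialize the atomic counters once
props.put("total_requests", new AtomicLong(0))
props.put("total_requests_exceeding_limit", new AtomicLong(0))

// JSR223 Post Processor: update and read the counters atomically
// (the 1000 ms threshold is hardcoded here only to keep the sketch short)
long totalRequests = ((AtomicLong) props.get("total_requests")).incrementAndGet()
AtomicLong exceedingCounter = (AtomicLong) props.get("total_requests_exceeding_limit")
long exceeding = prev.getTime() > 1000 ? exceedingCounter.incrementAndGet() : exceedingCounter.get()
long percentageExceeding = Math.round(exceeding * 100.0 / totalRequests)
log.info("Requests exceeding the limit: ${exceeding} of ${totalRequests} (${percentageExceeding}%)")

Whether this is preferable to the Inter-Thread Communication plugin is a design choice; atomic counters avoid an extra plugin dependency while still sharing state through JMeter properties.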
Janesh Kodikara
  • It's awesome that you wrote that! I tried your solution, but it does not seem to work as intended. In the GUI mode, I created a scenario with 2000 threads, a 2h script duration, a 2h ramp-up time for the whole script duration and 2 samplers under a random controller to test your script functionality. There is a discrepancy between the values printed in the logs and the values shown in the Aggregate Report. The aggregate report is showing the 95% line with well over 1000ms while the values from the log are not even close to 95%. – JustNatural Oct 21 '21 at 19:09
  • Let me check and get back to you. – Janesh Kodikara Oct 25 '21 at 03:07
  • We need to ignore the results during the ramp-up period. Also, the request count check that decides when to start evaluating the percentage was moved into the nested if block. The logging was improved to display total_requests_exceeding_limit and percentageOfRequestExceedingLimit even when the current request does not exceed the limit. – Janesh Kodikara Oct 25 '21 at 03:47

To stop the test if a single sampler takes longer than 1 second, use a Duration Assertion with 1000 ms:

tests that each response was received within a given amount of time

You can stop the test if the assertion fails:

In the Thread Group, select Stop Test / Stop Test Now under "Action to be taken after a Sampler error":

Determines what happens if a sampler error occurs, either because the sample itself failed or an assertion failed

Ori Marko
  • Thanks, but I am interested in some way to evaluate all of the samplers at runtime, so that I can determine whether 95% of them are above or below 1 second and, based on that outcome, stop or continue the test. The solution which you presented doesn't seem to fit my needs. – JustNatural Oct 17 '21 at 10:42