
I have created a load test for my application using k6 and used a batch request to run all my URLs in parallel. However, some of my URLs are identical and differ only in the JSON query sent in the request body. If I run the URLs in parallel as shown below, k6 would be POSTing or GETing the exact same thing, with no query body to differentiate the requests. Is there a way to do this? Here is an example of my batch request.

group(' elasticsearch', function () {
  group(':8000', function () {
    let responses = http.batch([
      ['POST', 'http://10.1.11.2:8000'],
      ['POST', 'http://10.1.11.2:8000'],
      ..........

Here are examples of my JSON queries and the headers I send with them.

response = http.post(
  "http://10.1.11.2:8000",
  '{"size":0,"query":{"bool":{"must":[],"must_not":[],"filter":[]}},"aggregations":{"1__cfgroup":{"terms":{"field":"measure_name","size":10000,"order":{"_term":"asc"}},"aggregations":{}}}}',
  {
    headers: {
      Host: "10.1.11.2:8000",
      Connection: "keep-alive",
      "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147 Safari/537.36",
      "content-type": "application/json",
      Accept: "*/*",
      Origin: "http://chart.com",
      Referer: "http://chart.com",
      "Accept-Encoding": "gzip, deflate",
      "Accept-Language": "en-US,en;q=0.9",
      "Content-Type": "application/json",
    },
  }
);

response = http.post(
  "http://10.1.11.2:8000",
  '{"size":0,"query":{"bool":{"must":[],"must_not":[],"filter":[]}},"aggregations":{"1__cfgroup":{"terms":{"field":"compliance_year","size":10000,"order":{"_term":"desc"}},"aggregations":{}}}}',
  {
    headers: {
      Host: "10.1.11.2:8000",
      Connection: "keep-alive",
      "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147 Safari/537.36",
      "content-type": "application/json",
      Accept: "*/*",
      Origin: "http://chart.com",
      Referer: "http://chart.com/",
      "Accept-Encoding": "gzip, deflate",
      "Accept-Language": "en-US,en;q=0.9",
      "Content-Type": "application/json",
    },
  }
);

1 Answer

I take it that you want to differentiate the requests in one of the outputs.

Every HTTP request generates a bunch of metrics, and each metric has tags. In your case, you may be able to get by with just the method tag, which is emitted by default.

If that is not enough, you can always add your own custom tags. For this reason (and others) there is also the name tag, which is usually equal to the URL but can be overridden.

The name tag has a special meaning in the k6 cloud UI: it, not the URL, is what is used to group requests, precisely because of cases like yours. You can use it with any other output as well, though.
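For your case, where every request is a POST to the same URL, that could look roughly like the sketch below. Setting tags on the request params is standard k6; the name values here ("measure_name_agg", "compliance_year_agg") are just labels I made up, and the bodies are abbreviated versions of yours.

import http from "k6/http";

export default function () {
    const url = "http://10.1.11.2:8000";
    const params = (name) => ({
        headers: { "Content-Type": "application/json" },
        // "name" overrides the default name tag (the URL); "query" is a custom tag.
        tags: { name: name, query: name },
    });

    // Same URL, different JSON bodies, told apart by their tags.
    http.post(url, '{"size":0,"aggregations":{"1__cfgroup":{"terms":{"field":"measure_name"}}}}', params("measure_name_agg"));
    http.post(url, '{"size":0,"aggregations":{"1__cfgroup":{"terms":{"field":"compliance_year"}}}}', params("compliance_year_agg"));
}

Every metric emitted for those requests then carries the name and query tags, so any output (JSON, InfluxDB, the cloud, and so on) can group or filter on them. http.batch() requests also accept a params object as the fourth array element, so you can tag the batched requests the same way.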

If what you want is for them to be grouped in the end-of-test summary, I would argue you should start using an output ;) but there is a hackish way to get it to work, which is to create thresholds.

import http from "k6/http";

export let options = {
    thresholds: {
        // Defining a threshold on a tagged sub-metric makes k6 track it
        // separately, so each combination gets its own line in the summary.
        'http_req_duration{method:GET}': ['p(90) < 1000'],
        'http_req_duration{method:POST}': ['p(90) < 1000'],
        'http_reqs{method:GET}': ['count < 1000'],
        'http_reqs{method:POST}': ['count < 1000'],
    },
};

export default function () {
    http.get('https://test.k6.io/');
    http.post('https://test.k6.io/');
    http.post('https://test.k6.io/');
}

running (00m00.8s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  00m00.7s/10m0s  1/1 iters, 1 per VU


    data_received..............: 38 kB 49 kB/s
    data_sent..................: 951 B 1.2 kB/s
    http_req_blocked...........: avg=127.15ms min=5.72µs   med=6.14µs   max=381.46ms p(90)=305.16ms p(95)=343.31ms
    http_req_connecting........: avg=37.27ms  min=0s       med=0s       max=111.82ms p(90)=89.45ms  p(95)=100.64ms
    http_req_duration..........: avg=114.1ms  min=113.72ms med=113.9ms  max=114.68ms p(90)=114.53ms p(95)=114.61ms
    ✓ { method:GET }...........: avg=114.68ms min=114.68ms med=114.68ms max=114.68ms p(90)=114.68ms p(95)=114.68ms
    ✓ { method:POST }..........: avg=113.81ms min=113.72ms med=113.81ms max=113.9ms  p(90)=113.88ms p(95)=113.89ms
    http_req_receiving.........: avg=162.07µs min=132.62µs med=174.32µs max=179.28µs p(90)=178.29µs p(95)=178.78µs
    http_req_sending...........: avg=128.62µs min=44.28µs  med=44.83µs  max=296.75µs p(90)=246.37µs p(95)=271.56µs
    http_req_tls_handshaking...: avg=89.22ms  min=0s       med=0s       max=267.67ms p(90)=214.13ms p(95)=240.9ms
    http_req_waiting...........: avg=113.81ms min=113.5ms  med=113.72ms max=114.21ms p(90)=114.11ms p(95)=114.16ms
    http_reqs..................: 3     3.800233/s
    ✓ { method:GET }...........: 1     1.266744/s
    ✓ { method:POST }..........: 2     2.533488/s
    iteration_duration.........: avg=724.34ms min=724.34ms med=724.34ms max=724.34ms p(90)=724.34ms p(95)=724.34ms
    iterations.................: 1     1.266744/s

As you can see, you need a threshold for every metric/tag combination you want displayed, and this adds computational overhead since the thresholds have to be evaluated throughout the test. For a small case like this example it is fine, but if you need it for something larger, doing it with an output is much better and more versatile.
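If you do want per-query lines in the summary, the same threshold trick combines with the name tag from the sketch above (again, those name values are just my made-up examples):

export let options = {
    thresholds: {
        // One sub-metric per name tag value, i.e. per JSON query.
        'http_req_duration{name:measure_name_agg}': ['p(90) < 1000'],
        'http_req_duration{name:compliance_year_agg}': ['p(90) < 1000'],
    },
};

Otherwise, running with an output, for example k6 run --out json=results.json script.js, and filtering on the tags there scales better.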