Multiple scenarios & metrics per scenario

Hi,
I have created a couple of scenarios, and while this is working fine, I'm missing metrics per scenario.
I tried reading your docs, where you suggest setting tags, but they don't go into detail on how to actually use them in a multi-scenario situation (see tags-and-groups#tags).
The output is not helpful either, as all of the newly created metrics contain the same data.

import http from 'k6/http';
import { check, sleep } from 'k6';
import { Trend } from 'k6/metrics';

let myTrend1506 = new Trend('my_trend_1506');
let myTrend603 = new Trend('my_trend_603');
let myTrend83 = new Trend('my_trend_83');

export let options = {
    scenarios: {
        GET_1506_Bookings: {
          executor: 'constant-vus',
          vus: 10,
          duration: '30s',
          gracefulStop: '20s',
          exec: 'liveResSearchGetBookings',
          tags: { my_tag: 'GET_1506_Bookings' },
          env: { DATE_SEARCH: '2020-10-29' },
        },
        GET_603_Bookings: {
          executor: 'constant-vus',
          vus: 30,
          duration: '30s',
          gracefulStop: '20s',
          exec: 'liveResSearchGetBookings',
          tags: { my_tag: 'GET_603_Bookings' },
          env: { DATE_SEARCH: '2020-11-25' },
        },
        GET_83_Bookings: {
          executor: 'constant-vus',
          vus: 60,
          duration: '30s',
          gracefulStop: '20s',
          exec: 'liveResSearchGetBookings',
          tags: { my_tag: 'GET_83_Bookings' },
          env: { DATE_SEARCH: '2020-05-20' },
        },
      },
}

export function setup() {
    const params = { headers: { 'Content-Type': 'application/json' } };
    let request_body = {
        "username": "username",
        "password": "password"
    }

    let auth_api = "token_url"
    // The body must be serialized to JSON, and headers belong in the params object
    let res = http.post(auth_api, JSON.stringify(request_body), params);

    // Parse the JSON response body to get the token
    return { data: res.json().accessToken };
  }

export function liveResSearchGetBookings(data) {
    const options = {
        headers: {
            Authorization: `Bearer ${data}`,
        },
    };
    let hostUrl = 'https://host_url'
    let searchUrl = `/api/v1/events/search?page=1&pageSize=100&arrivalMin=${__ENV.DATE_SEARCH}`
    let theCall = http.get(hostUrl+searchUrl, options);
    check(theCall, {
        'status is 200': (r) => r.status === 200
    });
    
    myTrend1506.add(theCall.timings.duration, { my_tag: 'GET_1506_Bookings' });
    myTrend603.add(theCall.timings.duration, { my_tag: 'GET_603_Bookings' });
    myTrend83.add(theCall.timings.duration, { my_tag: 'GET_83_Bookings' });

    sleep(1);
}

The additional metrics in the report are:
my_trend_1506…: avg=129.301266 min=41.9415 med=73.4241 max=970.1515 p(90)=295.72646 p(95)=403.93271
my_trend_603…: avg=129.301266 min=41.9415 med=73.4241 max=970.1515 p(90)=295.72646 p(95)=403.93271
my_trend_83…: avg=129.301266 min=41.9415 med=73.4241 max=970.1515 p(90)=295.72646 p(95)=403.93271

The goal is to retrieve metrics unique to each scenario.
Any help or suggestion is appreciated.

This is because all of your scenarios are executing the same function, liveResSearchGetBookings, and in it you update the three custom metrics at the same time. Instead, consider adding them to a map and updating only the metric that corresponds to the scenario you are running, something like this:

import http from 'k6/http';
import { check, sleep } from 'k6';
import { Trend } from 'k6/metrics';

export let options = {
    scenarios: {
        GET_1506_Bookings: {
            executor: 'constant-vus',
            vus: 10,
            duration: '30s',
            gracefulStop: '20s',
            exec: 'liveResSearchGetBookings',
            env: { DATE_SEARCH: '2020-10-29' },
        },
        GET_603_Bookings: {
            executor: 'constant-vus',
            vus: 30,
            duration: '30s',
            gracefulStop: '20s',
            exec: 'liveResSearchGetBookings',
            env: { DATE_SEARCH: '2020-11-25' },
        },
        GET_83_Bookings: {
            executor: 'constant-vus',
            vus: 60,
            duration: '30s',
            gracefulStop: '20s',
            exec: 'liveResSearchGetBookings',
            env: { DATE_SEARCH: '2020-05-20' },
        },
    },
    // So we get count in the summary, to demonstrate different metrics are different
    summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(90)', 'p(95)', 'p(99)', 'count'],
}

let customMetrics = {};

for (let key in options.scenarios) {
    // Add the scenario name as an environment variable
    options.scenarios[key].env['MY_SCENARIO'] = key;
    // You can customize the actual name in any way you want, by using other env
    // vars, etc.
    let customMetricName = key + '(' + options.scenarios[key].env['DATE_SEARCH'] + ')';
    // Create a new custom Trend metric for the scenario.
    customMetrics[key] = new Trend(customMetricName, true);
}


export function liveResSearchGetBookings(data) {
    const options = { headers: { Authorization: `Bearer foobar` } };
    let hostUrl = 'https://httpbin.test.k6.io/anything'
    let searchUrl = `/api/v1/events/search?page=1&pageSize=100&arrivalMin=${__ENV.DATE_SEARCH}`
    let theCall = http.get(hostUrl + searchUrl, options);
    check(theCall, {
        'status is 200': (r) => r.status === 200
    });

    // Add only to the custom metric that corresponds to the scenario
    customMetrics[__ENV.MY_SCENARIO].add(theCall.timings.duration);

    // A constant sleep() is usually not ideal; a random sleep helps, and
    // arrival-rate executors might be an even better fit than constant-vus.
    sleep(1 + Math.random()); // sleep between 1s and 2s
}

This script would produce a summary somewhat like this:

running (32.5s), 000/100 VUs, 1809 complete and 0 interrupted iterations
GET_1506_Bookings ✓ [======================================] 10 VUs  30s
GET_603_Bookings  ✓ [======================================] 30 VUs  30s
GET_83_Bookings   ✓ [======================================] 60 VUs  30s

     ✓ status is 200

     GET_1506_Bookings(2020-10-29)...: avg=156.85ms min=134.09ms med=137.41ms max=418.98ms p(90)=148.44ms p(95)=342.25ms p(99)=393.57ms count=185 
     GET_603_Bookings(2020-11-25)....: avg=171.63ms min=133.5ms  med=136.92ms max=1.85s    p(90)=147.83ms p(95)=441.94ms p(99)=815.67ms count=552 
     GET_83_Bookings(2020-05-20).....: avg=189.96ms min=132.95ms med=137.39ms max=2.68s    p(90)=377.91ms p(95)=441.58ms p(99)=611.04ms count=1072
     checks..........................: 100.00% ✓ 1809  ✗ 0    
     data_received...................: 1.9 MB  57 kB/s
     data_sent.......................: 220 kB  6.8 kB/s
     http_req_blocked................: avg=47.69ms  min=1.69µs   med=4.23µs   max=2.03s    p(90)=4.82µs   p(95)=574.8ms  p(99)=1.38s    count=1809
     http_req_connecting.............: avg=13.96ms  min=0s       med=0s       max=510.2ms  p(90)=0s       p(95)=208.7ms  p(99)=227.19ms count=1809
     http_req_duration...............: avg=180.98ms min=132.95ms med=137.26ms max=2.68s    p(90)=334.82ms p(95)=440.03ms p(99)=463.22ms count=1809
     http_req_receiving..............: avg=2.49ms   min=38.74µs  med=233.24µs max=1.41s    p(90)=411.51µs p(95)=491.95µs p(99)=1.17ms   count=1809
     http_req_sending................: avg=79.68µs  min=27.45µs  med=73.65µs  max=1.21ms   p(90)=110.44µs p(95)=128.28µs p(99)=269.26µs count=1809
     http_req_tls_handshaking........: avg=32.42ms  min=0s       med=0s       max=1.78s    p(90)=0s       p(95)=332.79ms p(99)=1.02s    count=1809
     http_req_waiting................: avg=178.4ms  min=132.56ms med=136.89ms max=2.68s    p(90)=334.63ms p(95)=439.7ms  p(99)=459.72ms count=1809
     http_reqs.......................: 1809    55.576997/s
     iteration_duration..............: avg=1.7s     min=1.13s    med=1.67s    max=4.16s    p(90)=2.1s     p(95)=2.3s     p(99)=3.06s    count=1809
     iterations......................: 1809    55.576997/s
     vus.............................: 1       min=1   max=100
     vus_max.........................: 100     min=100 max=100
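
As a side note: in newer k6 versions (v0.34.0 or later, possibly newer than what this thread used), the MY_SCENARIO env-var plumbing can be avoided, because the k6/execution module exposes the name of the currently running scenario. A minimal sketch of that variant, keeping the same customMetrics map:

import http from 'k6/http';
import { sleep } from 'k6';
import exec from 'k6/execution';
// options and the customMetrics map are assumed to be defined exactly as in
// the script above

export function liveResSearchGetBookings(data) {
    let theCall = http.get(`https://httpbin.test.k6.io/anything?arrivalMin=${__ENV.DATE_SEARCH}`);
    // exec.scenario.name holds the name of the scenario this VU is currently
    // executing, so no extra env var is needed to pick the right metric
    customMetrics[exec.scenario.name].add(theCall.timings.duration);
    sleep(1 + Math.random());
}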

However, there’s an even better way to do this; I’ll post it as a separate reply shortly.

So, the better approach to this issue is to take advantage of a few k6 features:

  1. The metrics generated by each scenario are automatically tagged with a scenario: <scenario-name> tag, without you having to specify anything (see the Advanced Examples in the scenarios docs).
  2. k6 can have sub-metrics based on tags; unfortunately, this currently happens only when there’s a threshold defined on the sub-metric (Thresholds), so we have to define some bogus thresholds. We plan to improve the situation in the next few k6 versions (Add explicit tracking and ignoring of metrics and sub-metrics · Issue #1321 · grafana/k6 · GitHub).
  3. The end-of-test summary shows the sub-metric values by default.

Combining these facts, we can have a much nicer script like this:

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
    scenarios: {
        GET_1506_Bookings: {
            executor: 'constant-vus',
            vus: 10,
            duration: '30s',
            gracefulStop: '20s',
            exec: 'liveResSearchGetBookings',
            env: { DATE_SEARCH: '2020-10-29' },
        },
        GET_603_Bookings: {
            executor: 'constant-vus',
            vus: 30,
            duration: '30s',
            gracefulStop: '20s',
            exec: 'liveResSearchGetBookings',
            env: { DATE_SEARCH: '2020-11-25' },
        },
        GET_83_Bookings: {
            executor: 'constant-vus',
            vus: 60,
            duration: '30s',
            gracefulStop: '20s',
            exec: 'liveResSearchGetBookings',
            env: { DATE_SEARCH: '2020-05-20' },
        },
    },
    // So we get count in the summary, to demonstrate different metrics are different
    summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(90)', 'p(95)', 'p(99)', 'count'],
    thresholds: {
        // Intentionally empty. We'll programmatically define our bogus
        // thresholds (to generate the sub-metrics) below. In your real-world
        // load test, you can add any real thresholds you want here.
    }
}

for (let key in options.scenarios) {
    // Each scenario automatically tags the metrics it generates with its own name
    let thresholdName = `http_req_duration{scenario:${key}}`;
    // Check to prevent us from overwriting a threshold that already exists
    if (!options.thresholds[thresholdName]) {
        options.thresholds[thresholdName] = [];
    }
    // 'max>=0' is a bogus condition that will always be fulfilled
    options.thresholds[thresholdName].push('max>=0');
}


export function liveResSearchGetBookings(data) {
    const options = { headers: { Authorization: `Bearer foobar` } };
    let hostUrl = 'https://httpbin.test.k6.io/anything'
    let searchUrl = `/api/v1/events/search?page=1&pageSize=100&arrivalMin=${__ENV.DATE_SEARCH}`
    let theCall = http.get(hostUrl + searchUrl, options);
    check(theCall, {
        'status is 200': (r) => r.status === 200
    });

    // A constant sleep() is usually not ideal; a random sleep helps, and
    // arrival-rate executors might be an even better fit than constant-vus.
    sleep(1 + Math.random()); // sleep between 1s and 2s
}

This will result in a summary like this:

running (31.9s), 000/100 VUs, 1772 complete and 0 interrupted iterations
GET_1506_Bookings ✓ [======================================] 10 VUs  30s
GET_603_Bookings  ✓ [======================================] 30 VUs  30s
GET_83_Bookings   ✓ [======================================] 60 VUs  30s

     ✓ status is 200

     checks.............................: 100.00% ✓ 1772  ✗ 0    
     data_received......................: 1.8 MB  57 kB/s
     data_sent..........................: 217 kB  6.8 kB/s
     http_req_blocked...................: avg=58.34ms  min=1.8µs    med=2.56µs   max=1.48s    p(90)=4.83µs   p(95)=755.54ms p(99)=1.4s     count=1772
     http_req_connecting................: avg=15.64ms  min=0s       med=0s       max=517.29ms p(90)=0s       p(95)=228.04ms p(99)=430.51ms count=1772
     http_req_duration..................: avg=184.79ms min=132.73ms med=138.22ms max=1.88s    p(90)=364.67ms p(95)=441.73ms p(99)=468.98ms count=1772
     ✓ { scenario:GET_1506_Bookings }...: avg=184.14ms min=134ms    med=139.18ms max=484.23ms p(90)=370.54ms p(95)=453.01ms p(99)=469.81ms count=171 
     ✓ { scenario:GET_603_Bookings }....: avg=158.44ms min=133.63ms med=137.44ms max=1.72s    p(90)=147.4ms  p(95)=299.66ms p(99)=463.83ms count=540 
     ✓ { scenario:GET_83_Bookings }.....: avg=198.31ms min=132.73ms med=138.61ms max=1.88s    p(90)=392.43ms p(95)=452.25ms p(99)=470.7ms  count=1061
     http_req_receiving.................: avg=3.2ms    min=38.2µs   med=244.76µs max=1.43s    p(90)=416.17µs p(95)=507.14µs p(99)=1.15ms   count=1772
     http_req_sending...................: avg=75.6µs   min=27.77µs  med=69.88µs  max=1.08ms   p(90)=108.13µs p(95)=121.69µs p(99)=186.6µs  count=1772
     http_req_tls_handshaking...........: avg=41.76ms  min=0s       med=0s       max=1.23s    p(90)=0s       p(95)=509.12ms p(99)=1.15s    count=1772
     http_req_waiting...................: avg=181.51ms min=132.37ms med=137.83ms max=1.05s    p(90)=363.66ms p(95)=441.54ms p(99)=467.49ms count=1772
     http_reqs..........................: 1772    55.514138/s
     iteration_duration.................: avg=1.73s    min=1.13s    med=1.7s     max=3.79s    p(90)=2.12s    p(95)=2.36s    p(99)=3.19s    count=1772
     iterations.........................: 1772    55.514138/s
     vus................................: 38      min=38  max=100
     vus_max............................: 100     min=100 max=100

For more details, read the advanced examples in the scenarios docs and my previous forum responses to similar questions: Ignore http calls made in Setup or Teardown in results? - #2 by nedyalko and Separate metrics summary for each request in default function - #2 by nedyalko


Thank you for helping out on this issue.

I like the first example more, as it's more explicit about the why and how. The second proposed solution is a bit of magic to make things work.

Not really, though I understand your reluctance :grinning_face_with_smiling_eyes: Once we properly expose the sub-metrics (so they aren’t hidden and used only by thresholds), it will feel a lot less magical.

Is there a way to do the same but for the http_reqs metric?

I did something like:

options.thresholds[`http_reqs{scenario:${key}, expected_response:true}`] = ['true'];
options.thresholds[`http_reqs{scenario:${key}, expected_response:false}`] = ['true'];

But the results are not the expected:

http_reqs.......................................: 63790   70.445027/s
     ✓ { scenario:vus050, expected_response:true }...: 21368   23.597262/s
     ✓ { scenario:vus200, expected_response:true }...: 21164   23.371979/s
     ✓ { scenario:vus400, expected_response:true }...: 21258   23.475786/s
     iteration_duration..............................: avg=3.07s    min=231.98ms med=2.82s    max=8.61s    p(90)=5.79s    p(95)=5.93s    p(99)=6.23s    count=63790

It seems the “http reqs per second” rate is calculated over the complete test execution time instead of each scenario’s execution time: for example, 21368 requests / 23.597 req/s ≈ 905s, which is the whole test’s duration. In the report above, the http reqs per second should be around 70/s on all the scenarios.

For other metrics (like http_req_blocked or http_req_connecting) this works fine. It seems to work only for Trend metrics, not for Counter metrics.

Basically, http_reqs is a Counter metric type, so I believe we have to use something like the below:

options.thresholds[`http_reqs{scenario:${key}, expected_response:true}`] = ['count>=0'];
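
For context, here is a minimal sketch of the full programmatic loop for a Counter metric, following the same pattern as the http_req_duration example above ('count>=0' is a bogus, always-true condition whose only purpose is to create the sub-metric):

for (let key in options.scenarios) {
    // Bogus threshold on a Counter sub-metric, just so it shows up in the summary
    options.thresholds[`http_reqs{scenario:${key}, expected_response:true}`] = ['count>=0'];
}

Note that this only makes the per-scenario counts visible; the req/s rate shown next to them is still computed over the whole test duration, as observed above.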

Hi there,

I tried your example and it works, but when I send the output to Prometheus, it shows only the aggregated values.
How can I send all scenarios’ metrics to Prometheus?