To capture k6 metrics for each request in a single script

Hi,

I have a question regarding the p(90), p(95), min, med and max metrics captured by k6.
My test script has 4 API requests in it, and I need all of the above metrics for each API request separately. How would I get that, given that k6 reports metrics for the whole script rather than for each transaction in the script?

Please let me know how to find p(90), p(95), min, max and med for each transaction (request) in a k6 script.

Thanks!

Hi @olegbespalov, could you please help me with this? Please refer to the screenshot below -

I need to check the p(90) and p(95) values for end_point_1 and end_point_2 respectively from the k6 results.

How can I find it for both endpoints?

Hi @varshagawande

It is not possible to achieve what you’re looking for at the moment with k6. We have a GitHub issue tracking this feature, but no ETA on its delivery. The text summary generated by k6 can currently only display aggregated values for all the HTTP requests made in the context of a test run.

As a matter of fact, you’re not the first user to ask how to do exactly that, and this very forum already has some nice pointers on possible workarounds:

In a nutshell, the most common workaround is to define tags on each of your requests and define a dummy threshold on them. That should lead to the outcome you wish. Otherwise, as ned pointed out at the time, you could use one of the other k6 outputs (JSON/CSV, InfluxDB, Prometheus) and export the data there; that would allow you to make fine-grained queries on exactly the information you want.
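To illustrate the tag-plus-dummy-threshold workaround, here is a minimal sketch (the endpoint names and URLs are made up for illustration). Defining a threshold on a tagged sub-metric makes k6 print that sub-metric, with its own min/med/max/p(90)/p(95), in the end-of-test summary:

```javascript
import http from 'k6/http';

export const options = {
  thresholds: {
    // Dummy thresholds that always pass; their only purpose is to make
    // k6 compute and print per-tag sub-metrics in the summary.
    'http_req_duration{endpoint:endpoint_1}': ['max>=0'],
    'http_req_duration{endpoint:endpoint_2}': ['max>=0'],
  },
};

export default function () {
  // Tag each request so its timings are grouped under its own sub-metric.
  http.get('https://test.k6.io/', { tags: { endpoint: 'endpoint_1' } });
  http.get('https://test.k6.io/news.php', { tags: { endpoint: 'endpoint_2' } });
}
```

Run it with `k6 run script.js` and the summary will show a separate `http_req_duration` line per tag value.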

Maybe an important piece of information to help you move forward: the reason you see those specific percentile, min, max, … values is that under the hood k6 uses a Trend metric.
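Since Trend is a public metric type, another option is to declare your own Trend per endpoint and feed it the response timings yourself; each custom Trend then gets its own min/med/max/p(90)/p(95) line in the summary. A minimal sketch (metric names and URLs are made up for illustration):

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

// One custom Trend per endpoint; Trend is the metric type that
// produces min/med/max/p(90)/p(95) in the k6 summary.
const endpoint1Duration = new Trend('endpoint_1_duration');
const endpoint2Duration = new Trend('endpoint_2_duration');

export default function () {
  const res1 = http.get('https://test.k6.io/');
  endpoint1Duration.add(res1.timings.duration);

  const res2 = http.get('https://test.k6.io/news.php');
  endpoint2Duration.add(res2.timings.duration);
}
```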

Hope that’s helpful, let me know if I can be of more assistance :bowing_man:

PS: a note for the future, please refrain from pinging other k6 team members when your tickets are already assigned. This is a community forum for the k6 open-source project, and we offer our support here on a voluntary basis. Much appreciated :bowing_man:

Hi @varshagawande

We have two ways to achieve what you are looking for. This design was created for a few reasons, and these approaches work well for our needs across our software portfolio.
In the first approach, we create individual k6 test scripts for each endpoint. We call this our endpoint definition file, and those endpoints are packaged using NPM. (We do this so the test scripts can be used across our entire portfolio, in different repos.) We don’t have any test conditions, thresholds, or options in these test scripts. This is one example, with some names changed.

/**
 * @param {object} additionalReqParam
 * @param {array} criteria
 */
export function getEndpoint1(additionalReqParam, criteria) {
  const url = generateUrl('/svc/v1/endpoint1');

  // The data must be passed as search parameters rather than the body
  const urlWithParams = new URL(url);
  // Add all the fields as search parameters unless null/undefined
  if (criteria) {
    criteria.forEach((currentCriteria) => {
      const criteriaFormatted = currentCriteria.split('=');
      urlWithParams.searchParams.append(
        criteriaFormatted[0],
        criteriaFormatted[1]
      );
    });
  }

  const headers = getHeadersWithToken(additionalReqParam.token);
  const resp = execRequest('GET', urlWithParams.toString(), null, headers);
  return resp;
}

As those test scripts are developed, they can be put into the Test Scenario script, which would contain something like this. Again, I removed some information and condensed it. When we run this scenario test script, each of the groups will have its own metrics. We also set thresholds for each of these groups.

export default function () {
  // Optional wait timer to start all VUs in proper order
  preIteration();

  let token;
  
  group('get some endpoint1 type', function () {
    // Make the request to the endpoint
    getEndpoint1(requestParams);
  });

  group('get some endpoint2 type', function () {
    // Make the request to the endpoint
    getEndpoint2(requestParams, criteria);
  });
}
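For reference, the per-group thresholds mentioned above can be expressed with k6’s sub-metric syntax on the auto-added `group` tag (the group names here match the groups in the scenario; the threshold values are made up for illustration):

```javascript
export const options = {
  thresholds: {
    // k6 prefixes group names with "::" for the root group, hence the
    // three colons after "group" in the sub-metric selector.
    'http_req_duration{group:::get some endpoint1 type}': ['p(95)<500'],
    'http_req_duration{group:::get some endpoint2 type}': ['p(95)<500'],
  },
};
```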

The other design we have has three components: the test script, a JSON data file, and the test scenario. I can’t say whether this is the most performant, but the collection is pretty small.
The test script is universal; some may call it a wrapper of sorts. It reads the JSON file for each of the defined APIs and their constant parameters, and we build that into an npm package. The scenario test script is set up like the one above and just consumes the npm module for the test.

Either method should meet your need.
Good Luck!
-Misty