k6 and the k6 reporter do not accurately log highly concurrent requests

I was trying to stress-test my server; the API takes about 6s on average to complete. I bombarded the server with 256 VUs over a duration of 2m (as many iterations per VU as possible). Each request was supposed to add 1 row to the table in the end, and when I ran a DB query, 92 rows had been added. But this is what the k6 reporter tells me:


How should I interpret this data? Is this a bug in k6?

Hi @highlyintrouble,

Welcome to the community forums :wave:

If I understand correctly, you are seeing a total of 36 requests in the report, while in the DB you have 92 rows, and you’d expect k6 to report 92 total requests. Is that correct?

Without more detail, I would have a look at resource consumption on the load generator (VM, Docker container, etc.).

In particular, did you observe the CPU during this test? Was it close to 100%? We recommend keeping at least 20% of the CPU idle so that k6 has enough headroom to measure the responses correctly.

If that is not your case, could you share the (sanitized) script, how you run it, and the text output?

Cheers!

@eyeveebe, I actually hadn’t checked the CPU, but I believe that shouldn’t have been the issue, as I have a dedicated VM in the cloud that only runs k6. I will check that, but do let me know if there is something else in my code where I could have made a mistake:


import http from 'k6/http';
import { check, sleep } from 'k6';
import { Counter, Rate, Trend } from 'k6/metrics';
import { htmlReport } from 'https://raw.githubusercontent.com/benc-uk/k6-reporter/main/dist/bundle.js';

import routes from './common/routes/routes';
import TokenFactory from './DataFactory/token.factory';
import exec from 'k6/execution';
import { getUniqueVenueFeature } from './utils/random';

const Bu = 'Feature_1';
const Index = '4_parallel_run_4core16Mem_2minute_256target';
const Vus = 256;
const Iteration = 4;
export const options: any = {
  setupTimeout: '600s',
  scenarios: {
    [Bu]: {
      executor: 'ramping-vus',
      exec: 'FeatureRun',
      startVUs: 1,
      stages: [{
        duration: '2m', target: 256
      }],
      gracefulRampDown: '0s',
      tags: { my_custom_tag: Index },
      env: {
        bu: Bu,
        index: Index,
        Feature_creation_scenario: Bu,
      },
    },
  },
};


// Custom metrics. Note: recent k6 versions require metric names without spaces.
const customMetrics = new Trend('response_timing', true);

// Per-VU state: each VU keeps its own copy of this timestamp.
let lastRequestTime = new Date().getTime();
const successCount = new Counter('successful_request');
const failureRate = new Rate('check_failure_rate');


export function FeatureRun() {
  const iterationNumber = exec.scenario.iterationInTest;

  const options = {
    timeout: '1000000', // 1,000,000 ms ≈ 16.7 minutes
    headers: {
      'Content-Type': 'application/json',
      Authorization: TokenFactory.getToken(__ENV.bu),
    },
  };

  const res = http.put(
    routes.CREATE_NEW_Feature,
    JSON.stringify(getUniqueVenueFeature()),
    options,
  );

  let checkRes = check(res, {
    'status is 201': (r) => {
      console.log(
        `Running scenario ${__ENV.index}'s iteration ${iterationNumber}. TimeFromLastRequest: `,
        new Date().getTime() - lastRequestTime,
        ' response: ',
        r.body, 'res duration timing: ', res.timings.duration, 'VUid: ',
        exec.vu.idInInstance,
      );
      lastRequestTime = new Date().getTime();

      return r.status === 201;
    },
  });

  if (checkRes) {
    successCount.add(1);
  }

  failureRate.add(!checkRes);

  customMetrics.add(res.timings.duration);
}

export function handleSummary(data: any) {
  return {
    [`src/report/${Index}_single_Feature_parallel.html`]: htmlReport(data),
  };
}

Hi @highlyintrouble

We don’t see anything in the script that could cause this. Discussing this with @olegbespalov, he thinks, seeing the results, that it’s most probably because the endpoint under test has performance issues under this load.

The main signal of that is the slow average iteration_duration, which is 40 seconds. And that is just the average.
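
If you want the test itself to surface this, you could add a threshold on iteration_duration. A minimal sketch, assuming a 10s budget purely for illustration:

import http from 'k6/http';

export const options = {
  thresholds: {
    // Flag the run when the 95th percentile of a full iteration exceeds 10s.
    // The 10000 ms budget is illustrative; pick one that matches your API.
    iteration_duration: ['p(95)<10000'],
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder for your PUT request
}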

Keeping in mind that you used 2 minutes as the total execution time, we think it’s possible that during the 2m (plus some shutdown time) only 36 iterations finished successfully and were captured by k6. Some of the others probably finished after the 2m period, which is why you see 92 rows in the DB. And the rest (50%+ of the requests, with up to 256 VUs in flight) were probably cancelled even before the step that inserts the row.
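
For reference, the extra shutdown time is the scenario’s gracefulStop, which defaults to 30s. A minimal sketch of your scenario shape with longer graceful periods, so in-flight iterations get time to finish and be counted (the '30s'/'2m' values are assumptions, not recommendations):

import http from 'k6/http';

export const options = {
  scenarios: {
    feature_1: { // illustrative name; yours is derived from Bu
      executor: 'ramping-vus',
      startVUs: 1,
      stages: [{ duration: '2m', target: 256 }],
      gracefulRampDown: '30s', // instead of '0s', so down-ramped VUs can finish their iteration
      gracefulStop: '2m', // time allowed at test end for in-flight iterations to complete
    },
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder for the real PUT request
}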

So, with the information we have, it does not seem to be a k6 bug. We would interpret this as the endpoint being overloaded, with high response times, and the test being very short.

Maybe you can run a spike test with a longer scale-down period, to see whether the requests finish and how many the API endpoint can handle. Or you can run a stress test to find the API endpoint’s limits.
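
As an illustration of the spike test idea (the stage durations and targets are assumptions to adapt):

import http from 'k6/http';

export const options = {
  scenarios: {
    spike: {
      executor: 'ramping-vus',
      startVUs: 1,
      stages: [
        { duration: '30s', target: 256 }, // sharp ramp up to the spike
        { duration: '1m', target: 256 }, // hold the spike
        { duration: '3m', target: 0 }, // long scale-down so pending requests can finish
      ],
      gracefulRampDown: '60s',
    },
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder endpoint
}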

The API load testing guide can also help define the test scenarios.

I hope this helps.

Cheers!