Inconsistent test results

When I run my test as
k6 run test.js
I see that some of my checks are failing (98% of them).

Then I try to dig in and find out more about the failed checks, so I run the test as
k6 run --log-format=raw --http-debug=full test.js 2>http_debug

But, in this case, no checks are failing, and there are no errors in the debug file.

This pattern is consistent.

What am I doing wrong?

Hi Aleks. We'll need more info to help debug this. Can you share your script file so that we can see your checks, and the output where you see a 98% failure rate?

If you can't provide this here, feel free to email support@k6.io or message me directly here on the forums.

Hi Tom, I will email it.
Thanks.
But here is how I do the check that fails:

response = response.submitForm();
check(response, {
  'Invisible Login Form auto submission': (r) => r.status === 200,
});
if (response.status !== 200) return;
trend003.add(response.timings.waiting);

I think I got it.

Sometimes there is no form to submit, but the previous request returned status 200.
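
One way to guard against that, for example, is to check whether the page actually contains a form before calling submitForm(). A rough sketch (the 'form' selector is just an example):

// Only attempt the submit when the previous response actually contains a form
const forms = response.html().find('form');
if (forms.size() > 0) {
  response = response.submitForm();
  check(response, {
    'Invisible Login Form auto submission': (r) => r.status === 200,
  });
  if (response.status === 200) {
    trend003.add(response.timings.waiting);
  }
}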

It's always a good idea to check the contents of an HTTP 200 response to ensure it contains what you expect. The web developer is not obliged to send any particular status code, even when there's an error.

An HTTP 200 with an error message in the response body is essentially the equivalent of a handled exception, whereas unhandled ones tend to show up in the 4xx-5xx status code range.
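
For example, a check along these lines verifies both the status and the body; the 'Welcome' marker text is just a placeholder for whatever your application returns on success:

check(response, {
  'status is 200': (r) => r.status === 200,
  // Placeholder marker text - replace with something you only expect on a successful page
  'body looks correct': (r) => r.body && r.body.includes('Welcome'),
});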


I have a question.
It may be off-topic, and if it is I will gladly create a new question.

Is there any way to introduce a delay, or even better a random delay, between iterations?
The delay should not be seen as part of the iteration and should not affect the iteration_duration metric.

Unfortunately not at the moment, sorry. The only way to introduce a random delay in an iteration is to use sleep() with some Math.random() calculation, and that would alter iteration_duration.

There are easy workarounds though. You can wrap your "business" code in a group() and leave the sleep() call out of it, and then look only at the group_duration metric, ignoring iteration_duration.
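
A minimal sketch of that idea, keeping the sleep() outside the group() so that group_duration only covers the business logic (the URL is just a placeholder):

import http from 'k6/http';
import { group, sleep } from 'k6';

export default function () {
  group('business logic', function () {
    // Only the work inside the group counts towards group_duration
    http.get('https://test.k6.io'); // placeholder URL
  });

  // Random 1-5s pause, deliberately outside the group so it only
  // shows up in iteration_duration, not in group_duration
  sleep(1 + Math.random() * 4);
}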

Alternatively, you can define your own custom Trend metric and measure the iteration duration yourself by subtracting Date.now() values. You can either do the subtraction before any sleep() calls, or, if you have multiple such calls sprinkled through your code, keep track of how long you sleep in the iteration and subtract that total from your measurement before you .add() it to the custom metric.
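
A rough sketch of that second approach, assuming a single sleep() at the end of the iteration (the metric name and URL are arbitrary):

import http from 'k6/http';
import { sleep } from 'k6';
import { Trend } from 'k6/metrics';

// Custom time metric; 'true' marks it as a time value in milliseconds
const iterationWork = new Trend('iteration_work_duration', true);

export default function () {
  const start = Date.now();

  http.get('https://test.k6.io'); // placeholder for your actual requests

  // Measure before sleeping so the pause is not included
  iterationWork.add(Date.now() - start);

  sleep(1 + Math.random() * 4);
}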


Ned beat me to it! But here's some sample code for generating a custom Trend that can be used to accumulate group response times without also including the sleep time:

import http from 'k6/http';
import { group, sleep } from 'k6';
import { Trend } from 'k6/metrics';

import { randomIntBetween } from "https://jslib.k6.io/k6-utils/1.1.0/index.js";

const groupDurationTotal = new Trend("group_duration_total", true); // true indicates we're adding time values (assumed to be in milliseconds)

let total;

export default function () {
  total = 0;

  total += group("group 1", function () {
    const startTime = Date.now();

    // http.get (or whatever) - the below sleep is just for testing
    sleep(randomIntBetween(2, 5));

    const duration = Date.now() - startTime;
    console.log('group 1 duration: ' + duration);

    // sleep representing "think time" that might not be interesting to include in group_duration_total
    sleep(randomIntBetween(2, 5));

    return duration;
  });

  total += group("group 2", function () {
    // etc
  });

  total += group("group 3", function () {
    // etc
  });

  console.log('Script finished. Total time: ' + total);
  groupDurationTotal.add(total);
}

Example end-of-test summary output:

     data_received..........: 0 B 0 B/s
     data_sent..............: 0 B 0 B/s
     group_duration.........: avg=7.12s  min=4.02s  med=7.01s  max=9.01s  p(90)=9s     p(95)=9s
     group_duration_total...: avg=11.02s min=10.01s med=11.02s max=14.02s p(90)=12.21s p(95)=13.12s
     iteration_duration.....: avg=21.84s min=19.06s med=22.03s max=24.04s p(90)=23.15s p(95)=23.59s
     iterations.............: 10  0.180811/s
     vus....................: 5   min=5 max=5
     vus_max................: 5   min=5 max=5

There's probably a cleaner way of doing this, but hopefully it gives you some ideas 🙂
