k6 run not completing but Jenkins stage still exits with success

Hey, I'd be grateful for some help with this one. I'm using Jenkins and the k6 Docker image to run a performance test cycle in a pipeline. I'm seeing this warning message:

level=warning msg="No data generated, because no script iterations finished, consider making the test duration longer"

As the script iterations haven't finished, none of the thresholds have been evaluated, but the pipeline is still exiting the stage with a success. I'd like to implement a simple solution so the stage only succeeds if the thresholds pass, and in all other scenarios, including this one, it exits with a failure.

What are your thresholds? Is it possible you’re hitting this bug: Metrics for which no data was recorded are not displayed or evaluated for Thresholds · Issue #1346 · grafana/k6 · GitHub

Thanks for the quick response, Ned. I'm not using a Counter metric, and I have the following thresholds:

"thresholds": {
    "failed_requests": ["rate<=0.00"],
    "simple_request_waiting": ["p(99)<1500"]
}

failureRate.add(RegExp('[1-3][0-9][0-9]').test(response.status) !== true);
simpleRequestWaiting.add(response.timings.waiting, { tags: 'simple request' });

Problem is that the run takes so long that I get the following warning:

WARN[0660] No data generated, because no script iterations finished, consider making the test duration longer

This is only an issue in the pipeline, as k6 hasn't had the chance to evaluate the thresholds.

Hmm, which k6 version are you using? Even if not a single script iteration finished completely (which is what that warning is about), recent k6 versions will still evaluate the thresholds, as long as the script execution reached a point where you add()-ed data to those metrics. To demonstrate, here's a simple script that will either succeed (green checkmark next to the metric and a 0 exit code) or fail (exit code 99 and a red cross next to the metric):

import { sleep } from "k6";
import { Trend, Rate } from "k6/metrics";

let failureRate = new Rate("failed_requests");
let simpleRequestWaiting = new Trend("simple_request_waiting");

export let options = {
    duration: "3s",
    thresholds: {
        failed_requests: ["rate <= 0.00"],
        simple_request_waiting: ["p(99) < 1500"],
    },
};

export default function () {
    let rand = Math.random();
    console.log(`rand is ${rand}`);
    failureRate.add(rand > 0.5); // true counts as a failed request, tripping rate<=0.00
    simpleRequestWaiting.add(rand * 2000);
    sleep(5); // this is longer than the 3s test duration above, so no iteration will complete
}

That said, the current warning message is misleading when it claims that "No data [was] generated". That used to be the case in very old k6 versions, but it hasn't been since k6 v0.22.0. We've also recently fixed the warning message itself, we just haven't shipped that change in a stable k6 release yet :disappointed:

Also, you might want to consider running your test by specifying iterations instead of duration or stages; that way k6 will execute precisely the number of script iterations you want, regardless of how long they take.
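For example, a minimal sketch of an iteration-based configuration (the vus and iterations values here are just placeholders, tune them for your own test):

export let options = {
    vus: 5,          // concurrent virtual users
    iterations: 50,  // total script iterations, shared between the VUs
    thresholds: {
        failed_requests: ["rate<=0.00"],
        simple_request_waiting: ["p(99)<1500"],
    },
};

With this setup the test runs until all 50 iterations have completed, so the "no script iterations finished" situation can't occur.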

I'm having a similar problem. I have the following code for my options, and the default function is a long browser recording I converted to JS from a HAR file.

export let options = {
    maxRedirects: 0,
    stages: [
        { duration: "20s", target: 5 },
        { duration: "30s", target: 5 },
        { duration: "60s", target: 0 },
    ],
    thresholds: {
        'http_req_duration': ['p(95)<500'], // 95% of requests complete within 500ms
    },
};

After the test runs I get the warning "WARN[0121] No data generated, because no script iterations finished, consider making the test duration longer".

Any insight into why?

It just means that your script iteration takes so long that not a single full one managed to finish in the 1m50s your test was running. To be fair, the k6 warning is a little misleading, since the data from the incomplete iterations is not discarded, it still counts, but… you should really "consider making the test duration longer", or make the script shorter :wink:
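For instance, a rough sketch that just stretches your existing stages (the durations are illustrative, pick whatever gives at least one full iteration time to finish):

export let options = {
    maxRedirects: 0,
    stages: [
        { duration: "1m", target: 5 }, // ramp up
        { duration: "3m", target: 5 }, // hold long enough for a full iteration to complete
        { duration: "1m", target: 0 }, // ramp down
    ],
};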

The warning message will be changed in the next k6 release to:

No script iterations finished, consider making the test duration longer

Thank you, Ned, I'll give that a try.

Edited by Ned: I already answered in WARN[0040] Request Failed error="stream error: stream ID 3; INTERNAL_ERROR" - #2 by nedyalko

The problem seems to be related to correlation and dynamic data: the username and password from the first page in the load test create tokens that don't work for the rest of the HTTP requests.
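In case it helps anyone else landing here, a minimal sketch of correlating a dynamic token in k6 (the URL, form fields, and the access_token field name are placeholders, your application will differ):

import http from "k6/http";
import { check } from "k6";

export default function () {
    // Log in first and extract the freshly issued token from the response,
    // instead of replaying the stale token captured in the HAR recording.
    let loginRes = http.post("https://example.com/login", {
        username: "testuser",   // placeholder credentials
        password: "testpass",
    });
    let token = loginRes.json("access_token"); // placeholder field name

    // Reuse the extracted token on every subsequent request.
    let res = http.get("https://example.com/protected", {
        headers: { Authorization: `Bearer ${token}` },
    });
    check(res, { "status is 200": (r) => r.status === 200 });
}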