Is K6 safe from the coordinated omission problem?

Is K6 safe from the coordinated omission problem? (I am interested in both HTTP and gRPC)
For instance, with JMeter (as described in Apache JMeter - User's Manual: Best Practices) we have to calibrate the number of threads, the think time, and the JMeter Xmx value ourselves; otherwise the reported response times are incorrect (much higher than reality, e.g. the real latency is 150ms but JMeter shows 2-5sec) and do not reflect the real numbers even when there are no issues on the server side. That is why we would like to switch to another load generator.

How does k6 behave if the specified “vus” or “rps” value is too big?
Can we still trust the response time values in this case?

Thank you in advance.

Yes, though you have to use the arrival-rate executors instead of looping ones. We have an explanation here, even though we use slightly different terms: Open and closed models
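
To make the difference concrete, here is a rough sketch of the two models side by side (the rates, durations, and the test.k6.io target are just placeholder values, not recommendations): in the closed model a fixed number of looping VUs wait for each response before starting their next iteration, while in the open model iterations are started on a fixed schedule regardless of how long responses take.

import http from 'k6/http';

export let options = {
    scenarios: {
        // Closed model: 10 VUs loop as fast as responses come back, so a slow
        // server delays the next iteration and slow periods get under-sampled.
        closed_model: {
            executor: 'constant-vus',
            vus: 10,
            duration: '1m',
        },
        // Open model: iterations are started on a fixed schedule (50 per second)
        // no matter how long earlier responses take, so the load generator
        // doesn't silently back off when the target slows down.
        open_model: {
            executor: 'constant-arrival-rate',
            rate: 50,
            timeUnit: '1s',
            duration: '1m',
            preAllocatedVUs: 100,
            startTime: '1m', // start after the closed-model scenario is done
        },
    },
};

export default function () {
    http.get('https://test.k6.io/');
}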

I suggest avoiding the rps option; we officially discourage its usage in the docs because it's somewhat misleading and difficult to use correctly. It was the only way to limit the request rate before we had support for arrival-rate executors, but now that we have them, you should use them instead.
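
And if the request rate needs to change during the test instead of staying constant, the ramping-arrival-rate executor works on the same open-model principle. Here is a rough sketch, again with placeholder numbers and target URL:

import http from 'k6/http';

export let options = {
    scenarios: {
        ramping_rate: {
            executor: 'ramping-arrival-rate',
            startRate: 10,        // iterations per timeUnit at the start
            timeUnit: '1s',
            preAllocatedVUs: 50,  // VUs initialized before the test starts
            maxVUs: 200,          // k6 may add VUs up to this if the pre-allocated ones aren't enough
            stages: [
                { target: 100, duration: '2m' }, // ramp up from 10 to 100 iterations/s
                { target: 100, duration: '5m' }, // stay at 100 iterations/s
                { target: 0, duration: '1m' },   // ramp back down
            ],
        },
    },
};

export default function () {
    http.get('https://test.k6.io/');
}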

Thank you for the quick response!
The arrival-rate open model fits our needs.
If I understand correctly, it is available only as JS code, but it would be handy to have it as a CLI option :slight_smile: Are there any plans to include it as a CLI option?

Not at this time, sorry. The scenarios configuration is quite complicated and we couldn't find a nice way to expose it via CLI flags, so we didn't do it when we released scenarios and arrival-rate executors in k6 v0.27.0. We might do it in the future by using some strvals-like configuration, but it's probably not going to be soon; we have a bunch of more important configuration work we need to go through first.

All of that said, k6 is flexible enough that you can still configure your arrival-rate scenario from the command line via environment variables :sweat_smile: You can reference the __ENV object in the exported script options, so you can do something like this:

import http from 'k6/http';

export let options = {
    scenarios: {
        some_name: {
            executor: 'constant-arrival-rate',
            // __ENV values are strings, so parse the numeric options explicitly
            rate: parseInt(__ENV.arrival_rate, 10),
            timeUnit: '1s',
            duration: '1m', // you can also make this configurable from __ENV
            preAllocatedVUs: parseInt(__ENV.initialized_vus, 10),
        },
    },
};

export default function () {
    http.get('http://httpbin.test.k6.io/delay/3');
}

And then run k6 like this: k6 run --env arrival_rate=10 --env initialized_vus=50 script.js to get 10 RPS with 50 pre-allocated VUs, or whatever you want for the specific test run. You can even pass whole JSON scenario config strings as environment variables and then just do something like this:

export let options = {
    scenarios: {
        some_name: JSON.parse(__ENV.scenario_config_json),
    },
};
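
For example (the exact quoting depends on your shell, and this scenario JSON is just an illustration), such a run could look like this:

k6 run --env scenario_config_json='{"executor":"constant-arrival-rate","rate":10,"timeUnit":"1s","duration":"1m","preAllocatedVUs":50}' script.js

Just keep in mind that JSON.parse() will throw if the variable is missing or isn't valid JSON, so the script won't start without it.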