How to use VUs when I just want a set number of RPS in each stage?

I haven’t been able to understand the concept or the purpose of VUs, and I also haven’t been able to correlate the number of VUs to the number of requests per second. There is an rps option that can be set in the script’s options, but it isn’t what I am looking for:

The maximum number of requests to make per second, in total across all VUs

As part of my load testing, I want to send a fixed number of requests per second in multiple stages, something like a combination of smoke and stress testing. For example: send x requests per second for the first 1 hour, 2x requests per second for the next 10 mins, 3x for the next 10 mins, 4x for the next 10 mins, 5x for the next 20 mins, 3x for the next 20 mins, and x for the next 3 hours.

How do I generate a set number of RPS in stages, while also being able to use checks, thresholds, etc.?

VUs are just independent JavaScript runtimes; they run your test script in parallel, according to the executor type you picked for your scenario.

When you want to specify the request rate, the easiest way to do that with k6 is through an arrival rate executor. And, in your particular use case, the ramping-arrival-rate executor is the way to go. Try this example script to see how it works:

import http from 'k6/http';
import exec from 'k6/execution';

// You can modify this to change the options, or use environment variables
// (https://k6.io/docs/using-k6/environment-variables/) to inject it as a
// parameter of the script.
const baseRPS = 1;

export let options = {
    scenarios: {
        foo: {
            executor: 'ramping-arrival-rate',
            preAllocatedVUs: 100,

            // We start at 1 iteration per second, and because we do just a
            // single request in the iteration script (the default function
            // below), that's equivalent to 1 RPS (requests/s).
            startRate: baseRPS, timeUnit: '1s',
            stages: [
                // Continue running 1 iter/s for the first 10 seconds
                { target: baseRPS, duration: '10s' },

                // Immediately jump to 2 iter/s and run that for 10 more seconds
                { target: baseRPS * 2, duration: '0s' },
                { target: baseRPS * 2, duration: '10s' },

                // Gradually ramp up to 10 iter/s over 10s and hold that for 20 more seconds
                { target: baseRPS * 10, duration: '10s' },
                { target: baseRPS * 10, duration: '20s' },

                // Gracefully slow down to 1 iter/s again over the next 20s
                { target: baseRPS, duration: '20s' },
            ],
        },
    },
};

export default function () {
    console.log(`[t=${(new Date()) - exec.scenario.startTime}ms] VU{${exec.vu.idInTest}} runs iteration ${exec.scenario.iterationInTest}`);
    http.get(http.url`https://httpbin.test.k6.io/anything?iter=${exec.vu.iterationInScenario}`);
}

I’ve scaled down the time periods and rate values, so it’s easier to see what’s happening, but you can play around with the options and get the idea.
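For the exact profile you described (x RPS for 1 hour, then 2x, 3x and 4x for 10 minutes each, 5x for 20 minutes, 3x for 20 minutes, and back to x for 3 hours), the stages could look roughly like the sketch below. The value of x, the preAllocatedVUs number and the target URL are just placeholders; also, I've used instant jumps (the duration: '0s' entries) between levels, but you could ramp gradually instead:

import http from 'k6/http';

const x = 100; // baseline RPS - a placeholder, set it to your real x

export let options = {
    scenarios: {
        load_profile: {
            executor: 'ramping-arrival-rate',
            startRate: x, timeUnit: '1s',
            // Size this based on x and on how long one of your iterations takes
            preAllocatedVUs: 1000,
            stages: [
                { target: x, duration: '1h' },      // x RPS for the first hour
                { target: 2 * x, duration: '0s' },  // jump straight to 2x...
                { target: 2 * x, duration: '10m' }, // ...and hold it for 10 minutes
                { target: 3 * x, duration: '0s' },
                { target: 3 * x, duration: '10m' },
                { target: 4 * x, duration: '0s' },
                { target: 4 * x, duration: '10m' },
                { target: 5 * x, duration: '0s' },
                { target: 5 * x, duration: '20m' },
                { target: 3 * x, duration: '0s' },
                { target: 3 * x, duration: '20m' },
                { target: x, duration: '0s' },
                { target: x, duration: '3h' },      // back to x RPS for 3 hours
            ],
        },
    },
};

export default function () {
    // One request per iteration, so the iteration rate equals the request rate
    http.get('https://httpbin.test.k6.io/anything');
}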


You can think of it like this: VUs are the workers, the iterations are the work :smile:

Something needs to execute those console.log() and http.get() statements, and any other statements you have in your default (or other) JS function. That something is a JS runtime, i.e. the thing we call a VU (virtual user) in k6. You can have hundreds or thousands of them running your JS code in parallel.

Different executors are basically different ways of scheduling iterations to be executed by VUs. The arrival-rate executors try to start as many iterations as you have specified at any given moment, e.g. a constant-arrival-rate executor with the rate: 1, timeUnit: '1s' options will try to start 1 new iteration every second.

However, that iteration still needs a VU to run on. That’s what the preAllocatedVUs option is for - how big the pool of initialized VUs (ready to run any iteration) is going to be. Keep in mind that a single iteration might take more than a second, so you could end up with more than 1 VU running at any given point in time. How big preAllocatedVUs should be depends on how long your iteration takes to execute and at what rate you want to start iterations.
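As a rough rule of thumb (an approximation, not an exact formula - the numbers below are made up), you can size it like this:

// Back-of-the-envelope sizing for preAllocatedVUs:
const rate = 100;              // iterations you want to start per second
const avgIterationSeconds = 2; // how long one iteration of your script takes, on average
const headroom = 1.5;          // safety margin for occasional slow responses

// At any moment roughly rate * avgIterationSeconds iterations are in flight,
// and each of them occupies one VU, so:
const preAllocatedVUs = Math.ceil(rate * avgIterationSeconds * headroom); // 300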

Regarding your questions 3. and 4., the example you linked has thresholds per scenario and different functions that are executed for different scenarios :confused: I feel like I’m missing something, since that example precisely demonstrates what I think you are asking for :sweat_smile:
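In case it helps, here's a minimal sketch of that pattern - the scenario names, exec function names, URLs and threshold values below are placeholders, not taken from that example:

import http from 'k6/http';

export let options = {
    scenarios: {
        website: {
            executor: 'constant-arrival-rate',
            rate: 10, timeUnit: '1s',
            duration: '1m',
            preAllocatedVUs: 20,
            exec: 'browseWebsite', // run this exported function instead of the default one
        },
        api: {
            executor: 'constant-arrival-rate',
            rate: 50, timeUnit: '1s',
            duration: '1m',
            preAllocatedVUs: 100,
            exec: 'callAPI',
        },
    },
    thresholds: {
        // Per-scenario thresholds, using the built-in `scenario` tag
        'http_req_duration{scenario:website}': ['p(95)<1000'],
        'http_req_duration{scenario:api}': ['p(95)<300'],
    },
};

export function browseWebsite() {
    http.get('https://test.k6.io/');
}

export function callAPI() {
    http.get('https://httpbin.test.k6.io/anything');
}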


OK, thanks.

From what I know, target inside stages actually sets the number of VUs, not RPS. Or am I missing something here?

Is there a way to PRECISELY set and LIMIT the number of requests sent every second? I have had tests where 50 VUs set for 2 mins sent 41,000+ requests successfully to a stateless API. I want this control because I need to know how many concurrent requests (by users) my API can handle before requests start waiting for CPU time in the backend and start taking longer, so that I can scale the resources accordingly. But with this concept of VUs, the number of requests sent is still a mystery to me, also given that not all VUs kick off at the beginning - the VU count bumps up incrementally.

I actually wanted to delete my question 3, but somehow missed it. I will try writing groups and use the executor as shown in your example.

Again, this is exactly what the constant-arrival-rate and the ramping-arrival-rate executors allow you to do - PRECISELY set and LIMIT the number of iterations (and thus, requests) sent every second… I have a feeling we are talking past each other :confused:

Here’s a simpler example:

import http from 'k6/http';
import exec from 'k6/execution';

export let options = {
    scenarios: {
        foo: {
            executor: 'constant-arrival-rate',

            // 100 iterations per second, i.e. exactly 100 RPS, since each
            // iteration has just a single request
            rate: 100, timeUnit: '1s',

            // for how long do you want the scenario to run
            duration: '1m',

            // this number doesn't really matter, as long as it's high enough
            // that there is always a free VU to run an iteration on
            preAllocatedVUs: 200,
        },
    },
};

export default function () {
    console.log(`[t=${(new Date()) - exec.scenario.startTime}ms] VU{${exec.vu.idInTest}} runs iteration ${exec.scenario.iterationInTest}`);
    http.get(http.url`https://httpbin.test.k6.io/anything?iter=${exec.vu.iterationInScenario}`);
}

This script will send exactly 100 requests per second. What exactly are you missing? Please provide an example of a script that doesn’t work exactly how you want it to work…
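One way to sanity-check that the configured rate is actually being sustained is a threshold on the built-in dropped_iterations metric - the arrival-rate executors emit it whenever they can't find a free VU to start an iteration on, which is exactly when the real request rate falls below the configured one. A minimal sketch (same scenario as above, URL is a placeholder):

import http from 'k6/http';

export let options = {
    scenarios: {
        foo: {
            executor: 'constant-arrival-rate',
            rate: 100, timeUnit: '1s',
            duration: '1m',
            preAllocatedVUs: 200,
        },
    },
    thresholds: {
        // If any iteration couldn't be started on time because all VUs were
        // busy, this fails the test and tells you preAllocatedVUs was too low.
        dropped_iterations: ['count==0'],
    },
};

export default function () {
    http.get('https://httpbin.test.k6.io/anything');
}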
