Getting very low RPS

I have a scenario where I need to achieve 500 RPS on an API POST request, but I'm only getting 69. The API body has around 200 fields, and some of them are randomly generated. Below is the code I am using.

import http from 'k6/http';
import { SharedArray } from 'k6/data';
import exec from 'k6/execution';
import { uuidv4 } from 'https://jslib.k6.io/k6-utils/1.4.0/index.js';

const JWT_TOKEN = 'xyz';
const url = 'http://localhost:8053/api/test';
const n = 10000; // number of pre-generated payloads
// Pre-generate the request payloads so the hot path only has to serialize them.
function generateArray() {
    const arr = new Array(n);
    for (let i = 0; i < n; i++) {
        arr[i] = {
            "user": uuidv4(),   // randomly generated per payload
            "name": "sadghtdh",
            "ttc": uuidv4(),    // randomly generated per payload
            "atc1": "10",
            "atc2": "00",
            "currency_code": "AD",
            "currency_unit": 2,
            "amount": 3200,
            // ...around 200 fields in total
        };
    }
    return arr;
}

// Use a SharedArray (one read-only copy shared by all VUs) when SHARED=true,
// otherwise each VU builds its own copy of the data.
let data;
if (__ENV.SHARED === 'true') {
    data = new SharedArray('my data', generateArray);
} else {
    data = generateArray();
}
export let options = {
    discardResponseBodies: true,
    scenarios: {
        open_model: {
            executor: 'constant-arrival-rate',

            rate: 500,
            timeUnit: '1s',
            duration: '1m',

            preAllocatedVUs: 400,
            maxVUs: 10000,
        },
    },
};

export default function () {
    // Pick a payload based on this VU's iteration number; wrap around so the
    // index never runs past the end of the pre-generated array.
    const item = data[exec.vu.iterationInScenario % data.length];
    http.post(url, JSON.stringify(item), {
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${JWT_TOKEN}`,
        },
    });
}
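For reference, the SHARED switch above is read from an environment variable that is passed in with k6's -e flag, e.g. (the file name script.js is just a placeholder):

k6 run -e SHARED=true script.js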

Below is the result after running the test:

Hi @mdshaqib

There could be different issues at play here. However, the main issue at the moment seems to be that your requests take about 1 minute on average (http_req_duration), while the test only lasts 1 minute (duration: '1m'). That does not give k6 enough time to even record the results of the initial iterations, and many of them are interrupted, so you never get to the target rate. I also see a high http_req_failed in the results.

What I would do first is double-check whether the API latency without load really is around 1 minute. If it is, and that is the expected response time:

  • You will need at least 500 rps x 60 s of VUs (30,000 VUs, probably more) to reach your desired rate.
    • You can see a high number of dropped_iterations, which means k6 is trying to run more requests but there are no VUs available. So k6 is actually trying to increase the load, and it can't.
  • I would start with preAllocatedVUs already set to the number you need, and leave maxVUs at its default value (which is preAllocatedVUs). k6 needs resources (CPU, etc.) to provision new VUs in the middle of the test, and that will skew your results.
    • You can see that k6 started 5097 VUs. Going from the initial 400 to that number within 1 minute probably took a lot of resources, and it still was not enough to reach the required rate.
  • I would definitely increase the duration, so you have time to ramp up and, especially, to finish the iterations. A rough sketch of these changes follows this list.
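As a rough sketch of the points above, the scenario could look something like this. The concrete numbers (5-minute duration, 30,000 pre-allocated VUs, 2-minute gracefulStop) are assumptions you would tune to your environment, not values taken from your test:

export const options = {
    discardResponseBodies: true,
    scenarios: {
        open_model: {
            executor: 'constant-arrival-rate',
            rate: 500,
            timeUnit: '1s',
            // Longer test so iterations have time both to start and to finish.
            duration: '5m',
            // Roughly rate x response time in seconds (500 rps x ~60 s), if the
            // ~1 minute latency really is expected; tune to what you measure.
            preAllocatedVUs: 30000,
            // maxVUs omitted on purpose: it defaults to preAllocatedVUs, so k6
            // does not burn CPU provisioning new VUs in the middle of the run.
            // Give in-flight requests time to complete when the test ends.
            gracefulStop: '2m',
        },
    },
};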

There is another possible scenario here. If the response time of this endpoint is supposed to be well below 60s, then you are probably overwhelming it with this load, and you won't reach the target rate even with a longer duration and more VUs. If that is the case:

  • Have a look at the API metrics & logs while you run the test, to double-check this theory.
    • Are there any errors? Is there a rate limit on the API? Are there any resource limitations (CPU, memory, etc.)?
  • Start the test with fewer VUs, to see what the usual latency (http_req_duration) is without high load, and then ramp up users as explained here: What is Stress Testing? How to create a Stress Test in k6. That can help you detect at what point there is a bottleneck and the API breaks or starts being too slow. A minimal sketch of such a ramp is shown after this list.
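A minimal sketch of such a ramp uses k6's ramping-arrival-rate executor; the stage targets, durations, and the number of pre-allocated VUs are placeholders to illustrate the idea, not recommendations:

export const options = {
    discardResponseBodies: true,
    scenarios: {
        stress: {
            executor: 'ramping-arrival-rate',
            startRate: 10,          // begin well below the target rate
            timeUnit: '1s',
            preAllocatedVUs: 1000,  // placeholder; size it to the observed latency
            stages: [
                { target: 50, duration: '2m' },   // gentle warm-up
                { target: 200, duration: '5m' },  // increase the load gradually
                { target: 500, duration: '5m' },  // push towards the goal rate
                { target: 0, duration: '2m' },    // ramp down
            ],
        },
    },
};

Watching http_req_duration and http_req_failed during each stage shows at which arrival rate the API starts slowing down or returning errors.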

I can also recommend you have a look at these two entries in our documentation, if you haven’t already:

I hope this helps.

Cheers!
