Odd results calling multiple APIs in a single group

Hey folks, I have a head-scratcher I need some help with. I have a script which I've designed (probably badly) to cover a simple use case.

Use Case

  1. call auth api to get a token
  2. use said token to call get_api(1)
  3. get currentValue (key:value) from the json response of get_api(1)
  4. use said currentValue to pass into get_api(1)_again when certain conditions are met (wait for it)
  5. I need get_api(1)_again to run at the rate defined in the scenario options for its exec (my script is pretty crazy for even crazier API logic under test)

Result
1. The call rate to get_api(1) is higher than the executor target. Yes, I know this is possible because the executor rate is iterations per second of the exec function and the HTTP request rate can differ; often the HTTP request rate (RPS) is higher than the defined iterations per second for the group or batch or whatever.
2. The call rate to get_api(1)_again is lower than the executor target. Yes, I know this is possible due to network effects anywhere downstream of the k6 client. (wait for it)
3. The average of get_api(1) and get_api(1)_again is dang near exactly the rate defined in the scenario options for this executor and exec.
4. MIND BLOWN!?!?

Sorry, I can't show the labels - the chart shows get_api(1) at 6.3 RPS and get_api(1)_again at 3.6 RPS (and change). Average RPS ~5???

Question

  1. How the frack do I ensure that api(1), and more importantly api(1)_again, runs as close as possible to the defined target of 5 iterations per second? When I use only 1 API call per group within the exec function, IPS matches RPS almost dead-perfectly with the arrival-rate executor(s). In this case I'm using ramping-arrival-rate. Do I have to play tricks with think time? Gawd I hope not… that would damage my love affair with ramping-arrival-rate. It lets me maintain my mastery of lazy engineering :smiley:
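For context, this is roughly what I mean by "one call per group per exec" where IPS tracks RPS almost perfectly (the scenario name, exec name, target, and endpoint below are made up purely for illustration, not from my actual script):

import http from 'k6/http';

export const options = {
	scenarios: {
		single_call: {
			executor: 'ramping-arrival-rate',
			startRate: 0,
			timeUnit: '1s',
			preAllocatedVUs: 25,
			maxVUs: 100,
			stages: [{ target: 5, duration: '5m' }],
			exec: 'single_call_only',
		},
	},
};

export function single_call_only() {
	// exactly one HTTP request per iteration, so RPS ≈ iterations per second
	http.get('https://test.k6.io/'); // placeholder endpoint
}

Anyway, here's the relevant part of the actual script: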
import http from 'k6/http';
import exec from 'k6/execution';
import { group, check, fail, sleep } from 'k6';
import { SharedArray } from 'k6/data';
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

const noDelta = '1234';
var current = null;
var api_token = null; // fetched lazily on the first iteration of peak_gate_rush()

const accounts = new SharedArray('accounts', function () {
	switch (deploymentEnvironment) {
		case "prod":
		case "stage":
			return JSON.parse(open(data_file + count)).IDS;
		case "test":
			return JSON.parse(open(data_file)).IDS;
	}
});
const accountInput = accounts[Math.floor(Math.random() * accounts.length)]; // specify value as there is no example value for this parameter in OpenAPI spec

export function get_api(api_token, sinceId) {
	var uri = `/api/${accountInput.ACCOUNTID}?sinceId=${sinceId}`;
	var url = urlbase_core + uri;
	const params = {
		headers: {
			'Header': 'Header ' + api_token,
		},
		tags: {
			name: 'get_delta',
		},
		timeout: request_timeout,
	};
	return http.get(url, params);
}

export function peak_gate_rush() {

	// expired = `${new Date() - new Date(exec.scenario.startTime)}`;
	// console.log(`${exec.scenario.name}: scenario ran for ${expired}ms`);
	if (api_token === null) {
		var res = get_auth_token(perm_value);
		// console.log(JSON.stringify(res));
		var res_json = JSON.parse(res.body);
		api_token = res_json['access_token'];
	}
	else if (api_token !== null && parseInt(exec.instance.currentTestRunDuration) % token_expiration <= 10000) {
		var res = get_auth_token(perm_value);
		var res_json = JSON.parse(res.body);
		api_token = res_json['access_token'];
		// console.log("Scenario:" + exec.scenario.name + " - Scenario runtime: " + exec.instance.currentTestRunDuration / 1000 / 60 + "VU Id: " + exec.vu.idInInstance)
	}

	// Below is the actual test case for the API endpoint
	group("get_delta", function () {
		var res = get_api(api_token, noDelta);
		var body = JSON.parse(res.body);
		current = body.currentId;
		if (current == 0) {
			current = 1;
		}
		var res = get_api(api_token, current - 1); //call kents with delta conditions

		if (
			!check(res, {
				'status code MUST be 200': (res) => res.status == 200,
			})
		) {
			console.log(JSON.stringify(res));
			fail('status code was *not* 200: ' + res.status);
		}
		myTrend.add(res.timings.duration);
	});
	sleep(randomIntBetween(thinktime1, thinktime2));
}

Options (yes, I know I don't need 4000 maxVUs for 5 RPS; just testing stuff before I let it rip)

export let options = {
    scenarios: {
        peak: {
            // peak scenario name
            executor: 'ramping-arrival-rate',
            startRate: 0,
            timeUnit: '1s',
            preAllocatedVUs: 25,
            maxVUs: 4000,
            stages: [
                { target: peak, duration: peak_ramp },
                { target: peak, duration: peak_sustain },
                { target: 0, duration: ramp_down },
            ],
            gracefulStop: after_peak_delay, // do not wait for iterations to finish in the end
            tags: { test_type: 'peak' }, // extra tags for the metrics generated by this scenario
            exec: 'peak_gate_rush', // the function this scenario will execute
        },
        gate_rush: {
            // gate_rush scenario name
            executor: 'ramping-arrival-rate',
            startRate: 0,
            startTime: start_delay,
            timeUnit: '1s',
            preAllocatedVUs: 25,
            maxVUs: 4000,
            stages: [
                { target: gate_rush, duration: gr_ramp },
                { target: gate_rush, duration: gr_sustain },
                { target: 0, duration: ramp_down },
            ],
            gracefulStop: after_peak_delay, // do not wait for iterations to finish in the end
            tags: { test_type: 'gate_rush' }, // extra tags for the metrics generated by this scenario
            exec: 'peak_gate_rush', // the function this scenario will execute
        },
    },
    discardResponseBodies: false,
    noConnectionReuse: true,
    noVUConnectionReuse: true,
    thresholds: {
        // we can set different thresholds for the different scenarios because
        // of the extra metric tags we set!
        'http_req_duration{test_type:peak}': [{ threshold: 'med<250', abortOnFail: tags_trigger_true_false, delayAbortEval: '30s' }],
        'http_req_duration{test_type:gate_rush}': [{ threshold: 'med<300', abortOnFail: tags_trigger_true_false, delayAbortEval: '30s' }],
        // we can reference the scenario names as well
        'http_req_failed{scenario:peak}': [{ threshold: 'rate < 0.05', abortOnFail: error_trigger_true_false, delayAbortEval: '30s' }],
        'http_req_failed{scenario:gate_rush}': [{ threshold: 'rate < 0.05', abortOnFail: error_trigger_true_false, delayAbortEval: '30s' }],
    },
    summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(75)', 'p(90)', 'p(99)'],
    insecureSkipTLSVerify: false,
    ext: {
        loadimpact: {
            projectID: 1
        }
    }
};

Thoughts? Thanks in advance.

Hmmm. get_api(1) and get_api(1)_again get closer together when I disable these two options:

  • noConnectionReuse: false,
  • noVUConnectionReuse: false,

Hi @PlayStay :wave:

I must admit that I struggled a bit to get to the bottom of what your exact objectives and struggles are :sweat: Still, the gist of my current understanding is: you have two different API calls that you would like to perform at rates that are as close to each other as possible. Does that sound even remotely correct?

Based on that assumption, I don't have a concrete solution, but I would suggest you look into the constant-arrival-rate executor, which is tailored for this kind of scenario. I've also noticed that you use a random sleep duration; it might be worth experimenting with handcrafted timings to get closer to your goal.
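For reference, a bare-bones constant-arrival-rate scenario looks something like this (the rate, duration, and endpoint are just placeholders):

import http from 'k6/http';

export const options = {
	scenarios: {
		steady_rate: {
			executor: 'constant-arrival-rate',
			rate: 5,            // iterations started per timeUnit
			timeUnit: '1s',
			duration: '10m',
			preAllocatedVUs: 25,
			maxVUs: 100,
		},
	},
};

export default function () {
	// a single request per iteration keeps RPS in lockstep with the configured rate
	http.get('https://test.k6.io/'); // placeholder endpoint
}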

Hello @oleiade, unfortunately I cannot share the use case in a forum this public ;(. However, I think I can restate my goal in a different, more generic manner. Think of it this way.

I need to make calls to get_api(noDelta) and get_api(currentRev-1), where get_api(currentRev-1) is called as a percentage of get_api(noDelta). Meaning: if I call noDelta 1000 times, I have a scenario where (currentRev-1) is called for 1% of those 1000 calls.

I have tried using the following % (modulus) logic to get 1% of calls, but it was not very accurate.

const deltaPercent=1;
const threshold = Math.floor(accounts.length / (100 / deltaPercent));

	group("get_entitlements_delta_percent", function () {
		var res = get_kents(api_token, noDeltaRevId);
		var body = JSON.parse(res.body);
		// console.log(JSON.stringify(body));
		if (
			!check(res, {
				'status code MUST be 200': (res) => res.status == 200,
			})
		) {
			console.log(JSON.stringify(res));
			fail('status code was *not* 200: ' + res.status);
		}
		if (accountInput.ACCOUNTID % threshold === 0) { // make this call for ~1% of the get_api(noDelta) transaction rate
			var currentRevId = body.currentRevisionId;
			var res = get_kents(api_token, currentRevId - 1);
			body = JSON.parse(res.body);
		}
	});

Any thoughts on a way to make an API request at a set target rate of, say, 1000 iterations per second, and then call the same API at 1% (or some variable percentage) of that target rate (1000 IPS in this case)?
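One alternative I've been toying with (just an untested sketch, reusing the deltaPercent constant and helpers from above) is a per-iteration random draw instead of keying off the account id, since the draw should hit the percentage in expectation regardless of how the ids are distributed:

	group("get_entitlements_delta_percent", function () {
		var res = get_kents(api_token, noDeltaRevId);
		var body = JSON.parse(res.body);
		// roughly deltaPercent% of iterations pass this random check
		if (Math.random() * 100 < deltaPercent) {
			var currentRevId = body.currentRevisionId;
			res = get_kents(api_token, currentRevId - 1);
			body = JSON.parse(res.body);
		}
	});

No idea yet how close that lands in practice, though.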

thx.