Test many endpoints simultaneously

Hey all,

I am trying to test all endpoints simultaneously; for now, let's say I have 3 endpoints.

If I try to do something like this, where the second endpoint always fails, it seems that the iteration never finishes. I suppose it fails and does not call the third endpoint? And does the same thing again and again.

// Imports needed by this snippet; Configuration (with the endpoint URLs) and
// timeGauge are defined elsewhere in the script.
import http from 'k6/http';
import { check, sleep } from 'k6';
import exec from 'k6/execution';

export default (data) => {
  // TODO: try to add all requests in one function
  const response = http.get(Configuration.api1, data.data);
  check(response, {
    'api1 status is 200': (r) => r.status === 200
  });

  const response1 = http.get(Configuration.api2, data.data);
  check(response1, {
    'api2 status is 200': (r) => r.status === 200
  });

  const response2 = http.get(Configuration.api3, data.data);
  check(response2, {
    'api3 status is 200': (r) => r.status === 200
  });

  timeGauge.add(new Date() - new Date(exec.scenario.startTime)); // reports in the summary how long the test has been running

  sleep(1); // After each iteration pause for 1 sec to see how many calls can be made to the API within the duration
};

So the problem is in the results: why is the iteration_duration MAXIMUM less than the http_req_duration MAXIMUM?

Could it mean that if a GET request fails, the function that made all those calls fails and reports how long it lasted, but some API endpoints are still being waited on until they return something?

And a second question: how can I make it so that a single failed call does not stop the iteration, and the iteration keeps going on to the next call? :question:

Hi @sillymoomoo !

Sorry for the super-long delay in the response :man_facepalming:

If I try to do something like this, where the second endpoint always fails, it seems that the iteration never finishes. I suppose it fails and does not call the third endpoint? And does the same thing again and again.

It shouldn’t work like that. A check won’t block the following requests. What could happen, though, is that the third request won’t start until the second one finishes. In other words, the order of the requests within an iteration is fully synchronous. If you need to do the requests in parallel, you could try batch( requests ).
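
For example, here is a minimal sketch of the same three calls using http.batch (assuming the same Configuration object with the endpoint URLs as in your snippet):

import http from 'k6/http';
import { check } from 'k6';

export default (data) => {
  // Fire the three GETs in parallel within the same iteration.
  // Configuration with the endpoint URLs is assumed to exist, as in the snippet above.
  const responses = http.batch([
    ['GET', Configuration.api1, null, { tags: { name: 'api1' } }],
    ['GET', Configuration.api2, null, { tags: { name: 'api2' } }],
    ['GET', Configuration.api3, null, { tags: { name: 'api3' } }],
  ]);

  check(responses[0], { 'api1 status is 200': (r) => r.status === 200 });
  check(responses[1], { 'api2 status is 200': (r) => r.status === 200 });
  check(responses[2], { 'api3 status is 200': (r) => r.status === 200 });
};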

So the problem is in the results: why is the iteration_duration MAXIMUM less than the http_req_duration MAXIMUM?

Does it show the same thing in the standard k6 CLI output? Could you please share it with all the metrics?
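
If max is not shown in your end-of-test summary, you can ask the CLI to include it for trend metrics with something like:

k6 run --summary-trend-stats "avg,min,med,max,p(90),p(95)" script.js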

Cheers!


@sillymoomoo for complete independence of the requests, you could separate each into its own function and then run the functions in parallel using different scenarios set with the same start time. I needed to do this for large services with many endpoints, where many fast endpoints had their number of executions limited by a couple of heavyweight/slow endpoints in the main function (single iteration). Grouping the fast and slow requests also makes it easier to set thresholds.
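
A rough sketch of that setup (the executors, VU counts, durations and thresholds below are placeholders, and Configuration is again assumed to hold the endpoint URLs):

import http from 'k6/http';
import { check } from 'k6';

export const options = {
  scenarios: {
    // One scenario per endpoint, all with the same start time so they run in parallel.
    api1: { executor: 'constant-vus', vus: 5, duration: '1m', startTime: '0s', exec: 'api1' },
    api2: { executor: 'constant-vus', vus: 5, duration: '1m', startTime: '0s', exec: 'api2' },
    api3: { executor: 'constant-vus', vus: 5, duration: '1m', startTime: '0s', exec: 'api3' },
  },
  thresholds: {
    // Thresholds can be set per scenario, e.g. stricter for the fast endpoints.
    'http_req_duration{scenario:api1}': ['p(95)<200'],
    'http_req_duration{scenario:api3}': ['p(95)<2000'],
  },
};

export function api1() {
  check(http.get(Configuration.api1), { 'api1 status is 200': (r) => r.status === 200 });
}

export function api2() {
  check(http.get(Configuration.api2), { 'api2 status is 200': (r) => r.status === 200 });
}

export function api3() {
  check(http.get(Configuration.api3), { 'api3 status is 200': (r) => r.status === 200 });
}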

If I had an always-failing request, I would consider whether it made sense to exclude it from the load test until it was fixed. That situation would have failed my smoke test, which would have prevented the main load test from running. I suppose that with it excluded, the question remains whether the service would be released anyway, regardless of the performance and error rate of the remaining endpoints.
