Different VUs = Same Requests per second i.e. load on the system

I have run a simple test that starts with 60 VUs, then ramps to 100, then 160, then 200. (Note that I am running the same test across 20 containers, so each container gets 1/20th of the target VUs. Here are the options for one of the containers:)

export let options = {
  stages: [
    { duration: '1m', target: 3 },
    { duration: '2m', target: 3 },
    { duration: '1m', target: 5 },
    { duration: '2m', target: 5 },
    { duration: '1m', target: 8 },
    { duration: '2m', target: 8 },
    { duration: '1m', target: 10 },
    { duration: '2m', target: 10 },
  ],
};
However, the number of requests per second stays pretty much the same throughout the test. The reason is that each VU runs iterations in a loop: as soon as one iteration finishes, the next one starts. With fewer VUs, response times are faster, so each VU completes more iterations. When you increase the number of VUs, response times slow down a little, each iteration takes longer, and the next iteration can't start until the last one has finished - so the end result is pretty much the same load on the server.
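To make the arithmetic concrete, here is a minimal sketch of that closed-model behaviour (the numbers are illustrative, not measurements from my test): throughput is capped at VUs divided by iteration duration, so if latency grows roughly in step with the VU count, requests per second stay flat.

```javascript
// Closed-model throughput: each VU starts a new iteration only when
// the previous one finishes, so requests/s ~= VUs / iteration duration.
function closedModelRps(vus, iterationSeconds) {
  return vus / iterationSeconds;
}

// Illustrative numbers: if latency scales with VU count, throughput is flat.
console.log(closedModelRps(50, 0.5)); // 50 VUs at 0.5 s/iteration -> 100 req/s
console.log(closedModelRps(200, 2)); // 200 VUs at 2 s/iteration -> still 100 req/s
```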

So what is the point of using different VU values if you end up with the same load on the server?

I am confused. I thought I was increasing the load by increasing the number of VUs, but the end result is the same load, as shown in Grafana here:

This is a drawback of using a closed workload model based only on the number of users. k6 introduced open workload models, based on the arrival rate, in k6 v0.27.0.

Using the ramping-arrival-rate executor lets you define the arrival rate (iterations started per unit of time) for each stage. k6 will attempt to dynamically scale the number of VUs to meet the target arrival rate if the system under test starts to respond more slowly.
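For example, a ramping-arrival-rate scenario might look like this (the rate targets below are made-up placeholders - pick values that suit your test):

```javascript
export const options = {
  scenarios: {
    ramping_rps: {
      executor: 'ramping-arrival-rate',
      startRate: 50,       // iterations started per timeUnit at the beginning
      timeUnit: '1s',
      preAllocatedVUs: 50, // VUs initialized up front
      maxVUs: 200,         // k6 may scale up to this many VUs to hold the rate
      stages: [
        { duration: '1m', target: 100 }, // ramp to 100 iterations/s
        { duration: '2m', target: 100 }, // hold 100 iterations/s
        { duration: '1m', target: 160 }, // ramp to 160 iterations/s
      ],
    },
  },
};
```

Note that here the stage targets are iteration rates, not VU counts, so the load on the server actually changes between stages.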

More info: https://k6.io/docs/using-k6/scenarios/executors/ramping-arrival-rate


Thanks for the advice, dan_nm - it helped me greatly.
