Hi @mostafa

Thanks for that post. I have a situation and need your advice.

Scenario 1:

```
export let options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 300,
      timeUnit: '1s', // 300 iterations per second, i.e. 300 RPS
      duration: '90s',
      preAllocatedVUs: 300, // how large the initial pool of VUs would be
      maxVUs: 1000, // if the preAllocatedVUs are not enough, we can initialize more
    },
  },
};
```
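For context, the rest of the script is essentially just a GET with a status check and a 1-second sleep (the iteration_duration avg of ~1s below reflects that sleep; the URL here is a placeholder, not my real endpoint):

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  // Placeholder URL -- the real test hits an internal service
  const res = http.get('https://example.com/api');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // pause 1s at the end of each iteration
}
```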

With the above config, I got the following result:

```
scenarios: (100.00%) 1 scenario, 1000 max VUs, 2m0s max duration (incl. graceful stop):
* constant_request_rate: 300.00 iterations/s for 1m30s (maxVUs: 300-1000, gracefulStop: 30s)
checks.....................: 99.79% ✓ 26607 ✗ 55
data_received..............: 8.1 MB 88 kB/s
data_sent..................: 4.0 MB 44 kB/s
dropped_iterations.........: 339 3.71611/s
http_req_blocked...........: avg=52.15µs min=1.67µs med=2.85µs max=71.37ms p(90)=3.67µs p(95)=4.4µs
http_req_connecting........: avg=14.43µs min=0s med=0s max=59.58ms p(90)=0s p(95)=0s
http_req_duration..........: avg=6.98ms min=1.88ms med=3.41ms max=425.24ms p(90)=5.47ms p(95)=9.65ms
http_req_receiving.........: avg=50.38µs min=15.12µs med=36.35µs max=100.03ms p(90)=46.77µs p(95)=52.2µs
http_req_sending...........: avg=31.37µs min=6.92µs med=16.5µs max=133.07ms p(90)=22µs p(95)=26.86µs
http_req_tls_handshaking...: avg=33.35µs min=0s med=0s max=71.07ms p(90)=0s p(95)=0s
http_req_waiting...........: avg=6.9ms min=1.82ms med=3.36ms max=425.15ms p(90)=5.42ms p(95)=9.55ms
http_reqs..................: 26662 292.268228/s
iteration_duration.........: avg=1.01s min=1s med=1s max=2.56s p(90)=1s p(95)=1.01s
iterations.................: 26662 292.268228/s
vus........................: 362 min=300 max=362
vus_max....................: 362 min=300 max=362
```

You can see that I was able to get ~292 tps: 26,662 requests completed in 90s, and only 339 of the 27,000 target iterations were dropped.

Scenario 2 (increasing the rate to 600):

```
export let options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 600,
      timeUnit: '1s', // 600 iterations per second, i.e. 600 RPS
      duration: '90s',
      preAllocatedVUs: 300, // how large the initial pool of VUs would be
      maxVUs: 1000, // if the preAllocatedVUs are not enough, we can initialize more
    },
  },
};
```

```
scenarios: (100.00%) 1 scenario, 1000 max VUs, 2m0s max duration (incl. graceful stop):
* constant_request_rate: 600.00 iterations/s for 1m30s (maxVUs: 600-1000, gracefulStop: 30s)
time="2020-10-01T06:47:48Z" level=warning msg="Insufficient VUs, reached 1000 active VUs and cannot initialize more" executor=constant-arrival-rate scenario=constant_request_rate
✓ status is 200
checks.....................: 100.00% ✓ 19005 ✗ 0
data_received..............: 8.1 MB 85 kB/s
data_sent..................: 3.1 MB 32 kB/s
dropped_iterations.........: 34996 367.068097/s
http_req_blocked...........: avg=4.85ms min=1.83µs med=3.03µs max=499.14ms p(90)=4.9µs p(95)=2.04ms
http_req_connecting........: avg=799.49µs min=0s med=0s max=119.84ms p(90)=0s p(95)=223.01µs
http_req_duration..........: avg=3.61s min=1.62s med=2.31s max=10.38s p(90)=6.96s p(95)=7.41s
http_req_receiving.........: avg=179.72µs min=16.36µs med=38.51µs max=518.18ms p(90)=52.1µs p(95)=60.6µs
http_req_sending...........: avg=339.92µs min=7.68µs med=18.36µs max=709.76ms p(90)=28.92µs p(95)=42.57µs
http_req_tls_handshaking...: avg=2.82ms min=0s med=0s max=217.67ms p(90)=0s p(95)=1.67ms
http_req_waiting...........: avg=3.6s min=1.62s med=2.31s max=10.38s p(90)=6.96s p(95)=7.41s
http_reqs..................: 19005 199.340758/s
iteration_duration.........: avg=4.66s min=2.69s med=3.33s max=13.4s p(90)=8.06s p(95)=8.44s
iterations.................: 19005 199.340758/s
vus........................: 1000 min=600 max=1000
vus_max....................: 1000 min=600 max=1000
```

You can see that the request rate has dropped to 199 tps: only 19,005 of the 54,000 target iterations completed, 34,996 were dropped, and http_req_duration has increased to 3.61s.

I thought my backend was not able to perform under load. To double-check, I ran a 600 tps load using JMeter.

Interestingly, JMeter was able to achieve roughly 598 tps, and the average downstream duration was 10 ms.

Then I ran 2 copies of k6 (scenario 1) in parallel, and both generated 300 tps with a downstream latency of 15 ms.
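For reference, I launched the two copies roughly like this (the script filename is illustrative):

```shell
# Two independent k6 processes running the same scenario-1 script
k6 run scenario1.js &
k6 run scenario1.js &
wait  # block until both runs finish
```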

My question is: what’s wrong with the Scenario 2 configuration? Thanks for your help.

Regards

Selva