Can k6 send requests at a constant throughput?

Dear k6 experts,

Gatling (and other load-testing tools as well) supports sending requests at a constant throughput. Will k6 have a similar feature?

i.e:

This allows you to reason in terms of requests per second rather than in terms of users. It can also be defined at the scenario level. Throttle can take one or more of the building blocks described below.
(reachRps(target) in (dur unit))
Target a throughput with a ramp over a given duration
(jumpToRps(target))
Jump immediately to a given targeted throughput
(holdFor(duration))
Hold the current throughput for a given duration
E.g.: setUp(...).throttle(reachRps(100) in (10 seconds), holdFor(10 minutes))

Refer to: Gatling - Injection

Thanks,
Roy

You’re able to throttle requests to a certain number of requests per second with the rps option. You can pass it as the --rps command-line argument or set it in the options object of each test.

Since the rps option only caps requests once they reach that number, it doesn’t actually mean your test will hit that limit, especially if you have a low number of VUs and high response times. If you want to know how many VUs you would need to reach a certain number of requests per second, the blog post How to generate a constant request rate in k6 with the new scenarios API? was very informative.
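The sizing rule from that blog post can be sketched in plain JavaScript. The numbers below are illustrative, not measurements:

```javascript
// Rough VU sizing (from the linked blog post): each VU completes
// 1 / iterationDuration iterations per second, so sustaining a target
// request rate needs about rate * iterationDuration VUs.
function requiredVUs(targetRps, avgIterationSeconds) {
  return Math.ceil(targetRps * avgIterationSeconds);
}

// e.g. 300 RPS with ~1.01 s per iteration (response time + sleep(1)):
console.log(requiredVUs(300, 1.01)); // → 303
// doubling the rate doubles the VUs needed:
console.log(requiredVUs(600, 1.01)); // → 606
```

Note that the average iteration duration includes any sleep() calls, which is why long sleeps inflate the VU count you need.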

1 Like

wow, that’s great! Thanks!

I’m happy to say that we just released k6 v0.27.0, which solves the issue more elegantly than the old suggestions from that blog post. Now you can precisely specify the iterations per second (and thus, requests per second) you want, without any hacks with sleep(). And it will work equally well in k6 cloud and k6 run, in contrast to the --rps option, which has some caveats when running in the cloud.

Take a look at the release notes: Release v0.27.0 · grafana/k6 · GitHub
And at the documentation about the new scenarios option, especially the arrival-rate parts: Scenarios
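To mimic the Gatling throttle steps from the original question, the ramping-arrival-rate executor from those docs can be sketched roughly like this. The target URL and VU pool sizes are illustrative assumptions, not recommendations:

```javascript
import http from 'k6/http';

export let options = {
    scenarios: {
        throttled: {
            executor: 'ramping-arrival-rate',
            startRate: 0,
            timeUnit: '1s',
            preAllocatedVUs: 50,
            maxVUs: 200,
            stages: [
                { target: 100, duration: '10s' }, // ≈ reachRps(100) in (10 seconds)
                { target: 100, duration: '10m' }, // ≈ holdFor(10 minutes)
            ],
        },
    },
};

export default function () {
    http.get('https://test.k6.io/'); // hypothetical target URL
}
```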

1 Like

Hey @TotesOates and @royzhang007,

I have written a new blog post explaining the constant-arrival-rate executor to generate a constant request rate via the new scenarios API:

How to generate a constant request rate in k6 with the new scenarios API?

Good luck! :slight_smile:

wow, it’s great!!!
Thx @mostafa @ned !!!

1 Like

Hi @mostafa

Thanks for that post. I have a situation and need your advice.

Scenario 1:

export let options = {
    scenarios: {
        constant_request_rate: {
            executor: 'constant-arrival-rate',
            rate: 300,
            timeUnit: '1s', // 300 iterations per second, i.e. 300 RPS
            duration: '90s',
            preAllocatedVUs: 300, // how large the initial pool of VUs would be
            maxVUs: 1000, // if the preAllocatedVUs are not enough, we can initialize more
        }
    }
}

With the above config, I got the below result,

scenarios: (100.00%) 1 scenario, 1000 max VUs, 2m0s max duration (incl. graceful stop):
 * constant_request_rate: 300.00 iterations/s for 1m30s (maxVUs: 300-1000, gracefulStop: 30s)

    checks.....................: 99.79% ✓ 26607 ✗ 55
    data_received..............: 8.1 MB 88 kB/s
    data_sent..................: 4.0 MB 44 kB/s
    dropped_iterations.........: 339    3.71611/s
    http_req_blocked...........: avg=52.15µs min=1.67µs  med=2.85µs  max=71.37ms  p(90)=3.67µs  p(95)=4.4µs
    http_req_connecting........: avg=14.43µs min=0s      med=0s      max=59.58ms  p(90)=0s      p(95)=0s
    http_req_duration..........: avg=6.98ms  min=1.88ms  med=3.41ms  max=425.24ms p(90)=5.47ms  p(95)=9.65ms
    http_req_receiving.........: avg=50.38µs min=15.12µs med=36.35µs max=100.03ms p(90)=46.77µs p(95)=52.2µs
    http_req_sending...........: avg=31.37µs min=6.92µs  med=16.5µs  max=133.07ms p(90)=22µs    p(95)=26.86µs
    http_req_tls_handshaking...: avg=33.35µs min=0s      med=0s      max=71.07ms  p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=6.9ms   min=1.82ms  med=3.36ms  max=425.15ms p(90)=5.42ms  p(95)=9.55ms
    http_reqs..................: 26662  292.268228/s
    iteration_duration.........: avg=1.01s   min=1s      med=1s      max=2.56s    p(90)=1s      p(95)=1.01s
    iterations.................: 26662  292.268228/s
    vus........................: 362    min=300 max=362
    vus_max....................: 362    min=300 max=362

You can see that I was able to get 292 tps.

Scenario 2: (Increasing the rate to 600)

export let options = {
    scenarios: {
        constant_request_rate: {
            executor: 'constant-arrival-rate',
            rate: 600,
            timeUnit: '1s', // 600 iterations per second, i.e. 600 RPS
            duration: '90s',
            preAllocatedVUs: 300, // how large the initial pool of VUs would be
            maxVUs: 1000, // if the preAllocatedVUs are not enough, we can initialize more
        }
    }
};
  scenarios: (100.00%) 1 scenario, 1000 max VUs, 2m0s max duration (incl. graceful stop):
           * constant_request_rate: 600.00 iterations/s for 1m30s (maxVUs: 600-1000, gracefulStop: 30s)

time="2020-10-01T06:47:48Z" level=warning msg="Insufficient VUs, reached 1000 active VUs and cannot initialize more" executor=constant-arrival-rate scenario=constant_request_rate

    ✓ status is 200

    checks.....................: 100.00% ✓ 19005 ✗ 0
    data_received..............: 8.1 MB  85 kB/s
    data_sent..................: 3.1 MB  32 kB/s
    dropped_iterations.........: 34996   367.068097/s
    http_req_blocked...........: avg=4.85ms   min=1.83µs  med=3.03µs  max=499.14ms p(90)=4.9µs   p(95)=2.04ms
    http_req_connecting........: avg=799.49µs min=0s      med=0s      max=119.84ms p(90)=0s      p(95)=223.01µs
    http_req_duration..........: avg=3.61s    min=1.62s   med=2.31s   max=10.38s   p(90)=6.96s   p(95)=7.41s
    http_req_receiving.........: avg=179.72µs min=16.36µs med=38.51µs max=518.18ms p(90)=52.1µs  p(95)=60.6µs
    http_req_sending...........: avg=339.92µs min=7.68µs  med=18.36µs max=709.76ms p(90)=28.92µs p(95)=42.57µs
    http_req_tls_handshaking...: avg=2.82ms   min=0s      med=0s      max=217.67ms p(90)=0s      p(95)=1.67ms
    http_req_waiting...........: avg=3.6s     min=1.62s   med=2.31s   max=10.38s   p(90)=6.96s   p(95)=7.41s
    http_reqs..................: 19005   199.340758/s
    iteration_duration.........: avg=4.66s    min=2.69s   med=3.33s   max=13.4s    p(90)=8.06s   p(95)=8.44s
    iterations.................: 19005   199.340758/s
    vus........................: 1000    min=600 max=1000
    vus_max....................: 1000    min=600 max=1000

You can see that the request rate has dropped to 199, and http_req_duration has increased to 3.61 s.

I thought my backend was not able to perform under load. To double-check, I ran a 600 tps load using JMeter.

Interestingly, JMeter was able to achieve roughly 598 tps, and the downstream duration was 10 ms on average.

Then I ran 2 copies of k6 (scenario 1), and both generated 300 tps with a downstream latency of 15 ms.

My question is what’s wrong with the Scenario 2 configuration? Thanks for your help.

Regards
Selva

As you can see from this warning, the number of VUs you had configured was not enough. Increase preAllocatedVUs and maxVUs.

I think you might also have a sleep(1) somewhere, which is unnecessary in an arrival-rate scenario; it only ties up VUs needlessly. I suspect this because of the ~1 s difference between http_req_duration and iteration_duration.
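For completeness, here is a rough re-sizing of scenario 2, assuming the sleep(1) is removed and responses stay in the 10–15 ms range. The VU numbers are illustrative, not measured:

```javascript
export let options = {
    scenarios: {
        constant_request_rate: {
            executor: 'constant-arrival-rate',
            rate: 600,
            timeUnit: '1s', // 600 iterations per second, i.e. 600 RPS
            duration: '90s',
            // rule of thumb: VUs ≈ rate × average iteration duration;
            // with ~15 ms iterations only ~10 VUs are busy at any moment,
            // so 100 pre-allocated VUs leave plenty of headroom for spikes
            preAllocatedVUs: 100,
            maxVUs: 1000,
        }
    }
};
```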

1 Like