Figuring out exact RPS

I put the duration inside the test as part of the options object. I then ran k6 from the command line with `--vus 100`.
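For context, the script looked roughly like this. This is a minimal sketch, not my actual test; the endpoint URL is a placeholder:

```javascript
// Minimal k6 script sketch (hypothetical endpoint).
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  duration: '1m', // run length lives in the script; VU count comes from the CLI
};

export default function () {
  const res = http.get('http://localhost:8080/api'); // placeholder URL
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```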

    checks.....................: 100.00% ✓ 82609 ✗ 0    
    data_received..............: 912 MB  15 MB/s
    data_sent..................: 6.3 MB  105 kB/s
    http_req_blocked...........: avg=130.44µs min=1.2µs   med=2.5µs   max=107.95ms p(90)=3.5µs   p(95)=5.3µs  
    http_req_connecting........: avg=87.88µs  min=0s      med=0s      max=73.97ms  p(90)=0s      p(95)=0s     
    http_req_duration..........: avg=72.33ms  min=70.18ms med=72.35ms max=161.4ms  p(90)=73.13ms p(95)=73.54ms
    http_req_receiving.........: avg=70.47µs  min=20.1µs  med=59.4µs  max=28.52ms  p(90)=93.3µs  p(95)=113µs  
    http_req_sending...........: avg=13.56µs  min=4.6µs   med=10µs    max=1.54ms   p(90)=23.4µs  p(95)=28.9µs 
    http_req_tls_handshaking...: avg=0s       min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s     
    http_req_waiting...........: avg=72.25ms  min=70.07ms med=72.27ms max=161.33ms p(90)=73.04ms p(95)=73.43ms
    http_reqs..................: 82609   1376.813489/s
    iteration_duration.........: avg=72.55ms  min=70.26ms med=72.45ms max=184.1ms  p(90)=73.23ms p(95)=73.66ms
    iterations.................: 82609   1376.813489/s
    vus........................: 100     min=100 max=100
    vus_max....................: 100     min=100 max=100

Note that http_reqs is 1376.8/sec (is this the RPS?)

Then I ran a test where I put a single stage with 1m and 100 VUs, and got totally different throughput:

    checks.....................: 100.00% ✓ 41263 ✗ 0    
    data_received..............: 456 MB  7.6 MB/s
    data_sent..................: 3.1 MB  52 kB/s
    http_req_blocked...........: avg=174.74µs min=1.2µs   med=2.5µs   max=72.62ms  p(90)=3.5µs   p(95)=5.7µs  
    http_req_connecting........: avg=171.29µs min=0s      med=0s      max=72.53ms  p(90)=0s      p(95)=0s     
    http_req_duration..........: avg=72.33ms  min=70.16ms med=72.29ms max=359.61ms p(90)=73.18ms p(95)=73.69ms
    http_req_receiving.........: avg=73.79µs  min=22.7µs  med=62µs    max=9.3ms    p(90)=94.8µs  p(95)=111.6µs
    http_req_sending...........: avg=14.08µs  min=5µs     med=10.1µs  max=2.81ms   p(90)=25.3µs  p(95)=30.3µs 
    http_req_tls_handshaking...: avg=0s       min=0s      med=0s      max=0s       p(90)=0s      p(95)=0s     
    http_req_waiting...........: avg=72.25ms  min=70.11ms med=72.2ms  max=359.55ms p(90)=73.09ms p(95)=73.59ms
    http_reqs..................: 41263   687.715514/s
    iteration_duration.........: avg=72.6ms   min=70.23ms med=72.38ms max=359.69ms p(90)=73.29ms p(95)=73.86ms
    iterations.................: 41263   687.715514/s
    vus........................: 99      min=2   max=99 
    vus_max....................: 100     min=100 max=100

Note that http_reqs is 687.7/sec.

Questions:

  1. Why is the RPS so different between these two cases? I assumed they were just different ways of expressing the same thing.

  2. Is the rate value shown next to http_reqs the actual RPS? This is not mentioned in the docs, and neither is the rate next to iterations.

  3. If I run k6 tests across 2 different machines, route the output to InfluxDB, and visualize the RPS results through Grafana, can that be considered an accurate representation of the RPS generated against the API?

Hi @Priya,

Welcome to the forums! Regarding your questions 1 and 2, I recommend you read this article: How to generate a constant request rate in k6?. Once you've read it, you'll have your answer to the third question as well.
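For other readers landing here: in recent k6 versions, the approach from that article maps onto the `constant-arrival-rate` executor. A minimal sketch, assuming a placeholder endpoint:

```javascript
// Sketch of a fixed-rate test using the constant-arrival-rate executor.
import http from 'k6/http';

export const options = {
  scenarios: {
    constant_rps: {
      executor: 'constant-arrival-rate',
      rate: 100,            // start 100 iterations per timeUnit...
      timeUnit: '1s',       // ...i.e. 100 iterations/s
      duration: '1m',
      preAllocatedVUs: 100, // VU pool used to sustain that rate
    },
  },
};

export default function () {
  http.get('http://localhost:8080/api'); // placeholder URL
}
```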

I've read that article. I don't want to generate a constant RPS.

My first question remains the same :slight_smile: I believe I did the exact same thing in both scenarios: I ran the same number of VUs for 1m; it's just that some syntactic sugar was added in the latter. To be clear:

First Method:

k6 run script.js --vus 100 --duration 1m

Second Method

k6 run script.js
In the script I put a stage:

stages: [
    { duration: '1m', target: 100 },
]

These should be the same kind of test, but the second one gave me considerably less RPS (about 50% less) than the first.

On my third question:

My goal is to run across multiple machines, for example:
Machine 1: 100 requests at 50/sec
Machine 2: 200 requests at 75/sec

So can we assume the total number of requests that hit the endpoint is 300 and the total RPS is 125 (approximately)?
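Spelling out the arithmetic behind that assumption (the numbers are the hypothetical ones from above; note the sum of rates only holds while both tests are actually running at the same time):

```javascript
// Combining results from two hypothetical load-generator machines.
const machines = [
  { requests: 100, rps: 50 }, // Machine 1
  { requests: 200, rps: 75 }, // Machine 2
];

// Total requests is always a plain sum.
const totalRequests = machines.reduce((sum, m) => sum + m.requests, 0);

// Total RPS is a sum only for the window where both tests overlap.
const totalRps = machines.reduce((sum, m) => sum + m.rps, 0);

console.log(totalRequests); // 300
console.log(totalRps);      // 125
```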

As for my second question, I think I got the answer: the number next to http_reqs is the RPS. I verified that by checking on my app's end and comparing.

Hi @Priya,
This might not be particularly well explained in the stages documentation, but there is a comment in the example which explains that stages ramp up/down to the specified target over the duration, and that the run starts with 1 VU (if you haven't changed it).

If you just want to use stages and keep a constant number of VUs, you can either jump with a 0s duration to the desired VU count and then add another stage:

stages: [
    { duration: '0s', target: 100 },
    { duration: '1m', target: 100 },
]

or just start from 100 VUs via the vus option:

vus: 100,
stages: [
    { duration: '1m', target: 100 },
]

In your case, the second run ramped linearly from 1 to 100 VUs over that 1 minute. It didn't run with all 100 VUs the whole time like the previous run did, which led to it producing less RPS. You can see this in the vus metric at the end: you likely never had all 100 VUs running before the test ended, and the min shows 2 rather than 1 because this particular metric is only sampled roughly once per second.
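A back-of-the-envelope check makes this concrete, using the iteration duration reported in the output above (the rounding is mine):

```javascript
// Why a linear ramp from 1 to 100 VUs yields roughly half the RPS:
// average concurrency over the ramp is about (1 + 100) / 2 VUs.
const avgVus = (1 + 100) / 2;         // ≈ 50.5 concurrent VUs on average
const iterDuration = 0.0725;          // ~72.5 ms per iteration, from the output above
const expectedRps = avgVus / iterDuration;
console.log(Math.round(expectedRps)); // 697, close to the observed 687.7/s
```

The same estimate with a constant 100 VUs gives 100 / 0.0725 ≈ 1379/s, close to the 1376.8/s from the first run.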

You can see a list of builtin HTTP metrics and their meaning in the documentation.

On the third question: yes, approximately, but one of the two tests will run longer than the other, since you are doubling the number of requests without doubling their RPS, so the combined 125/s rate only holds while both tests are running :slight_smile:


Thanks for the explanation. Helps a lot :slight_smile: