Arrival rate and batch for gRPC


Is it possible to set desired throughput for gRPC calls?
I tried to apply arrival_rate to gRPC calls in the same way as shown for HTTP requests in “Is K6 safe from the coordinated omission problem?”, but it seems this trick does not work for gRPC (details below).

Also, from the documentation links below I see that batch is supported only for HTTP. Is there a plan to have it for gRPC as well?

Thank you in advance, k6 is great!

I got 357.47731/s even though I specified --env arrival_rate=10. My script is:

import grpc from "k6/net/grpc";

export let options = {
    scenarios: {
        http_scenario: {
            executor: "constant-arrival-rate",
            rate: __ENV.arrival_rate,
            timeUnit: "1s",
            // required by the constant-arrival-rate executor
            duration: "10s",
            preAllocatedVUs: 10,
        },
    },
};

const client = new grpc.Client();
client.load([], "echo.proto");

export default () => {
  client.connect("localhost:8080", {
    plaintext: true,
  });

  const data = { message: "Hi" };
  const response = client.invoke("go_grpc_echo_pb.Echo/Send", data);

  client.close();
};

the output is:

$ /usr/local/bin/k6 run --duration 10s  --env arrival_rate=10 --out csv=k6_grpc_kpi.csv test_grpc.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: test_grpc.js
     output: csv=k6_grpc_kpi.csv (k6_grpc_kpi.csv)

  scenarios: (100.00%) 1 scenario, 1 max VUs, 40s max duration (incl. graceful stop):
           * default: 1 looping VUs for 10s (gracefulStop: 30s)

running (10.0s), 0/1 VUs, 3581 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  10s

    data_received........: 1.1 MB 107 kB/s
    data_sent............: 677 kB 68 kB/s
    grpc_req_duration....: avg=1.19ms min=797.75µs med=1.16ms max=5.22ms  p(90)=1.4ms  p(95)=1.49ms
    iteration_duration...: avg=2.78ms min=2.05ms   med=2.72ms max=12.14ms p(90)=3.14ms p(95)=3.37ms
    iterations...........: 3581   357.47731/s
    vus..................: 1      min=1 max=1
    vus_max..............: 1      min=1 max=1

The --duration CLI flag overwrites the options.scenarios config you have in the script, because CLI flags take precedence over the exported script options.

So, you are not running an arrival-rate scenario, but you are instead running a constant-vus scenario for 10 seconds. This is also what it says in the short test description you see in the k6 output:

           * default: 1 looping VUs for 10s (gracefulStop: 30s)

And because you don’t have any sleep() in your default function (nor should you, if it were actually an arrival-rate test!), you manage to produce 3581 iterations in these 10 seconds by continuously executing the default function.
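To get the fixed throughput you intended, drop the --duration flag and let the scenario itself define the test length (the scenario config needs a duration and preAllocatedVUs for the constant-arrival-rate executor; the exact values here are just an example). Something like:

$ k6 run --env arrival_rate=10 --out csv=k6_grpc_kpi.csv test_grpc.js

The executor will then start roughly arrival_rate iterations per second, regardless of how long each iteration takes, as long as enough VUs are pre-allocated.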


Great, now I see, thank you! It works perfectly both with gRPC and HTTP!

And what about the second question about batches?

Ah, sorry, I forgot to reply to this…

Unfortunately, we don’t have a plan for supporting something like http.batch() for gRPC soon.

I think it’s more likely we’ll focus on bringing event loop support (Global JS event loops in every VU · Issue #882 · loadimpact/k6 · GitHub) to every VU, which would allow both multiple concurrent unary gRPC requests from a single VU (which is what http.batch() does, just for HTTP requests) and support for streaming gRPC calls, along with a bunch of other nice things.
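For reference, this is roughly what http.batch() gives you on the HTTP side today, i.e. several requests issued concurrently from a single VU (a sketch against the public test.k6.io demo site; the URLs are just placeholders):

import http from "k6/http";

export default function () {
  // Fire three HTTP requests concurrently from a single VU;
  // there is currently no gRPC equivalent of this in k6.
  const responses = http.batch([
    ["GET", "https://test.k6.io/"],
    ["GET", "https://test.k6.io/news.php"],
    ["GET", "https://test.k6.io/contacts.php"],
  ]);
}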
