gRPC: Reuse connection?

The new gRPC support is great. Is there a way to reuse the client connection? Connecting in setup() and closing in teardown() doesn’t seem to work.
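For context, this is roughly what I tried (just a sketch; the proto path is a placeholder):

import grpc from 'k6/net/grpc';

const client = new grpc.Client();
client.load([], 'route_guide.proto'); // placeholder path

export function setup() {
    // setup() runs in its own VU context, so this connection
    // apparently isn't visible to the VUs running default()
    client.connect('127.0.0.1:10000', { plaintext: true });
}

export default () => {
    // throws, because this VU never connected
    client.invoke('main.RouteGuide/GetFeature', {
        latitude: 410248224,
        longitude: -747127767,
    });
};

export function teardown() {
    client.close();
}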

I’m trying to figure out some strange benchmarking results: I have a service exposed via gRPC and grpc-gateway, and I’m using k6 to benchmark both. The grpc-gateway results are slightly faster, which doesn’t really make sense: grpc-gateway calls the same gRPC endpoint as the native gRPC benchmark, and it’s all on localhost, so the grpc-gateway path should be at least as slow as the direct one. My guess is that Go’s http library is reusing the connection for the HTTP benchmark, while the native gRPC benchmark is repeating the TCP connection setup on each iteration.

Hi @ansel1, welcome to the community forum.

I think you mistyped it … but k6 is slower than something else, and you think it’s because k6 isn’t reusing the gRPC connection? This is somewhat … not true on a different level, but you can reuse the connection if you just don’t call close at all and only connect on the first iteration … or when/if you get disconnected, which will likely be trickier ;).

So for example, if you take this sample code from the repository and change it to:

import grpc from 'k6/net/grpc';
import { check } from 'k6';

const client = new grpc.Client();
client.load([], 'route_guide.proto'); // adjust to the sample's .proto path

export default () => {
    if (__ITER == 0) { // connect only on the first iteration
        client.connect("127.0.0.1:10000", { plaintext: true })
    }
    const response = client.invoke("main.RouteGuide/GetFeature", {
        latitude: 410248224,
        longitude: -747127767
    })

    check(response, { "status is OK": (r) => r && r.status === grpc.StatusOK });
    console.log(JSON.stringify(response.message))

    // client.close() // deliberately left commented out, so the connection is reused across iterations
}

And then, if you remove the sleeps in the server implementation that goes with it … it gets around 2-3x faster (it barely does anything, so :man_shrugging:).

This will likely need a wrapper around the invoke call to check that the connection hasn’t dropped for some reason, but other than that it should work fine :wink:
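Something like this, for example (just a sketch; invokeWithReconnect is a hypothetical helper, not part of the k6 API, and it uses the client and address from the sample above):

function invokeWithReconnect(method, request) {
    try {
        return client.invoke(method, request);
    } catch (e) {
        // assume the connection dropped: reconnect and retry once
        // (whether connect is safe to call again on a live client is an assumption)
        client.connect("127.0.0.1:10000", { plaintext: true });
        return client.invoke(method, request);
    }
}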

Hope this helps, and in the future we will likely have a better way to do this.

Awesome, thanks, I will give that a try. But I’m curious: what did you mean by “somewhat not true”?

To clarify my test setup: I have a single Go executable. It exposes a gRPC server on one port and a web server on another. The handler for the web server is grpc-gateway, a tool that makes gRPC services accessible via a REST-ish interface: it translates REST-ish requests (JSON over HTTP) into gRPC calls and sends them through a gRPC client connected to the gRPC server’s port. So even though grpc-gateway runs in the same process as the gRPC server, it still calls the gRPC server over the network, not in-process.

So my logic is: if an external client calls the gRPC server directly, that should be a bit faster than an external client sending a REST request to the grpc-gateway, since grpc-gateway is pure overhead.

I have two k6 scripts: one calls gRPC directly, the other calls the grpc-gateway. The grpc-gateway script was running faster (though only by a hair), which wasn’t the result I expected.
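For reference, the gateway-side script is plain HTTP, roughly like this (a sketch; the gateway port and route are placeholders, not my actual ones):

import http from 'k6/http';
import { check } from 'k6';

export default () => {
    // POST a JSON body to the grpc-gateway, which translates it into a gRPC call
    const res = http.post(
        'http://127.0.0.1:8080/v1/feature', // placeholder port and route
        JSON.stringify({ latitude: 410248224, longitude: -747127767 }),
        { headers: { 'Content-Type': 'application/json' } }
    );
    check(res, { 'status is 200': (r) => r.status === 200 });
};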

I assume that by “somewhat … not true on a different level” @mstoykov meant that the network connection for gRPC’s underlying HTTP/2-based transport might not be fully closed. I am not sure if that is the case or not, but I wouldn’t be surprised.

Did you try not closing the gRPC connection on every iteration? That should put the gRPC code on at least an equal footing with the HTTP requests k6 makes, since k6 uses keep-alive connections by default (though that can be disabled with the noConnectionReuse option).
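For completeness, that option goes in the script options, e.g.:

export let options = {
    // force a fresh connection for every HTTP request (the default is to reuse them)
    noConnectionReuse: true,
};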

The only other cause I can think of for the discrepancy you describe is the k6 gRPC marshaling and unmarshaling of messages. It’s unlikely to affect results that much, but the dynamic nature of the k6 gRPC implementation is going to be much less efficient than dedicated marshaling code generated by protoc.