Track transmitted data per URL with http.batch

I want to log per-request response body size to InfluxDB when using http.batch(). Following this example works-ish, but the timestamp of the metric is incorrect: it takes the .add() time (which has to happen in a loop after .batch() returns), not the request completion time. My batches take 30-60 sec and I am trying to get per-second resolution.

Is there a way to override the time in .add() or to use a per-request callback in .batch()?
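For reference, the pattern I'm describing looks roughly like this (URLs and the metric name are illustrative):

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

// Custom metric for per-request body size.
const bodySize = new Trend('response_body_size');

export default function () {
  const responses = http.batch([
    ['GET', 'https://example.com/images/1'],
    ['GET', 'https://example.com/images/2'],
  ]);
  // These .add() calls can only run after the whole batch returns,
  // so every sample gets the post-batch timestamp instead of the
  // completion time of its own request.
  for (const res of responses) {
    bodySize.add(res.body.length, { url: res.url });
  }
}
```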

Hi @jpambrun, welcome to the forum :slight_smile:

That’s an interesting use case that’s unfortunately not currently supported with http.batch(). You might be able to work around it by increasing the number of VUs and using plain http.get(), as that will give you the same parallelization as batch() and you can use .add() as usual, but it won’t be as synchronized and you’d have to rewrite that part of your test, so it’s not ideal.

An improved version of the HTTP API is being planned to resolve other often requested issues, and we’ll consider making this a part of it, but we currently don’t have a timeline for it. This would probably require some type of callback as you mention.

Thanks @imiric for the quick response.

The default per-request metrics all work correctly with batch(). They are reported at the correct time, not only at the end. It’s a bit surprising that response size is not part of them. It’s also odd that bytes received are counted only at the end of a VU iteration.

Why not add response size as a default metric? You are already piping the stream to ioutil.Discard, so why not count bytes there?

The data_sent and data_received metrics measure the bytes sent and received on the wire for each VU iteration. We can’t really measure the data for each HTTP request individually, since HTTP/2 can multiplex multiple requests over the same connection. And the headers+body length is not actually the number of bytes that were sent, because of compression, anyway…

Why not add response size as a default metric? You are already piping the stream to ioutil.Discard, so why not count bytes there?

Because we don’t have a way for users to filter out metrics they don’t need or opt into metrics they do need yet (https://github.com/loadimpact/k6/issues/1321), each new metric has some overhead. Can you share your use case and why precisely you need to know the response body length in your load tests?

I am working on a medical image viewer. It’s not uncommon for CT scans to have 2,000-3,000 images, each about 200-300 kB in size, and those need to be downloaded as quickly as possible. I am trying to use k6 to benchmark server performance. A typical user would have 24-40 concurrent downloads.

I was trying to set batch/batchPerHost to what a typical user would use and then scale VUs to estimate the number of users the service can handle. I am streaming results to InfluxDB and I wanted to have the download throughput as well. Unfortunately, I just can’t get the download throughput, because I can only have timepoints at the end of a VU iteration, which are 15-30 sec apart. I would have to average over 10 minutes or more to get a somewhat accurate result.
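In other words, something like this options sketch, where the numbers are just examples of what I was tuning:

```javascript
export let options = {
  batch: 30,        // matches a typical user's concurrent downloads
  batchPerHost: 30, // same limit per host
  vus: 10,          // scaled up to estimate server capacity
};
```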

I am not super familiar with the codebase, but isn’t resp.Body here (and below in the other response type) the compressed response stream? Can we just count bytes from that stream?

In the end I followed @imiric’s advice and am now using http.get() in a loop, treating VUs as concurrent connections. It’s also a bit easier to ramp up. With that said, I think it would be nice to have a per-request length metric if it’s possible to count it from the body stream.
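Roughly what I ended up with (the URL and metric name are placeholders):

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

const bodySize = new Trend('response_body_size');

// Each VU acts as one concurrent connection; each iteration fetches
// one image, so the Trend sample lands close to the request's
// actual completion time instead of the end of a long batch.
export let options = { vus: 30, duration: '10m' };

export default function () {
  const res = http.get('https://example.com/images/next');
  bodySize.add(res.body.length, { url: res.url });
}
```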