I’m currently trying to figure out the best solution for a problem we’ve been seeing in some of our tests.
Specifically, we have GET tests that target endpoints serving very large file downloads. This is by design: we want to test how the application handles requests for large files at scale and how long it takes to process them. We always measure TTFB here, so we're not too bothered about the file download itself, except that it puts realistic strain on the server.
We also measure RPS, though, and in this situation it's affected drastically by the available bandwidth between the box running k6 and the server. The lower the bandwidth, the worse RPS gets, since k6 waits for the downloads to finish. This leads to inconsistent results between setups, i.e. it's the network that "failed" the test and not the application.
Has anyone come across anything like this before? Any best practice advice?
I suggest enabling the discardResponseBodies option globally, or setting responseType: "none" on individual requests. k6 will still read the whole response off the wire, but it won't store it in memory.
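For reference, a minimal sketch of both approaches (the URLs are placeholders, not your actual endpoints): discardResponseBodies set globally in options, with a per-request responseType override for any request where you still need to inspect the body.

```javascript
import http from 'k6/http';

export const options = {
  // Discard all response bodies globally. k6 still reads the bytes
  // off the wire (so TTFB and server load stay realistic), but the
  // body is not kept in memory.
  discardResponseBodies: true,
};

export default function () {
  // Large-download endpoint: body is discarded per the global option.
  http.get('https://example.com/large-file.bin');

  // Per-request override: re-enable the body only where you need it,
  // e.g. to run checks against a small API response.
  const res = http.get('https://example.com/api/status', {
    responseType: 'text',
  });
}
```

With this setup the downloads still consume bandwidth, so the RPS sensitivity to network capacity remains; it mainly protects the k6 box's memory when pulling large files.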
Other than that, I guess it depends on how your service works. @pawel's suggestion could work, but depending on how your backend behaves, interrupting the requests midway through might not stress it in the same way as reading through the whole response. If that's the case, I don't think you can find a workaround.