I’m currently trying to figure out the best solution for a problem we’ve been seeing in some of our tests.
Specifically, we have GET tests that target endpoints with very large file downloads. This is by design: we want to test how the application handles requests for large files at scale and measure how long they take to process. We always measure TTFB here, so we're not too bothered about the file download itself, other than the fact that it puts realistic strain on the server.
We also measure RPS, though, and in this situation it's drastically affected by the available bandwidth between the box running k6 and the server. The lower the bandwidth, the worse the RPS, as each VU waits for its download to finish before issuing the next request. This leads to inconsistent results between setups, i.e. it's the network that "failed" the test rather than the application.
Has anyone come across anything like this before? Any best practice advice?
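For context, here's a stripped-down sketch of the kind of script we're running (the endpoint URL, VU count, and threshold values are made up). `http_req_waiting` is k6's built-in TTFB metric, and note that `discardResponseBodies` only saves memory on the load generator, the body still travels over the network, which is exactly why bandwidth ends up capping RPS:

```javascript
import http from 'k6/http';

export const options = {
  vus: 50,            // hypothetical load shape
  duration: '5m',
  // Saves memory on the box running k6, but the response body is
  // still transferred over the network, so bandwidth still limits RPS.
  discardResponseBodies: true,
  thresholds: {
    // http_req_waiting is TTFB: time from request sent until the
    // first byte of the response is received. Example threshold only.
    http_req_waiting: ['p(95)<500'],
  },
};

export default function () {
  // Hypothetical endpoint serving a large file download.
  http.get('https://example.com/files/large.bin');
}
```

Thresholding on `http_req_waiting` rather than `http_req_duration` at least keeps pass/fail tied to TTFB, but the RPS inconsistency between environments remains.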