k6's http_reqs metric undercounting actual RPS

Hi all, first time posting in this community. I’m currently using k6 to load test a service, and it appears the http_reqs metric is undercounting the RPS hitting the service. Using Prometheus to calculate the per-second change in http_reqs, I get about 1.2 million RPS. However, nginx (sitting in front of my service) says the RPS is 1.54 million. I run these load tests for more than 30 minutes, and throughout the whole duration the RPS is consistently undercounted. Does anyone know what the problem could be?

Please provide some more information: things like the k6 version, the OS, and the execution options you’re running k6 with. Are you using duration / stages, or some of the new arrival-rate executors?

Also, k6 doesn’t natively output to Prometheus yet, so how exactly are you calculating your RPS? And what does the end-of-test summary directly from k6 say about your http_reqs metric? It should be something like `<total> <per-second>/s`.

Hi ned, apologies for the lack of information.

k6 version: v0.26.2
os info: 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u5 (2019-08-11) x86_64 GNU/Linux
execution options:

    export const options = {
      stages: [
        // simulate ramp-up of traffic from 0 to target users over rampup seconds.
        { duration: 15, target: 50 },
        // stay at target users
        { duration: 10000000, target: 50 },
        { duration: 15, target: 0 }, // ramp-down to 0 users
      ],
      setupTimeout: '30s',
      discardResponseBodies: true,
      rps: 5000,
      batchPerHost: 10,
      batch: 10,
    };

We’re outputting to statsd, then using the statsd-to-Prometheus exporter to port the metrics to Prometheus. From there, we use irate and sum to calculate the per-second change in http_reqs like so:
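Roughly this kind of expression (the exact metric name depends on how the statsd exporter maps it; `k6_http_reqs` here is illustrative):

```promql
# per-second rate of the http_reqs counter, summed across all pods
sum(irate(k6_http_reqs[1m]))
```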

We typically run hundreds of Kubernetes pods with k6 on them to generate millions of QPS, so unfortunately we don’t run the test all the way until k6 stops executing and look at the end-of-test summary. I can try to do so if you think that will help with the debugging.

> unfortunately we don’t run the test all the way until k6 stops executing and look at the end-of-test summary

No worries, you’d have to go and manually sum the summaries for all of the pods anyway, which would be very tedious… Out of curiosity, how do you abort your test run? Because if it’s something like Ctrl+C (or straight up killing the pods), k6 is not going to count any in-flight requests at that time, while nginx likely will, even if they are interrupted before fully completing.

Something else I noticed: your duration configuration is using ints, not strings (e.g. "30s", "5m", etc.), and k6 interprets them as nanoseconds (10000000 ns == 0.01s). Not sure if those are the actual values you’re using, but while k6 can currently work with integer values for duration in the options, we haven’t documented it anywhere, since we consider it undefined behavior, almost a bug. We plan to fix that soon by unifying every place to treat integer durations as milliseconds and also accept string values. More details at https://github.com/loadimpact/k6/issues/1305
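For example, the stages from your snippet rewritten with explicit string durations (the concrete values here are just a guess at what you intended, assuming seconds and roughly an hour for the steady state):

```javascript
export const options = {
  stages: [
    { duration: '15s', target: 50 }, // ramp up to 50 VUs over 15 seconds
    { duration: '1h', target: 50 },  // hold at 50 VUs
    { duration: '15s', target: 0 },  // ramp down to 0 VUs
  ],
};
```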

I’d also suggest that you upgrade k6 to the latest v0.27.1 and try running your tests with that. There were very substantial changes to the way VUs are scheduled in k6 v0.27.0, including the addition of new options like gracefulStop and changes in how metrics are processed. gracefulStop and gracefulRampDown don’t exist in previous k6 versions, including the v0.26.2 you’re using, so k6 immediately stops a VU when the stages config instructs it to ramp down, or at the end of the test, with a strict cut-off that doesn’t emit the metrics of any in-flight requests after that point. You can check an old issue from someone with a similar problem: https://github.com/loadimpact/k6/issues/898
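In v0.27 terms, the equivalent configuration using the new scenarios API might look something like this (the durations are illustrative, not your actual values):

```javascript
export const options = {
  scenarios: {
    load: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '15s', target: 50 },
        { duration: '1h', target: 50 },
        { duration: '15s', target: 0 },
      ],
      gracefulRampDown: '30s', // let in-flight requests finish during ramp-down
      gracefulStop: '30s',     // let in-flight requests finish at test end
    },
  },
};
```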

Finally, I’m not familiar with Prometheus at all, but skimming the documentation of irate and seeing things like “This is based on the last two data points.” doesn’t fill me with confidence that you’re going to get an exact number with your current approach… :confused:

Apologies, I simplified the code a bit and omitted the s for seconds in the duration configuration; we usually have that in there, and these tests usually run for about an hour.
I actually chanced upon that GitHub issue while debugging. Unfortunately, the miscounting doesn’t seem to be only an end-of-test thing, but rather something that happens throughout the whole test.
I’ll look into upgrading to v0.27.1. Thanks for your help so far, and do let me know if you think of a possible reason.