K6 is memory hungry when using several modules

Hi folks, hope you are all doing great.
We’ve been facing extraordinary memory usage by k6 in one of our load tests.
The load test has the following configuration:

  • VUs = ~8k
  • CPU = 10 cores
  • Memory = 60 GB
  • Compatibility mode = base
  • No output
  • No summary
  • No thresholds
  • A couple of imported internal modules (containing test data CSVs, and functions that hit HTTP endpoints, used by the load test to compose calls)

Even though we allocated 60 GB, our pod gets OOMKilled as it surpasses the memory limit.

We have tried tweaking several options (even GOGC) with no good results so far.

Our last guess is that k6/goja is doing something odd when libraries are imported by the load test.

Any help is appreciated.

Did you use SharedArray for these? Otherwise, every VU will have a copy of the data in memory and that tends to add up very quickly.
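To illustrate the point about per-VU copies, here is a rough sketch in plain Node (not k6, and not how k6 implements it internally): many references to one shared dataset cost almost nothing extra, while one copy per "VU" multiplies the dataset's footprint by the VU count. k6's `SharedArray` behaves like the shared case, keeping a single read-only copy for all VUs.

```javascript
// Plain Node sketch of why per-VU data copies add up.
const data = Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `user${i}` }));

// Shared: 100 "VUs" each hold a reference to the same single dataset.
const sharedVus = Array.from({ length: 100 }, () => data);

// Copied: 100 "VUs" each hold their own duplicate of every row.
const copiedVus = Array.from({ length: 100 }, () => data.map((row) => ({ ...row })));

console.log(sharedVus.every((vu) => vu === data));  // true: one dataset, many references
console.log(copiedVus.every((vu) => vu !== data));  // true: one dataset per VU
```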

Yes, we do use SharedArray. We have just tried copying and pasting the lib code directly into the load test, and it significantly reduces the amount of memory used.

Hmm, I don't understand exactly what you mean - can you sketch the difference with a script sample or something? Without any sensitive data, of course, just a demonstration of what the difference was.

Sure!
This is an example with the test importing libraries:

import { SharedArray } from 'k6/data';
import execution from 'k6/execution';
import { Rate } from 'k6/metrics';

// eslint-disable-next-line @typescript-eslint/no-var-requires
const { randomItem } = require('https://jslib.k6.io/k6-utils/1.1.0/index.js');

import { parseDataFile } from '@ifood/k6-loadtest-helpers';
import { AuthBody, authenticate, getSession, User } from '@ifood/k6-resource-account';
import { MERCHANT_LIST } from '@ifood/k6-resource-discovery';

const users = new SharedArray('Users', () => {
  return parseDataFile(usersFile) as any;
});

doSomething()…

The test above consumes a lot of memory (60+ GB), BUT if we copy and paste all the code from the referenced libraries directly into the test, memory usage drops to about half (~30 GB).
It seems something weird is going on in k6/goja to cause this kind of behavior when importing libraries. Although copying and pasting the code kind of solves the problem, it prevents reusing the same modules in other tests.

if we copy and paste all the code from the referenced libraries directly to the test, the execution drops the memory usage to half

This generally shouldn’t happen, so I am trying to understand why it does, since it could be some sort of a weird k6 bug :confused:

const users = new SharedArray('Users', () => {
  return parseDataFile(usersFile) as any;
});

Is usersFile just a path to the file with users data, or does it contain the whole contents of the file?

Looking at this, I would expect you are using webpack to generate the final script? It's very likely that this adds a ton of additional code. Are you using it when you copy the scripts, and if not - what happens if you do? If you are using it in both cases, you can at least check whether there is a difference in the size of the final script to begin with.

Also, I would recommend not mixing CommonJS and ESM, so changing

const { randomItem } = require('https://jslib.k6.io/k6-utils/1.1.0/index.js');

to

import { randomItem } from "https://jslib.k6.io/k6-utils/1.1.0/index.js";

While this currently works, the two are in practice incompatible :wink:

It's just the path; parseDataFile is in fact encapsulating the papaParse lib.

Yes, we are using webpack to generate the final script. The difference is around 200 KB more when importing instead of copying and pasting. But it's worth mentioning that the memory usage while running the test is not linear with the final artifact size.
I would imagine something like 200 KB * number of VUs would estimate the increase in memory usage, but unfortunately it doesn't. Memory consumption just explodes when using imports.
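For reference, the naive estimate above works out to surprisingly little (using the 8640 max VUs mentioned later in the thread), which shows how far the observed blow-up is from "bundle size times VUs":

```javascript
// Naive estimate: extra bundle size multiplied by the VU count.
const extraBundleKB = 200;  // measured bundle-size difference when importing
const vus = 8640;           // max VUs in this test
const naiveGB = (extraBundleKB * vus) / (1024 * 1024);
console.log(naiveGB.toFixed(2) + ' GB'); // ~1.65 GB, far below the observed usage
```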

I didn't quite get it - are you using webpack in both cases?

I would imagine it would be something like 200 KB * Number of VUs to estimate the increase in the memory usage

This is very … optimistic :). In reality this depends heavily on what those 200 KB of JavaScript code do, but if they are polyfills (likely), they create a bunch of objects per VU. Given that copying the code seems to work for you, this is likely not needed. As an anecdote, the core-js code that k6 used to ship was 81.7 KB, but removing it dropped around 2 MB from the memory usage per VU.

Did you use one of grafana/k6-template-typescript or grafana/k6-template-es6 to make your scripts? I remember we had discussions a long time ago about whether importing a bunch of stuff by default so it works for everybody is better, or whether we should keep it simple so that it takes fewer resources. I think we went with keeping it simple, but to be honest I could be wrong :person_shrugging:.

Sorry for not making it clear. I'm using webpack in both cases.
The weird thing is that most of the libraries are just function wrappers around k6/http to improve ergonomics and reuse by other developers. But we just found out that core-js is bundled by default by Babel in every single module… More or less like the scenario you've just described… :man_facepalming:
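For anyone following along, here is a sketch of how the core-js injection could be turned off, assuming @babel/preset-env is what pulls it in (this is only a config fragment; the exact place to put it depends on whether Babel runs via babel-loader in your webpack setup):

```javascript
// babel.config.js — sketch, assuming @babel/preset-env is the source of the
// core-js polyfills. useBuiltIns: false tells Babel not to inject any polyfills,
// which k6 scripts generally don't need since goja covers most modern syntax.
module.exports = {
  presets: [
    ['@babel/preset-env', { useBuiltIns: false }],
  ],
};
```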

We will exclude it and run the tests again and let you know. Thanks for the support so far!


Hey, we've just rerun the tests, with a mix of good and bad news.
The good news is a reduction in the initial memory consumption; the bad news is that we still see signs of a memory leak: after one hour the test reached 60 GB and got OOMKilled.

What are the test options? Are you increasing the VUs or iteration rates as the test progresses in a similar manner to the memory increase?

The options being used:

export const options = {
  scenarios: {
    dinnerHome: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: RAMP_UP_DURATION, target: MAX_VUS },
        { duration: CONSTANT_DURATION, target: MAX_VUS },
        { duration: RAMP_DOWN_DURATION, target: 0 },
      ],
      tags: {
        service: 'loadtest-dinner',
        testid: __ENV.TEST_ID,
      },
      exec: 'dinnerHome',
    },
    dinnerCheckout: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: RAMP_UP_DURATION, target: MAX_VU_CHECKOUT },
        { duration: CONSTANT_DURATION, target: MAX_VU_CHECKOUT },
        { duration: RAMP_DOWN_DURATION, target: 0 },
      ],
      tags: {
        service: 'loadtest-dinner',
        testid: __ENV.TEST_ID,
      },
      exec: 'dinnerCheckout',
    },
  },
};
The max VUs is 8640 and the ramp-up time is 30 minutes.

@bbarin,

You say that you aren't using thresholds and summary - does that mean that you actually provide the --no-thresholds and --no-summary options?

Also, in another question you asked about handleSummary - are you using that in this test?

Exactly, we explicitly provide --no-thresholds and --no-summary. We are not using handleSummary for now.

I can’t really come up with anything else but for you to try to do “binary search” about what is doing this:

  1. run only one of the scenarios - figure out which one is making the memory go up (it could be both; that is not a problem) - but remove one.
  2. go to 1 with half the code that you are left with.

At some point the code should be small enough for you to be better equipped to tell “this is what the problem is” or if not make a script you can share with us, so we can look at it.

I don't remember us ever having memory leaks, if you don't count collecting metrics forever, which should be solved by --no-thresholds --no-summary. And we do run a lot of tests, and I would expect we would notice if suddenly some of the long-running ones were using a lot of memory. But it obviously isn't impossible :person_shrugging:.

I have, though, seen user scripts where they were "caching" something, but it turned out that they neither used it later, nor did they have it bounded, so they were just adding to an array on each iteration. Maybe something similar is happening here as well.

Hope this helps you, and if anyone else has some other suggestion - please say so.

Hey, some news here…

We found out that the memory leak is in the URLSearchParams function…

[graph: memory usage over time, comparing runs with and without URLSearchParams]

The green line is the run using the function, and the subsequent lines are runs without it. I think it's worth considering removing it completely from the k6 utils.

Hope this helps other people :slight_smile:

Hi @bbarin , glad you fixed it :tada:

Can you provide a script that leaks for you? Maybe we can fix it :thinking:

Just to clarify: URLSearchParams from https://jslib.k6.io/url/1.0.0/index.js was the cause of the memory leak. We removed its usage and the memory leak is gone.
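For anyone hitting the same thing, a possible workaround sketch: skip the jslib polyfill entirely and build query strings with `encodeURIComponent`, which is a plain ECMAScript built-in (shown here in Node; the same function should work unchanged inside a k6 script):

```javascript
// Workaround sketch: build a query string without URLSearchParams.
// Only encodeURIComponent and basic Object/Array methods are used,
// so no polyfill library is involved.
function toQueryString(params) {
  return Object.entries(params)
    .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(String(value))}`)
    .join('&');
}

console.log(toQueryString({ merchant: 'abc 123', page: 2 }));
// merchant=abc%20123&page=2
```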