Out of Memory with More Virtual Users

Hello Team,
I am using a simple script to post a transaction with 1000 VUs, with a data store of 250k records (a single column and 250k rows).
With 100 VUs it worked, but with 1000 VUs I am getting an out of memory error.
What do I do?
Please let me know if you need any additional information.
We were running a test with a data store that had 1.7 million records in Load Impact (version 3.0) without any issues.



Unfortunately, at the moment, when you open a file in k6, each VU has its own individual copy of that file in memory. There's also currently no way to avoid reading a whole file into memory at once and instead read it line by line, in a streaming manner. Both of these things are high on our roadmap: https://github.com/loadimpact/k6/issues/532, https://github.com/loadimpact/k6/issues/592, https://github.com/loadimpact/k6/issues/1021#issuecomment-493155860

What type of data store are you using? I'm asking because we've noticed that CSV parsing with some popular JS libraries like papaparse takes up a surprisingly large amount of RAM, so if that's the case for you, directly loading JSON or plain text files might be a partial short-term workaround.

There are other tricks you can use to reduce k6 memory usage (like discardResponseBodies and the upcoming --compatibility-mode=base option), but these won't fully make up for a huge static file being loaded in each VU. Unfortunately, until we fix the underlying issues, we're unlikely to support millions of data store records with lots of VUs on the same machine :disappointed: So until then, you'd need a bigger machine, smaller data store files, and/or fewer VUs per machine… :disappointed:
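For reference, discardResponseBodies is set in the script's exported options. A minimal sketch (the vus and duration values here are placeholders, not a recommendation):

```javascript
// k6 options sketch: skip storing HTTP response bodies to reduce
// per-request memory churn. Only enable this if your script doesn't
// need to inspect response bodies.
export let options = {
  discardResponseBodies: true,
  vus: 1000,       // placeholder value
  duration: "5m",  // placeholder value
};
```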

Thanks Nedyalko. I am using a CSV file, so I will try a JSON file instead. But even with a JSON file, each VU will still have its own copy of the file in memory, right? Correct me if I am wrong. Could you also tell me if you have an estimate for when this will be fixed?
Meanwhile, I will try working with your tricks.

For now, I was able to run a test with 250 VUs using a JSON file, which is taking 60GB of my memory. Eventually we will need to run a test with 7500 VUs.

Yes, that’s unfortunately true, for now.

In the next few months. The current priority is finally getting k6 v0.26.0 released (next Monday) and then finishing #1007 (hopefully early January). One of us will probably start working on the shared, streaming, read-only memory (i.e. data stores) immediately after that. It will probably take at least a few weeks since, as I pointed out in the CSV issue, there are some complexities involved and we need to design the APIs to be composable.