Long setup times when implementing unique VUs

Hi,

So currently, I have a test set up so that the setup phase grabs X users' worth of data from my JSON file, logs them into my app to grab a unique, dynamic auth token for each, and tosses those tokens into an array so that I can use them to perform actions during the actual test.

However, I notice that when running the test with lots of users (100-1000 VUs), the setup phase can take up to 18 minutes for 1000 VUs. Is there any way to decrease the time it takes to set up?

Hi @TotesOates,

From what you are explaining, you have something like:

// imports
var data = JSON.parse(open("data.json"));

export function setup() {
  // processing: log each user in and collect the tokens
  return array_of_tokens;
}

export default function (array_of_tokens) {
  var token = array_of_tokens[__VU - 1]; // __VU starts at 1
  // do something with the token
}

(the code was written in here, so there might be syntax errors :wink: )

Each VU is its own JS VM with every variable, function, and so on, so there will be as many copies of that data as there are VUs (plus one more) in memory. Depending on how big your data is, this might be the reason: not only is there a copy of it per VU, k6 also needs to parse that file for each VU (as well as all the JS files you have). You can run with -v to see how long the initializations take.

In general, this doesn't really get solved in any other way than reducing the amount of work k6 needs to do - either by having smaller/fewer files, or at least by running with --compatibility-mode=base, which drops the core-js@v2 polyfills that also slow down the initialization (and take memory), at the price of needing to use ES5.1 syntax instead of ES6. You can look at k6-es6 as a way to transpile ES6 -> ES5.1 outside of k6, which might save you some time and … rewriting.
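For example, a run that shows the per-VU init timing and uses base compatibility mode might look like this (the script name is just a placeholder):

k6 run -v --compatibility-mode=base script.js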

If in setup you are doing multiple (or slow) requests, this can also be why setup itself takes a while. This can be alleviated by using http.batch.
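For example, here is a minimal sketch of batching the logins in setup(), assuming a JSON array of credentials and a login endpoint that returns the token in its JSON body (the URL, field names, and response shape are assumptions about your app):

import http from "k6/http";

var users = JSON.parse(open("data.json"));
var LOGIN_URL = "https://my-app.example/login"; // placeholder

export function setup() {
  // build one login request per user and fire them all at once
  var requests = users.map(function (u) {
    return [
      "POST",
      LOGIN_URL,
      JSON.stringify({ username: u.username, password: u.password }), // placeholder field names
      { headers: { "Content-Type": "application/json" } },
    ];
  });
  var responses = http.batch(requests); // this should probably check the status codes
  return responses.map(function (res) {
    return res.json().token; // assumes the token is in a "token" field
  });
}

export default function (tokens) {
  var token = tokens[__VU - 1]; // __VU starts at 1
  // do something with the token
}

http.batch issues the requests in parallel (up to k6's batch connection limits), so the setup time gets closer to the slowest login rather than the sum of all of them.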

Other strategies might include:

  1. getting the data.json only in setup, through an additional HTTP call, so it isn't loaded in each VU. This will again help if the data is somewhat big (big is very relative, but if it exceeds 1 MB, it will balloon once it gets translated to structs from the JSON)
  2. moving the getting of the token inside the first iteration of each VU, which will automatically parallelize it. This, though, means that each VU will still have a copy of the data. Combining the two looks something like the sketch below:
var http = require("k6/http");

exports.setup = function () {
  return http.get(url_to_data).json(); // this should probably check the status code
};

var token;

exports.default = function (data) {
  if (__ITER == 0) { // or token === undefined?
    var relevant_data = data[__VU - 1]; // __VU starts at 1
    // log in with relevant_data to get a token
    token = new_token;
    return;
  }
  // usual default function
};

This code was still written in here, so it might have syntax errors as well :wink:

This is what ES5.1 looks like and how you can combine some of the above. Good luck!

Thank you for all the info! It was very helpful!

I was able to batch the login requests and it practically cut down my setup time for 100 VUs from 2 minutes to < 2 seconds. Haven't tried 1000 VUs yet, but I'm pretty sure I'll see the same sort of gains. I also minified and transpiled to ES5, and I noticed around a 25% decrease in RAM usage!