Shared state or unique sequential VU index per scenario

Hello,

I wish to run scenarios where I first set up the users, data sources, etc. in the system. This setup is typically external to the k6 run.

This is followed by running k6 with parallel scenarios that ramp up various numbers of these pre-existing users, data sources, etc. as different types of VUs. For this, I need to be able to read the configs of the existing users and assign a separate config to each VU. Optimally, I would be able to share some list of available users/data sources/etc. and just pick an available config for each new VU.

However, I do not see any way to share data across VUs. Is this possible?

If not (I see there is a 2+ year old open GitHub issue on this), is it possible to get the VU “index” value within the scenario? Right now, I believe the set of VUs is global, and picking a specific list item for the data sources, users, or whatever is not possible, as I have no idea what VU index 10 means in the context of the overall VUs (maybe VU 7 is a data source, so index 10 is off by one for users, for example).

Shared state would allow me to address all of this nicely by just reading a list of the available items and picking one whenever the list is non-empty.

If this is not possible at this time, do you have any suggestion to address this need?

Thanks

Hi @k6jonne,

As far as I understand, you want to load a file from disk that is some representation of an array, and have each VU take one element from the array?

Depending on whether you are fine with either having as many VUs as there are elements in the array, or each VU getting a fixed set of rows, instead of each VU getting the “next” unused element, k6 can currently help you … or not.
If you are fine with that, you can use __VU, which goes from 1 to the maximum number of VUs in the instance [1] (see the bottom of this post).
The basic code will be something like:

var data = JSON.parse(open("./file.json")); // this is the array as JSON
const maxVUs = 10; // this can be data.length

export let options = {
  duration: "10m",
  vus: maxVUs,
};

export default function() {
  var el = data[__VU - 1]; // works when maxVUs is data.length
  // Otherwise you can do some simple math using __ITER and __VU to
  // get the next element for each iteration, and either loop over
  // the array or sleep when the last element is used.

  // do something with el
}
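
As a rough sketch of that __ITER/__VU math (reusing data and maxVUs from the snippet above, and simply wrapping around instead of sleeping), each VU could advance through the array on every iteration like this:

export default function() {
  // VU k starts at element k-1 and advances by maxVUs every iteration,
  // wrapping around once the end of the array is reached.
  var idx = ((__VU - 1) + __ITER * maxVUs) % data.length;
  var el = data[idx];

  // do something with el
}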

If neither of these is fine with you (and it sounds like you likely need this), I highly recommend reading the whole thread, especially the later comments.

Hope this helps you and that I don’t have any script errors ;).

[1] This is not shared between instances in the k6 cloud, and you will have problems if you use an arrival-rate executor with preAllocatedVUs smaller than maxVUs, as k6 will only allocate more than the preAllocatedVUs once they are needed, which means you will first run with only the preAllocatedVUs and they will then increase.
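
For reference, the arrival-rate situation in [1] corresponds to a configuration roughly like this sketch (the executor choice and the numbers are made up), where only the pre-allocated VUs exist at the start and __VU values above preAllocatedVUs only show up once the target rate demands more VUs:

export let options = {
  scenarios: {
    load: {
      executor: "ramping-arrival-rate",
      startRate: 10,        // iterations per timeUnit at the start
      timeUnit: "1s",
      preAllocatedVUs: 10,  // initialized up front
      maxVUs: 50,           // extra VUs are only allocated when the rate needs them
      stages: [{ target: 100, duration: "10m" }],
    },
  },
};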

Thanks for the pointers!

We currently use the [__VU-1] indexing approach, but this is what is causing the issues. Say we run two parallel scenarios, where

  • scenario 1 uses VUs of type1 and wants to index the array “users1”.
  • scenario 2 uses VUs of type2 and wants to index the array “users2”.

As far as I can see, the two scenarios will share the VU numbers, which causes the indices to mismatch, and there is no way for me to know what the correct index into users1 is and what the correct index into users2 is, since the VUs in scenario1 and scenario2 both increment the same VU counter.
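
To make it concrete, a setup roughly like the sketch below (the file names, exec functions, and VU counts are made up) is what we have in mind; since __VU is numbered across the whole test, scenario2’s VUs may for example get numbers 6-10, so users2[__VU - 1] points at the wrong element or past the end of the array:

var users1 = JSON.parse(open("./users1.json"));
var users2 = JSON.parse(open("./users2.json"));

export let options = {
  scenarios: {
    scenario1: {
      executor: "per-vu-iterations",
      vus: 5,
      iterations: 100,
      exec: "type1",
    },
    scenario2: {
      executor: "per-vu-iterations",
      vus: 5,
      iterations: 100,
      exec: "type2",
    },
  },
};

export function type1() {
  // only correct if scenario1 happens to get the global VU numbers 1..5
  var user = users1[__VU - 1];
  // ...
}

export function type2() {
  // scenario2's VUs may be numbered 6..10, so this index is off or out of range
  var user = users2[__VU - 1];
  // ...
}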

I guess the only option is to run the scenarios as separate k6 instances, although for metrics, reports, and probably various other reasons this is not quite optimal.

The thread you reference points to the issue Improve execution information in scripts · Issue #1320 · grafana/k6 · GitHub, which seems useful. It would help if we could also have execution context information such as which scenario we are in, the VU number inside the scenario, etc.

I guess I have to try to simplify the approach and run the parallel instances for now. For the long term, some approach to shareable state, even if just simple primitives to start with, would be nice. Although I can see from the discussions that it would be complicated from the cloud perspective, so I understand your reluctance :slight_smile: