I’ve been given some specs that look something like:
Role 1 performs actions A, B, and C about 10k times a day
Role 2 performs actions D, E, and F about 15k–30k times a day
The network runs any combination of A, B, C, D, E, F in any order throughout the day.
Even though I am running multiple requests, I am thinking I should use the RPS (arrival-rate) method rather than the VU method? This traffic is arbitrary hits against API endpoints, not a sequential UI flow: Role 1 could be doing A and B while one Role 2 instance does D and another Role 2 instance does E and F, all at the same time for all I know.
I think the first step is to break "a day" down into a rate: Role 1 at 10k a day works out to 10,000 / 1,440 minutes ≈ 7 per minute.
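The conversion above can be sketched in plain JavaScript (the daily counts come from the spec; the function name is my own):

```javascript
// Convert a daily action count into per-minute and per-second rates.
function dailyToRates(perDay) {
  const perMinute = perDay / (24 * 60);      // 1,440 minutes in a day
  const perSecond = perDay / (24 * 60 * 60); // 86,400 seconds in a day
  return { perMinute, perSecond };
}

const role1 = dailyToRates(10000);
console.log(role1.perMinute.toFixed(2)); // prints "6.94" -> round up to ~7/min

const role2Peak = dailyToRates(30000);
console.log(role2Peak.perMinute.toFixed(2)); // prints "20.83" -> ~21/min
```

Worth noting: 30k a day for Role 2 at its peak is ~21 per minute, so the two roles need different rates, not just different request lists.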
Now I’d like to see how things run as close to "at the same time" as possible for the roles, i.e. how the network reacts if everyone does everything they tend to do throughout the day (or is that a bad idea?). I assume this means I should http.batch([A, B, C]) the requests for a role? And groups don’t run in parallel, do they?
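For what it's worth, my understanding is that group() only tags and organizes results, and requests inside it still run sequentially, whereas http.batch does fire its requests in parallel within one iteration. A sketch of what I mean for Role 1 (this is a k6 script, so it runs under `k6 run`, not plain Node, and the endpoint URLs are placeholders I made up):

```javascript
import http from 'k6/http';

export default function () {
  // http.batch sends these requests in parallel within one iteration,
  // approximating "Role 1 does A, B, and C at the same time".
  const responses = http.batch([
    ['GET', 'https://test.example.com/action-a'],
    ['POST', 'https://test.example.com/action-b', JSON.stringify({ id: 1 }),
      { headers: { 'Content-Type': 'application/json' } }],
    ['GET', 'https://test.example.com/action-c'],
  ]);
  // responses[0], responses[1], ... line up with the request array above.
}
```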
Now, how would I actually handle the different roles if I’d like to see how they stress the network when they are all hitting it at once? Should Role 1 and Role 2 live in different scripts that I run in parallel via bash & or docker-compose, or can they fit in one script where I assign different VUs/RPS to each role to split them up?
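If it helps frame the question: I believe k6's scenarios feature (v0.27+) is meant for exactly this, running multiple executors in parallel from one script, with constant-arrival-rate giving open-model RPS pacing per role. A sketch of what I imagine, assuming the rates worked out above (scenario names, durations, and VU pool sizes are my own guesses):

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    // Both scenarios start together and run in parallel.
    role1: {
      executor: 'constant-arrival-rate',
      rate: 7, timeUnit: '1m',   // ~10k/day
      duration: '30m',
      preAllocatedVUs: 5, maxVUs: 20,
      exec: 'role1',
    },
    role2: {
      executor: 'constant-arrival-rate',
      rate: 21, timeUnit: '1m',  // ~30k/day at peak
      duration: '30m',
      preAllocatedVUs: 10, maxVUs: 40,
      exec: 'role2',
    },
  },
};

export function role1() {
  http.batch([/* A, B, C requests */]);
}

export function role2() {
  http.batch([/* D, E, F requests */]);
}
```

That would avoid the bash &/docker-compose juggling entirely, if it works the way I think it does.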