Hi @wandal, welcome to the community forum!
I take it that by 1G you mean 1 gigabit per second?
If so, I have also been able to do that when I was testing how many requests k6 can make in my local network to an nginx server. That also meant I was doing a lot of requests, and nginx was just serving static files. Both of those things are really “light”, so they take almost no time at all. If you are testing just over a single switch, k6 gets responses back really fast, and since it is supposed to loop and do more requests, that is exactly what it does.
So this will only be as slow as the combination of all the things involved, and having a somewhat fast connection just means you will be able to push more traffic through it, but you can still saturate it. After all, 1 Gbps is only 125 megabytes per second, so if you have 10-megabyte bodies and 3 VUs each doing 4 iterations per second, that is already 120 megabytes per second.
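To make that back-of-the-envelope math concrete, here is a tiny sketch (plain Node.js, not a k6 script; the body size, VU count, and iteration rate are just the illustrative numbers from above):

```javascript
// Rough bandwidth-saturation estimate for a load test.
const linkGbps = 1;                      // link speed in gigabits per second
const linkMBps = (linkGbps * 1000) / 8;  // 1 Gbps = 125 megabytes per second
const bodyMB = 10;                       // response body size in megabytes
const vus = 3;                           // number of virtual users
const itersPerVuPerSec = 4;              // iterations per second per VU

// Total traffic the test tries to pull through the link.
const demandMBps = bodyMB * vus * itersPerVuPerSec;

console.log(`demand: ${demandMBps} MB/s of a ${linkMBps} MB/s link`);
// With these numbers: 120 MB/s of a 125 MB/s link, i.e. nearly saturated.
```

The point is just that even a tiny number of VUs can fill a 1 Gbps link when the bodies are large and the server responds quickly.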
I would argue that this isn’t what will really happen if you are load testing an API call where some custom logic needs to generate the response (even if it’s a lot smaller) and possibly query a database, and so on. Arguably, your app might respond fast enough to a few calls to saturate the network, but with many separate concurrent users it might actually start to slow down and not even saturate the network. Or simply having more users means each one gets less network bandwidth. I can’t guess which of these you are hitting without the metrics k6 produces, and even then it will depend on knowledge of your app, so you will need to figure out what is going wrong on your own here.
Hope this helps, and good luck!