k6 CPU use during a test

We have a very basic test (the main part is pasted below): nothing but plain old HTTP requests with sleeps in between on the example journey.

When we run this test we see CPU spikes accompanied by a drop in memory usage, after which memory immediately climbs back to its previous level. This happens while the test is in steady state, not during init or ramp-up. It looks like GC, but GC that isn't needed, since the RAM fills straight back up (and RAM usage is otherwise level).

What is happening there, and can we tune the GC if that is what it is?

Many thanks,


import { exampleJourney } from "./exampleJourney.js";

export let options = {
    stages: [
        { duration: "7m", target: 3500 },
        { duration: "8m", target: 3500 },
    ],
};

export default function () {
    exampleJourney();
}

Hi @RobbieD,
This does look like GC.
Can you show me a graph of the memory and CPU usage, so we can try to spot anything strange there?
Also, does the CPU spike make the output data worse?
You can also run k6 with GC tracing enabled, as in GODEBUG=gctrace=1 k6 run scripts.js (with whatever flags you are using). This will produce a gctrace, which will show you how much GC is happening, when, and how … well, bad it is :) The Go documentation explains what the output means (search for gctrace, as I can't link it directly :frowning:). If you can paste some of it here (no more than 20-30 lines), I would be able to give you my opinion as well :slight_smile:
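For reference, both knobs are environment variables understood by the Go runtime that k6 is built with (the script name below is just a placeholder):

```shell
# Print a one-line summary to stderr for every GC cycle:
GODEBUG=gctrace=1 k6 run scripts.js

# GOGC tunes how aggressively the runtime collects; the default is 100.
# A higher value trades more memory for fewer GC cycles:
GOGC=200 k6 run scripts.js
```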

There is also a very good chance that this is because of your script. Every VU is a separate JS VM, so you will have 3500 of those, which both take a lot of memory (as you've probably noticed) and take a lot of CPU to run in parallel. Especially if you have long sleeps somewhere: while a lot of the VMs are inside the sleep the CPU load will be low, and outside it, it will be high.

If your case allows it, I would recommend lowering the sleep duration and decreasing the number of VUs :wink:

Thanks for getting back to me so quickly.
I’ve finished for the day but will get you some graphs etc on Friday.
If we decrease the sleep and VUs we unfortunately lose session length, so it's no longer realistic in terms of memory load on the target.
There’s very little in the way of logic etc in the scripts, so there’s not much I can do to alter it. It’s the model we need unfortunately.

Hi @RobbieD,

Additionally, in case you are not using it, you should probably use discardResponseBodies (you'll need to search for it, as I can't link you inside the table :man_facepalming:). This will significantly reduce the memory k6 allocates for big responses whose bodies you don't need.
For the ones you do need, you can use responseType (same deal … can't link you inside the table) on the individual requests whose bodies you need :).
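To illustrate both together — a sketch using k6's documented options; the URLs are placeholders:

```javascript
import http from "k6/http";

export let options = {
    // Drop all response bodies by default; k6 still measures timings.
    discardResponseBodies: true,
};

export default function () {
    // Body is discarded here: res.body will be null.
    http.get("https://big.example.com/huge-asset");

    // Opt back in per request when you actually need the body:
    let res = http.get("https://example.com/api", { responseType: "text" });
}
```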

If you feel adventurous, we have brand new functionality to be released with the next k6 release (0.26.0) … hopefully next week. Until then you will either need to compile it from source directly or, if using Docker, use the latest image tagged as master.

Both @nedyalko and I did some performance testing (1 2) with it, and the results are promising. Using my (or a similar) approach with webpack means you (hopefully) don't need to rewrite your code, but it will always be slightly less performant. Still, it's a good first step.
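For context, the usual webpack trick is to bundle only your own code and its npm dependencies, while leaving k6's built-in modules external so k6 resolves them itself at runtime. A minimal sketch (the exact settings may differ from the template we use):

```javascript
// webpack.config.js — bundle the test script for k6.
module.exports = {
    mode: "production",
    entry: "./src/main.js",
    output: {
        path: __dirname + "/dist",
        filename: "app.bundle.js",
        // k6 consumes the bundle as a CommonJS module:
        libraryTarget: "commonjs",
    },
    // Anything starting with "k6" (k6, k6/http, …) or an http(s)
    // URL is kept external rather than bundled:
    externals: /^(k6|https?\:\/\/)(\/.*)?/,
    target: "web",
};
```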

We intend to write much better documentation, and I will probably work some more on the webpack settings before/around the release. But if you have any questions, ask away, and we will gladly list you as the first "success story" if it works for you and you're up for it :wink:


Hi @mstoykov, thanks for the additional info. I've discarded the bodies and I can see a drop in RAM, but I still get the CPU spikes. I also built the latest version and tried webpack, but I got an error on app.bundle.js when I ran the test, so I wasn't able to try it, unfortunately. I was running outside of Docker, as I'm on Windows.

Hi @RobbieD,
Can you share the error, so maybe I can give some pointers on ways of fixing it (possibly)?
As long as you've downloaded and compiled the latest k6 from master, it shouldn't matter that you didn't use Docker :slight_smile: , it's just somewhat easier.
I take it you haven't run it with GC tracing enabled? For the record, as far as I know this "tracing" is practically free … it just writes metrics that are collected either way to stderr :slight_smile: