K6 pod goes to "Evicted" status on the Kubernetes cluster after running the load test for a few minutes

Hi,

I have around 30Gi of memory on one of the nodes in my Kubernetes cluster. Even so, the k6 pod gets evicted due to "memory pressure", which I find really strange. All k6 is doing is firing multiple HTTP requests in batches to my server. Memory usage keeps increasing over time until the pod is evicted. I tried the `--discard-response-bodies` option, but it doesn't seem to help. Could someone help me out with this? I'm attaching the CPU usage and memory consumption graphs for reference.
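
For context, a minimal sketch of this kind of setup: `http.batch()` firing requests in parallel, with `discardResponseBodies` enabled in the script options (which has the same effect as the CLI flag). The URLs, VU count, and duration below are illustrative placeholders, not the actual test configuration:

```javascript
import http from 'k6/http';

export const options = {
  vus: 10,                      // placeholder VU count
  duration: '30m',              // placeholder duration
  discardResponseBodies: true,  // same effect as --discard-response-bodies
};

export default function () {
  // Fire several HTTP requests in parallel, as described above.
  http.batch([
    ['GET', 'https://test.example.com/endpoint-a'],
    ['GET', 'https://test.example.com/endpoint-b'],
  ]);
}
```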

Regards,
Sharath

Try running k6 with `--no-summary --no-thresholds` (see Optimize memory consumption of Trend Metrics · Issue #1068 · grafana/k6 · GitHub for details). Also, are you outputting metrics to InfluxDB? We recently found a memory leak in the code that does that: influxdb output seems to leak memory · Issue #1081 · grafana/k6 · GitHub
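
For example, combined with discarding response bodies, the invocation would look something like this (`script.js` here stands in for the actual test script):

```
k6 run --no-summary --no-thresholds --discard-response-bodies script.js
```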


Oh ok. No, I haven't set any thresholds, and I am not outputting metrics to any database either. So, as of now, it is not possible to get the overall summary, or at least the important metrics, for a long-running load test without memory issues. Is that correct?