K6 spends a long time at "status 9"

I’m trying to use K6 to run some performance tests. My setup is four separate servers (32 CPUs / 64 GB of memory each) to make sure everything has enough resources: one server each for Grafana, InfluxDB, K6, and the application under test. I’m sending the results from K6 to InfluxDB and then using Grafana to visualize them.

The problem is that once the K6 test run has completed (the CLI reports: default ✓ [ 100% ] 10000 VUs 10s), it sits still for 1-2 minutes before finally exiting. I first thought it was a data-ingestion issue, but my InfluxDB instance is almost idle and doesn’t seem to be receiving much data from the K6 instance.

If I run k6 status while it’s in this “hung” state, I get the following:

status: 9
paused: "false"
vus: "0"
vus-max: "10000"
stopped: false
running: false
tainted: false

So it’s not paused, not stopped, and not running, but it’s still not making any progress. The only thing I can observe is that it’s stuck at “status 9” for those minutes, and I can’t find any documentation on what the different status numbers actually mean.

Any guidance on how to troubleshoot this issue is very much welcome :slight_smile:

Tracing through the code, I found k6/status.go at master · grafana/k6 · GitHub:

type Status struct {
	Status lib.ExecutionStatus `json:"status" yaml:"status"`

	Paused  null.Bool `json:"paused" yaml:"paused"`
	VUs     null.Int  `json:"vus" yaml:"vus"`
	VUsMax  null.Int  `json:"vus-max" yaml:"vus-max"`
	Stopped bool      `json:"stopped" yaml:"stopped"`
	Running bool      `json:"running" yaml:"running"`
	Tainted bool      `json:"tainted" yaml:"tainted"`
}

The Status field’s type points to k6/execution.go at master · grafana/k6 · GitHub:

type ExecutionStatus uint32

// Possible execution status values
const (
	ExecutionStatusCreated ExecutionStatus = iota
	ExecutionStatusInitVUs
	ExecutionStatusInitExecutors
	ExecutionStatusInitDone
	ExecutionStatusPausedBeforeRun
	ExecutionStatusStarted
	ExecutionStatusSetup
	ExecutionStatusRunning
	ExecutionStatusTeardown
	ExecutionStatusEnded
)
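Since the constants are declared with iota, the numbering starts at 0, not 1. A quick sketch in plain JavaScript (names copied from the const block above) maps each status number to its name:

```javascript
// Execution status names in declaration order; Go's iota numbers them from 0
const statusNames = [
  'Created', 'InitVUs', 'InitExecutors', 'InitDone',
  'PausedBeforeRun', 'Started', 'Setup', 'Running',
  'Teardown', 'Ended'
]

statusNames.forEach((name, i) => {
  console.log(`status ${i} = ExecutionStatus${name}`)
})
// → ends with: status 8 = ExecutionStatusTeardown, status 9 = ExecutionStatusEnded
```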

Counting the rows (remembering that iota starts at 0), status 9 is ExecutionStatusEnded, immediately after ExecutionStatusTeardown at 8. So K6 seems to be stuck in or just past teardown, which is very weird: my test script doesn’t have any teardown part!

Here is the script I use:

import http from 'k6/http'

import { check, group, sleep } from 'k6'

export let options = {
  discardResponseBodies: true
}

export default function () {
  // Read the env vars defensively: calling .trim() directly on a missing
  // variable would throw a TypeError before the checks below ever run
  const nameKey = (__ENV.TEST_NAME || '').trim()
  const addressKey = (__ENV.TEST_ADDRESS || '').trim()

  if (nameKey === '') {
    throw new Error('no TEST_NAME env var provided')
  }
  if (addressKey === '') {
    throw new Error('no TEST_ADDRESS env var provided')
  }
  group(nameKey, () => {
    const res = http.get(addressKey)
    check(res, {
      'is status 200': (r) => r.status === 200
    })
    sleep(0.1)
  })
}
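The env-var guard in the script can also be factored into a small helper. This is a plain-JavaScript sketch (requireEnv is a made-up name, not a k6 API); in the k6 script it would be called as requireEnv(__ENV, 'TEST_NAME'):

```javascript
// Fetch a required key from an env-like object, trimming whitespace and
// failing loudly when the key is missing or blank. The (… || '') fallback
// avoids a TypeError when the variable is not set at all.
function requireEnv (env, key) {
  const value = (env[key] || '').trim()
  if (value === '') {
    throw new Error(`no ${key} env var provided`)
  }
  return value
}

console.log(requireEnv({ TEST_NAME: '  demo  ' }, 'TEST_NAME')) // → demo
```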

The quest continues to figure out why K6 refuses to exit quickly when the tests have finished.

It seems I’m hitting a bug in K6. Adding --no-teardown --no-setup (my test doesn’t use setup or teardown anyway) fixes the issue: once those flags are added to the test command, K6 exits after just 4-5 seconds instead of 1-2 minutes.

Hmm, this is quite weird :thinking: I have a few questions:

  • Which k6 version are you using?
  • What are the execution options you are running k6 with? k6 run --vus 10000 --duration 10s?
  • Can you enable verbose mode (with k6 run --verbose) and share the logs that are shown?