Sharing scripts between setup and execution?

I’d like to be able to share some scripting between setup() and execution.

I’m trying to set up a repo with a set of stories that consume a bunch of library functions, and those functions need to behave sensibly both in setup() code and during the default execution (when the VUs are running the actual load test). On the face of it this seems easy, but error handling doesn’t make much sense without completely abstracting what’s built into k6: during execution you just want a check to record the failure and move on, but if something fails during setup() we’ve lost structure that’s critical for the test to make sense, so we want a hard failure.
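
To make that concrete, here’s a sketch of the kind of shared function I mean (the module name, endpoint and check name are all just illustrative):

// lib/catalogue.js (illustrative sketch only)
import http from "k6/http";
import { check } from "k6";

// Called from setup() to build the data the test depends on, and from the
// default function as part of the actual load test.
export function fetchCatalogue(baseUrl) {
	let res = http.get(baseUrl + "/catalogue");
	// During execution a failed check should just be recorded and the
	// iteration should carry on; during setup() the rest of the test
	// depends on this data, so a failure should abort the whole run.
	check(res, { "catalogue fetched": (r) => r.status === 200 });
	return res.status === 200 ? res.json() : null;
}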

The other problem I’ve run into is that throw new Error(...) in setup() code doesn’t give a useful stack trace.

Has anyone else tried to share code in multiple modes and solved this already?


Example of throw in setup():

$ cat foo.js
import bar from "./bar.js";
export function setup() {
        bar();
};
export default function() {
};
$ cat bar.js 
export default function () {
        baz();
}
function baz() {
        throw new Error("baz");
}
$ docker run -it --rm -v "$PWD:/src" -w /src loadimpact/k6 run foo.js

          /\      |‾‾|  /‾‾/  /‾/   
     /\  /  \     |  |_/  /  / /    
    /  \/    \    |      |  /  ‾‾\  
   /          \   |  |‾\  \ | (_) | 
  / __________ \  |__|  \__\ \___/ .io

  execution: local
     output: -
     script: foo.js

    duration: -,  iterations: 1
         vus: 1, max: 1

ERRO[0000] Engine error                                  error="setup: Error: baz at baz (/src/bar.js:6:8(4))"
ERRO[0000] Engine Error 

Because of the missing stack trace, anything more complex than this is virtually impossible to debug.

The k6 check() is pretty ambiguous in setup() too, e.g.:

$ cat foo.js 
import { check } from "k6";
export function setup() {
        check(false, { "setup": (t) => t })
};
export default function() {
        check(false, { "test": (t) => t })
};
$ docker run -it --rm -v "$PWD:/src" -w /src loadimpact/k6 run foo.js
          /\      |‾‾|  /‾‾/  /‾/   
     /\  /  \     |  |_/  /  / /    
    /  \/    \    |      |  /  ‾‾\  
   /          \   |  |‾\  \ | (_) | 
  / __________ \  |__|  \__\ \___/ .io

  execution: local
     output: -
     script: foo.js

    duration: -,  iterations: 1
         vus: 1, max: 1

    done [==========================================================] 1 / 1

    ✗ test
     ↳  0% — ✓ 0 / ✗ 1

    checks...............: 0.00% ✓ 0   ✗ 2  
    data_received........: 0 B   0 B/s
    data_sent............: 0 B   0 B/s
    iteration_duration...: avg=38.31µs min=27.43µs med=38.31µs max=49.2µs p(90)=47.02µs p(95)=48.11µs
    iterations...........: 1     0/s
    vus..................: 1     min=1 max=1
    vus_max..............: 1     min=1 max=1

Note that:

  • only the “test” check is explicitly listed as failed
  • two checks failed overall (checks...............: 0.00% ✓ 0 ✗ 2), but we’re left to guess what the other failure was

Looked into the stack traces, as I am currently working on upgrading Babel and looked briefly at sourcemaps around it.
They were previously enabled and apparently did produce stack traces, but they have since been disabled for speed reasons. Unfortunately, just setting that flag back to true doesn’t fix it for me :frowning:
Could you open an issue for this, and another one for the check issue?
I don’t have any good suggestion at this moment… I suppose you could differentiate between setup()/teardown() and default by checking whether __ITER is undefined?
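
Something like this, completely untested:

// untested sketch: the idea is that __ITER is only defined while the VUs
// are actually iterating
function inSetupOrTeardown() {
	return typeof __ITER === "undefined";
}
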
Any suggestions on how to make this better are welcome :slight_smile:

Sorry for the slow reply. I think you read my mind about finding a way to tell whether we’re in setup()/teardown(); I realised afterwards that I hadn’t asked that. Checking __ITER doesn’t work for me, but checking __VU === 0 does.

I ended up with this for now:

/**
 * Define our own check() so that we can automatically change the exit code for
 * failing checks in CI.
 * 
 * Make sure to import this check in top level scripts with:
 *   import { check, thresholds } from "./lib/check.js";
 *   export let options = { thresholds: thresholds };
 * so that the thresholds get configured.
 */
import { check as k6check } from "k6";
import { Rate } from "k6/metrics";

var failedChecks = new Rate("failed checks");

// Configure thresholds to exit non-zero if any checks fails, but only if we're
// in CI.
// GitLab CI at least will always define the CI env var, most other CIs probably
// do too.
export let thresholds = {};
if (__ENV.CI) {
	thresholds = {
		"failed checks": ["rate==0"],
	}
}

export function check(data, checks) {
	for (var check in checks) {
		if (k6check(data, { [check]: checks[check] })) {
			continue;
		}
		failedChecks.add(1);
		// if __VU === 0 then we are in setup() or teardown(), where only a
		// throw will abort the run
		if (__VU !== 0) {
			continue;
		}
		throw new Error("Failed check during setup: " + check);
	}
};
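
And a top-level story script then uses it like this (the URL is just the public k6 test site, the check names are examples):

// story.js: example usage of the custom check
import http from "k6/http";
import { check, thresholds } from "./lib/check.js";

// Wire up the thresholds so failed checks flip the exit code in CI.
export let options = { thresholds: thresholds };

export function setup() {
	let res = http.get("https://test.k6.io/");
	// If this check fails, check() throws and the whole run aborts.
	check(res, { "setup: homepage is 200": (r) => r.status === 200 });
}

export default function () {
	let res = http.get("https://test.k6.io/");
	// If this check fails, it's recorded in the "failed checks" Rate and the
	// iteration carries on; in CI the threshold makes k6 exit non-zero.
	check(res, { "homepage is 200": (r) => r.status === 200 });
}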

Stack traces would still be nice, but it’s a start.

I’ve lodged the weird setup check behaviour as “Checks failing during setup aren’t reported individually” (grafana/k6 issue #993) and the stack trace problem as “Provide decent stack traces in setup code” (grafana/k6 issue #994).


Just now realised that the lack of stacks is only in setup :man_facepalming:

 ✗ is status 200
  ↳  85% — ✓ 8500 / ✗ 1500

 checks.........................: 85.00% ✓ 8500     ✗ 1500
 data_received..................: 1.7 GB 8.6 MB/s
 data_sent......................: 8.6 MB 43 kB/s
