How to get failure statistics

I have a Locust script and I ran it. It generated a report with a failure statistics section, like in the attached screenshot. How can I achieve the same with k6? I can get the count of a particular status code, but it doesn't seem like good coding practice to add a counter for each code in every script. What is the best way to go about it?

This is similar to what we have in the cloud, see https://k6.io/docs/results-visualization/cloud and https://k6.io/docs/cloud/analyzing-results/overview#result-tabs

Unfortunately, for local k6, things are a bit more complicated… :disappointed: There are multiple ways to achieve what you want, but all of them require a bit of extra work right now. I've opened a new issue so we can improve our documentation on the topic, but for now I'll try to briefly explain the different options below.

One option is to output the raw k6 metrics to InfluxDB and then use a Grafana dashboard to display the information you want. I can't point you to a ready-made example for this though, sorry. The only Grafana dashboard I know of with this feature is this one, but it is built on top of TimescaleDB, not InfluxDB, and we haven't yet merged the corresponding pull request into core k6. Still, it should be relatively easy to adapt, I think.

For a lower-complexity solution, you can export your raw k6 metrics to a JSON or CSV file and then write a small script to parse the results and generate the HTML for you. I can’t give you a ready-made example for this either, but it should be relatively easy to do.

Finally, if you want a turnkey solution that's completely built into k6, without requiring you to do anything external, that's unfortunately not possible yet. The first missing piece is this pull request, which will hopefully ship with k6 v0.30.0 next week. It allows users to programmatically generate and save any end-of-test summary report, including XML and HTML ones.

The other missing piece is that, if you don't export the metrics to an external output, k6 passes information like errors and HTTP statuses to the end-of-test summary only if you have defined a submetric based on that tag:value combination. And currently the only way to do that is to define a threshold based on it, which is impractical for your use case. It's something we plan to address in the future, but I can't promise exactly when.
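For reference, the threshold workaround looks roughly like this: declaring a threshold on a `tag:value` submetric forces k6 to track it and show it in the end-of-test summary. This is only a sketch that runs inside k6 itself; the URL and the always-true `count>=0` expressions are placeholders, chosen purely to materialize the submetrics.

```javascript
import http from 'k6/http';

export const options = {
  thresholds: {
    // Always-true thresholds, used only to make these submetrics
    // appear in the end-of-test summary:
    'http_reqs{status:200}': ['count>=0'],
    'http_reqs{status:500}': ['count>=0'],
  },
};

export default function () {
  http.get('https://test.k6.io/');
}
```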

I've mentioned this in the relevant issue, but until it's implemented, the only alternative to using an external output (Cloud / InfluxDB / JSON / etc.) is to use Counter variables. And I agree, that's not a good solution, so my advice is to keep an eye on that issue and use an external output for now.
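For completeness, the Counter workaround would look something like the sketch below, run inside k6. Since custom metrics must be created in the init context, you have to pre-declare a counter per status code you care about, which is exactly the awkwardness described above; the metric names and URL are illustrative.

```javascript
import http from 'k6/http';
import { Counter } from 'k6/metrics';

// One counter per interesting status code, declared in init context.
const statusCounters = {
  200: new Counter('status_200'),
  404: new Counter('status_404'),
  500: new Counter('status_500'),
};

export default function () {
  const res = http.get('https://test.k6.io/');
  const counter = statusCounters[res.status];
  if (counter) {
    counter.add(1);
  }
}
```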


@ned we use Datadog for external output. I’ll see what I can do there