K6 Detailed Log Size

k6 produces a detailed point-by-point log every second, which has been very useful for us to compare against some of our apps in Azure Application Insights. The problem is that when I use the open model with ramping executors in a k6 test and run it for 10m at a load of 5000 rps, this detailed log bloats up to 20+ GB.

I don't believe k6 intended this log to serve this purpose for large tests, but as I said earlier, it has been incredibly useful for us. Is there a way to

  • stream these logs rather than generating a single file of this size?
  • compress these logs? I believe every line is a JSON object with VU details and http_req details. Is there any way to get a compressed log here? Say, could you aggregate this data every minute, or provide more granular timeframes for aggregating the point-by-point http_req data?

Again, the detailed log has been super useful for detecting specific aspects of our applications, like the time taken to add more servers vs. the VU ramp-up time, etc.


Hi @Priya,

can you elaborate on which log this is? Is it the normal k6 log, with or without --http-debug and possibly -v?

Or are you talking about the metrics that k6 emits? From your comments, I believe it's the latter, specifically the JSON output?

If it's the JSON output, this is not a log, but you can just add .gz to the end of the file name, as in -o json=test.json.gz, and your JSON will be automatically gzipped by k6 while it's writing it.
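In case it helps, here's a rough sketch of peeking at such a gzipped output in Python (assuming the file name test.json.gz from the example above, and that the output is one JSON object per line, with "Metric" declarations and "Point" measurements):

```python
import gzip
import json

# Read the first few lines of the gzipped k6 JSON output and print what
# kind of entry each one is, to confirm the layout before processing it.
with gzip.open("test.json.gz", "rt") as f:
    for _ in range(5):
        line = f.readline()
        if not line:
            break
        entry = json.loads(line)
        print(entry["type"], entry.get("metric"))
```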

If you are talking about the actual logs of k6 … I guess you can pipe them to gzip and put them in a file, as in k6 run -v script.js |& gzip > test.out.gz

Thanks @mstoykov - yes, I'm talking about the JSON output. Good to know that it will automatically gzip it. But is there a way to make it a rotating log (logging - Rotated log file - what is it? - Software Engineering Stack Exchange)? That way we could keep uploading the older files to an external analytics system while the newer one is being written.

If there were a way to aggregate the data per minute etc., then we could stream it continuously into an analytics system for comparison. Typically what we compare is the timestamp and the number of http_reqs at that timestamp - it's given us very useful insights that way.

There is no JSON output file rotation or aggregation yet, sorry. It’s just the raw metric measurements for now, in a single file, though you should still be able to use a streaming JSON parser to go through them. Aggregation especially is something we want to add eventually, but I can’t give you even a vague ETA at this point in time, sorry.
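For the timestamp-vs-request-count comparison above, here is a minimal sketch of that streaming approach in Python. It assumes the same one-object-per-line layout, a gzipped file named test.json.gz, and that each request shows up as an http_reqs "Point" entry whose data has an RFC 3339 time field and a value of 1; adjust field names to whatever your output actually contains:

```python
import gzip
import json
from collections import Counter

# Stream the gzipped k6 JSON output line by line and total http_reqs per
# minute, so the per-minute counts can be pushed to an external analytics
# system without loading the whole multi-GB file into memory.
per_minute = Counter()
with gzip.open("test.json.gz", "rt") as f:
    for line in f:
        entry = json.loads(line)
        if entry.get("type") != "Point" or entry.get("metric") != "http_reqs":
            continue
        # Timestamps look like "2021-05-10T12:34:56.789+00:00"; the first
        # 16 characters ("YYYY-MM-DDTHH:MM") identify the minute bucket.
        minute = entry["data"]["time"][:16]
        per_minute[minute] += entry["data"].get("value", 1)

for minute in sorted(per_minute):
    print(minute, per_minute[minute])
```

The same loop can be extended to other metrics (e.g. averaging http_req_duration per bucket) since every measurement follows the same Point structure.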