How can I get microtimestamps in csv output

When I use --out csv=results.csv the timestamp has very low resolution (1 s), so I can't plot a time series.
I would need to convert the results into arrays, but some requests have far fewer entries across scenarios, so they are hard to compare.
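One way I could line up scenarios with different sample counts (a stdlib-only sketch; it assumes I have already parsed the CSV into `(timestamp, value)` pairs, the `bucket_mean` helper is my own, not part of k6) is to average the samples into fixed-width time buckets:

```python
from collections import defaultdict
from statistics import mean

def bucket_mean(samples, width=0.1):
    """Average (timestamp, value) samples into fixed-width time buckets.

    Two scenarios with different request counts then yield one value per
    bucket and can be compared (or plotted) point by point.
    `width` is the bucket size in seconds.
    """
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts // width)].append(value)
    # Key each bucket by its start time so the result plots directly.
    return {b * width: mean(values) for b, values in sorted(buckets.items())}
```

But that only works if the timestamps have sub-second resolution in the first place.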


The first 1000+ entries have the same timestamp:

>>> datetime.datetime.fromtimestamp(1672766933).strftime('%Y%m%d %H:%M:%S.%f')
'20230103 18:28:53.000000'

I use matplotlib, and it has no problems whatsoever with micro-timestamps:

>>> datetime.datetime.fromtimestamp(1672766933.123456).strftime('%Y%m%d %H:%M:%S.%f')
'20230103 18:28:53.123456'

I can use --out json=results.json, and that output does include micro-timestamps.


It's not good that the timestamp resolution differs between output formats. I prefer CSV because I can easily manipulate and compress gigabytes of results, whereas the JSON bloat is far bigger.
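In the meantime, a workaround I could sketch is to convert the JSON stream into a compact CSV while keeping the sub-second timestamps. This assumes the line-delimited shape I see from --out json, i.e. `{"type": "Point", "metric": ..., "data": {"time": ..., "value": ...}}` with RFC 3339 timestamps; check your own output, and note `json_points_to_csv` is my own helper, not a k6 feature:

```python
import csv
import io
import json
import re
from datetime import datetime

def json_points_to_csv(json_lines, metric="http_req_duration"):
    """Extract one metric from k6's line-delimited JSON output and write
    it as CSV with full-resolution epoch timestamps."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["timestamp", "value"])
    for line in json_lines:
        record = json.loads(line)
        if record.get("type") != "Point" or record.get("metric") != metric:
            continue
        # The timestamps may carry nanosecond precision and a "Z" suffix;
        # trim the fraction to 6 digits and normalize the offset so
        # datetime.fromisoformat() accepts them on older Pythons.
        ts = record["data"]["time"].replace("Z", "+00:00")
        ts = re.sub(r"(\.\d{6})\d+", r"\1", ts)
        epoch = datetime.fromisoformat(ts).timestamp()
        writer.writerow([epoch, record["data"]["value"]])
    return out.getvalue()
```

The resulting two-column CSV is tiny compared to the raw JSON and plots directly with matplotlib.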

Hi @mortenb123

Welcome to the community forums :wave:

Many thanks for bringing this to our attention. I can see someone opened an issue for this yesterday, and there are already some comments on it: use same timestamp in csv results as in json · Issue #2839 · grafana/k6 · GitHub. Not sure if it was you or someone from your team. Thanks!



Yes, it was me. Hopefully it is a very easy fix, or at least an option; 1-second resolution is very coarse when you have 10K responses.
