Reporting Issue with Modules

Hi Team,

I am generating an HTML report and a CLI summary report using the handleSummary function, but I am facing challenges generating discrete information for each API call.

I am using different reusable modules, where different APIs are called with different URL details.
I used the Trend feature and was able to see the response-time details in the CLI summary.
However, it doesn't work when I read 'n' different URLs from a CSV file: all the results get added to a single Trend.
I am looking for a solution that generates unique information for each URL called during the test; when multiple iterations are executed against the same URL, their results should be aggregated under that URL.

E.g., I have separate reusable modules defined for each method type (GET, PUT, POST). The challenge is that we are not able to see the individual response time of each API request.

Other options tried: I used different group names and name tags in the request definitions, together with thresholds, but that didn't help.

At the moment, what you describe (setting thresholds on sub-metrics) is the only way to expose metrics for a specific URL in the default end-of-test summary or in handleSummary(). We haven't fully documented it yet, but here is a collection of other forum topics where we've given examples of it: Add example for sub-metrics by scenario (or other tags) in summary · Issue #205 · grafana/k6-docs · GitHub
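To illustrate, here is a minimal sketch of that approach (to be run with `k6 run`, not Node). The URLs and the `GetUsers`/`CreateOrder` tag names are placeholders, not from your script; the key idea is that declaring a threshold on a sub-metric filtered by the `name` tag makes that sub-metric show up in the summary:

```javascript
import http from 'k6/http';

export const options = {
  thresholds: {
    // A threshold on a sub-metric (here: http_req_duration filtered by
    // the "name" tag) exposes that sub-metric in the end-of-test summary.
    'http_req_duration{name:GetUsers}': ['p(95)<1000'],
    'http_req_duration{name:CreateOrder}': ['p(95)<2000'],
  },
};

export default function () {
  // Tag each request with a stable "name" so that multiple iterations
  // against the same URL are aggregated under one sub-metric.
  http.get('https://test.example.com/users', { tags: { name: 'GetUsers' } });
  http.post('https://test.example.com/orders', '{}', { tags: { name: 'CreateOrder' } });
}
```

Since `options` is plain JavaScript, when your 'n' URLs come from a CSV file you can build the `thresholds` object programmatically in the init context by iterating over the parsed CSV rows.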

If that is not enough for your needs, you can make k6 emit all of the metric measurements to an external output, e.g. a JSON or CSV file, or even something like InfluxDB. For more details:
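As a rough sketch of how you could post-process the sub-metrics yourself in handleSummary(), here is a plain helper function. It assumes the sub-metric keys in the summary `data.metrics` object look like `http_req_duration{name:SomeTag}` — please verify that shape against the summary JSON of your k6 version:

```javascript
// Sketch: collect the p(95) of each per-URL sub-metric from the object
// that k6 passes to handleSummary(). Key format is an assumption.
function perUrlP95(data) {
  const result = {};
  for (const [key, metric] of Object.entries(data.metrics)) {
    const match = key.match(/^http_req_duration\{name:(.+)\}$/);
    if (match) {
      result[match[1]] = metric.values['p(95)'];
    }
  }
  return result;
}

// In the k6 script itself you would then export something like:
// export function handleSummary(data) {
//   return { 'per-url.json': JSON.stringify(perUrlP95(data), null, 2) };
// }
```

For the external-output route, running `k6 run --out json=results.json script.js` writes every individual measurement (with its tags) to a JSON file that you can aggregate per URL with any tool you like.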