Exporting Rate to JSON looks faulty

Hi,
I'd be grateful if you could explain why a rate to which a single 1/true value has been added looks like the JSON below when exported. I'm using a rate (named #Team#api.my.service1-ListAccounts_errors#) to calculate the error percentage, adding 1 to the rate if a request fails. In the k6 output report it looks correct, but in the JSON output it says the opposite, i.e. fails: 0, passes: 1. This isn't logical/interpretable for the system receiving the JSON output, which calculates and presents the error percentage; to the receiving system it looks like the request passed instead of failed.

Thank you in advance for letting me know what I’m doing wrong.

	"#Team#api.my.service1-ListAccounts_errors#": {
		"type": "rate",
		"contains": "default",
		"values": {
			"fails": 0,
			"rate": 1,
			"passes": 1
		},
		"thresholds": {
			"rate<0.05": {
				"ok": false
			}
		}
	},

Hi @Lightlore

Could you possibly share your (sanitized) script? I want to make sure I understand the context (particularly the bit about adding 1 to the rate when a request fails, and the differences between the stdout output report and the JSON).

Thanks in advance.

Cheers!

Hi,
Sorry, I'm adding true, not 1, to the rate.

Sure. Below is the function my main script (located in another file containing the default function, scenarios, etc.) calls. The k6 report printed to stdout is further down, and the exported JSON is at the end of this post. For the stdout summary report I unfortunately have to use your (no longer maintained) k6-summary-0.0.1.js, because to export the JSON I have to override handleSummary(data), which disables your better-working/looking built-in report. That is why the checkmark and cross for pass and fail are not shown correctly; just codes are shown instead, making the report a bit tough to read :laughing: (even tougher in the Azure DevOps pipeline result, which scrambles or mis-encodes a lot of the characters in the report :grimacing:). I haven't taken the time to rewrite the report script on my own yet, which I suspect is the only solution.

import { group, check } from 'k6';
import http from 'k6/http';
import { Trend, Rate, Counter} from 'k6/metrics';
import { environment } from '../Utils/environment.js';
import { getRandomTestUser } from '../utils/util-functions.js';

let trendMyService1ListAccounts = new Trend("api.my.service1-ListAccounts");
let counterMyService1ListAccounts = new Counter('api.my.service1-ListAccounts_count');
let errorRateMyService1ListAccounts = new Rate('api.my.service1-ListAccounts_errors#');


// *** ListAccounts function ***
export function ListAccounts() {
    group("MyService1.ListAccounts", function() {
        let res, result, params;
        const user = getRandomTestUser();
       
        params = {
            headers: {
                "User": user,
                "accept": "application/json",
            },
        };
        res = http.get(environment.environmentBaseURL + "/services/api.user/accounts", params);
        
        trendMyService1ListAccounts.add(res.timings.duration);
        counterMyService1ListAccounts.add(1);
        
        result = check(res, {
            'api.my.service1-ListAccounts response status code was 200': (res) => res.status == 200,
            'api.my.service1-ListAccounts response content ok (includes testAccount)': (res) => res.body.includes("testAccount")
        });
        if(!result) {
            errorRateMyService1ListAccounts.add(true);
        } else {
            errorRateMyService1ListAccounts.add(false);            
        }
    });
}

Executing the above will result in 100% failure (because the endpoint is made up for discretion), but my point will still be visible in the JSON and the report.

Report

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

execution: local
script: .\MyServices.load.js
output: -

scenarios: (100.00%) 1 scenario, 10 max VUs, 1m0s max duration (incl. graceful stop):
* ListAccounts: 1.00 iterations/s for 30s (maxVUs: 5-10, exec: ListAccountsExec, gracefulStop: 30s)

running (0m30.0s), 00/05 VUs, 31 complete and 0 interrupted iterations
ListAccounts ✓ [======================================] 00/05 VUs 30s 1.00 iters/s
INFO[0031] Preparing the end-of-test summary… source=console
█ setup

 █ MyService1.ListAccounts

   ✗ api.my.service1-ListAccounts response status code was 200
    ↳  0% — ✓ 0 / ✗ 31
   ✗ api.my.service1-ListAccounts response content ok (includes testAccount)
    ↳  0% — ✓ 0 / ✗ 31

 api.my.service1-ListAccounts...........: avg=5.437294 min=3.9552 med=5.1641 max=7.8944  p(90)=7.032   p(95)=7.48855 p(99)=7.77554
 api.my.service1-ListAccounts_count.....: 31      1.032703/s
 api.my.service1-ListAccounts_errors#...: 100.00% ✓ 31     ✗ 0
 checks.................................: 0.00%   ✓ 0      ✗ 62
 data_received..........................: 26 kB   852 B/s
 data_sent..............................: 15 kB   488 B/s
 group_duration.........................: avg=12.01ms  min=4.52ms med=5.47ms max=78.70ms p(90)=34.47ms p(95)=40.87ms p(99)=68.52ms
 http_req_blocked.......................: avg=6.21ms   min=0.00ms med=0.00ms max=72.44ms p(90)=26.98ms p(95)=33.46ms p(99)=61.93ms
 http_req_connecting....................: avg=1.23ms   min=0.00ms med=0.00ms max=12.49ms p(90)=4.91ms  p(95)=8.04ms  p(99)=11.75ms
 http_req_duration......................: avg=5.44ms   min=3.96ms med=5.16ms max=7.89ms  p(90)=7.03ms  p(95)=7.49ms  p(99)=7.78ms
 http_req_failed........................: 100.00% ✓ 31     ✗ 0
 http_req_receiving.....................: avg=0.51ms   min=0.00ms med=0.00ms max=4.69ms  p(90)=1.01ms  p(95)=2.85ms  p(99)=4.68ms
 http_req_sending.......................: avg=0.03ms   min=0.00ms med=0.00ms max=0.49ms  p(90)=0.00ms  p(95)=0.15ms  p(99)=0.43ms
 http_req_tls_handshaking...............: avg=4.95ms   min=0.00ms med=0.00ms max=61.56ms p(90)=22.07ms p(95)=24.18ms p(99)=50.57ms
 http_req_waiting.......................: avg=4.90ms   min=0.51ms med=4.85ms max=7.55ms  p(90)=6.52ms  p(95)=7.01ms  p(99)=7.53ms
 http_reqs..............................: 31      1.032703/s
 iteration_duration.....................: avg=11.67ms  min=0.00ms med=5.49ms max=78.70ms p(90)=34.33ms p(95)=40.49ms p(99)=68.18ms
 iterations.............................: 31      1.032703/s
 vus....................................: 5       min=5      max=5
 vus_max................................: 5       min=5      max=5

Extract of the error rate part of the JSON output file, showing the 31 requests as passes instead of as fails, which is what I think would be more correct/readable for the system receiving the JSON.

  "api.my.service1-ListAccounts_errors#": {
  	"type": "rate",
  	"contains": "default",
  	"values": {
  		"rate": 1,
  		"passes": 31,
  		"fails": 0
  	}
  },

Thank you for any feedback you can give me regarding this.


Hi @Lightlore

Thanks for sharing the script. I built a similar one and used a random function to decide whether the user account is correct or not. See it below:

import { group, check } from 'k6';
import http from 'k6/http';
import { Trend, Rate, Counter } from 'k6/metrics';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.0.2/index.js';
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

let trendMyService1ListAccounts = new Trend("api.my.service1-ListAccounts");
let counterMyService1ListAccounts = new Counter('api.my.service1-ListAccounts_count');
let errorRateMyService1ListAccounts = new Rate('api.my.service1-ListAccounts_errors#');

export const options = {
    vus: 10,
    iterations: 10,
    thresholds: {
        'api.my.service1-ListAccounts_errors#': [
            // more than 10% of errors will abort the test
            { threshold: 'rate < 0.1', abortOnFail: false, delayAbortEval: '10s' },
        ],
    },
};

export default function () {
    group("MyService1.ListAccounts", function () {
        let res, result;

        res = http.get('https://httpbin.test.k6.io/anything');

        trendMyService1ListAccounts.add(res.timings.duration);
        counterMyService1ListAccounts.add(1);

        result = check(res, {
            'api.my.service1-ListAccounts response status code was 200': (res) => res.status == 200,
            //'api.my.service1-ListAccounts response content ok (includes testAccount)': (res) => res.body.includes("testAccount")
            'api.my.service1-ListAccounts response content ok (includes testAccount)': randomIntBetween(0, 1), // make it random
        });
        if (!result) {
            console.log("failed account");
            errorRateMyService1ListAccounts.add(true);
        } else {
            console.log("correct account");
            errorRateMyService1ListAccounts.add(false);
        }
    });
}

export function handleSummary(data) {
    return {
        'stdout': textSummary(data, { indent: ' ', enableColors: true }), // Show the text summary to stdout...
        'summary.json': JSON.stringify(data), // and a JSON with all the details...
    };
}

I believe you should be able to get both test outputs, the stdout with the coloring, and the json file.

If I run the script above, I get the following:

And this is the summary.json:

{
    "root_group": {
        "checks": [],
        "name": "",
        "path": "",
        "id": "d41d8cd98f00b204e9800998ecf8427e",
        "groups": [
            {
                "name": "MyService1.ListAccounts",
                "path": "::MyService1.ListAccounts",
                "id": "078ec6cf6c764261a9dcbb1eb54a285c",
                "groups": [],
                "checks": [
                    {
                        "name": "api.my.service1-ListAccounts response status code was 200",
                        "path": "::MyService1.ListAccounts::api.my.service1-ListAccounts response status code was 200",
                        "id": "df25ea4a97db37eae0cf92d216af9016",
                        "passes": 10,
                        "fails": 0
                    },
                    {
                        "name": "api.my.service1-ListAccounts response content ok (includes testAccount)",
                        "path": "::MyService1.ListAccounts::api.my.service1-ListAccounts response content ok (includes testAccount)",
                        "id": "0e5d8efd060f7c195a3f2197ea5ca646",
                        "passes": 4,
                        "fails": 6
                    }
                ]
            }
        ]
    },
    "options": {
        "summaryTrendStats": [
            "avg",
            "min",
            "med",
            "max",
            "p(90)",
            "p(95)"
        ],
        "summaryTimeUnit": "",
        "noColor": false
    },
    "state": {
        "isStdOutTTY": true,
        "isStdErrTTY": true,
        "testRunDurationMs": 462.359
    },
    "metrics": {
        "iteration_duration": {
            "type": "trend",
            "contains": "time",
            "values": {
                "avg": 453.81361239999995,
                "min": 439.7815,
                "med": 455.4631875,
                "max": 461.801916,
                "p(90)": 461.79374129999997,
                "p(95)": 461.79782865
            }
        },
        "api.my.service1-ListAccounts_errors#": {
            "type": "rate",
            "contains": "default",
            "values": {
                "rate": 0.6,
                "passes": 6,
                "fails": 4
            },
            "thresholds": {
                "rate < 0.1": {
                    "ok": false
                }
            }
        },
        "http_req_tls_handshaking": {
            "type": "trend",
            "contains": "time",
            "values": {
                "min": 125.353,
                "med": 140.8875,
                "max": 149.595,
                "p(90)": 148.1271,
                "p(95)": 148.86105,
                "avg": 139.59390000000002
            }
        },
        "group_duration": {
            "type": "trend",
            "contains": "time",
            "values": {
                "avg": 453.56348349999996,
                "min": 439.769875,
                "med": 455.13435400000003,
                "max": 461.502834,
                "p(90)": 461.4968337,
                "p(95)": 461.49983385
            }
        },
        "http_req_connecting": {
            "type": "trend",
            "contains": "time",
            "values": {
                "p(95)": 106.3496,
                "avg": 106.0907,
                "min": 105.896,
                "med": 105.9825,
                "max": 106.364,
                "p(90)": 106.3352
            }
        },
        "data_received": {
            "type": "counter",
            "contains": "data",
            "values": {
                "count": 60510,
                "rate": 130872.33080787872
            }
        },
        "http_req_waiting": {
            "type": "trend",
            "contains": "time",
            "values": {
                "avg": 105.28280000000002,
                "min": 103.029,
                "med": 104.94749999999999,
                "max": 108.248,
                "p(90)": 107.02759999999999,
                "p(95)": 107.6378
            }
        },
        "iterations": {
            "values": {
                "count": 10,
                "rate": 21.628215304557713
            },
            "type": "counter",
            "contains": "default"
        },
        "http_req_receiving": {
            "values": {
                "p(95)": 0.05074999999999999,
                "avg": 0.027200000000000002,
                "min": 0.012,
                "med": 0.024,
                "max": 0.062,
                "p(90)": 0.0395
            },
            "type": "trend",
            "contains": "time"
        },
        "api.my.service1-ListAccounts_count": {
            "type": "counter",
            "contains": "default",
            "values": {
                "count": 10,
                "rate": 21.628215304557713
            }
        },
        "api.my.service1-ListAccounts": {
            "type": "trend",
            "contains": "default",
            "values": {
                "avg": 105.34559999999999,
                "min": 103.076,
                "med": 104.99000000000001,
                "max": 108.354,
                "p(90)": 107.076,
                "p(95)": 107.715
            }
        },
        "http_reqs": {
            "type": "counter",
            "contains": "default",
            "values": {
                "count": 10,
                "rate": 21.628215304557713
            }
        },
        "http_req_duration{expected_response:true}": {
            "type": "trend",
            "contains": "time",
            "values": {
                "p(90)": 107.076,
                "p(95)": 107.715,
                "avg": 105.34559999999999,
                "min": 103.076,
                "med": 104.99000000000001,
                "max": 108.354
            }
        },
        "data_sent": {
            "type": "counter",
            "contains": "data",
            "values": {
                "count": 4620,
                "rate": 9992.235470705664
            }
        },
        "http_req_blocked": {
            "type": "trend",
            "contains": "time",
            "values": {
                "avg": 348.0201,
                "min": 333.44,
                "med": 349.192,
                "max": 358.268,
                "p(90)": 356.6237,
                "p(95)": 357.44584999999995
            }
        },
        "http_req_failed": {
            "type": "rate",
            "contains": "default",
            "values": {
                "fails": 10,
                "rate": 0,
                "passes": 0
            }
        },
        "http_req_duration": {
            "type": "trend",
            "contains": "time",
            "values": {
                "avg": 105.34559999999999,
                "min": 103.076,
                "med": 104.99000000000001,
                "max": 108.354,
                "p(90)": 107.076,
                "p(95)": 107.715
            }
        },
        "http_req_sending": {
            "type": "trend",
            "contains": "time",
            "values": {
                "max": 0.124,
                "p(90)": 0.08080000000000001,
                "p(95)": 0.10239999999999998,
                "avg": 0.03560000000000001,
                "min": 0.01,
                "med": 0.020999999999999998
            }
        },
        "checks": {
            "contains": "default",
            "values": {
                "passes": 14,
                "fails": 6,
                "rate": 0.7
            },
            "type": "rate"
        }
    }
}

What we find is that it might look a bit counterintuitive, though I see the same in the stdout output. Maybe I’m interpreting your issues incorrectly, so feel free to correct me :smile:

In the test run I have 6 failed accounts:

INFO[0001] correct account                               source=console
INFO[0001] failed account                                source=console
INFO[0001] failed account                                source=console
INFO[0001] failed account                                source=console
INFO[0001] failed account                                source=console
INFO[0001] failed account                                source=console
INFO[0001] correct account                               source=console
INFO[0001] failed account                                source=console
INFO[0001] correct account                               source=console
INFO[0001] correct account                               source=console

What we see next is that the response content check fails 6 out of 10 times, which is consistent with the logs above.

     █ MyService1.ListAccounts

       ✓ api.my.service1-ListAccounts response status code was 200
       ✗ api.my.service1-ListAccounts response content ok (includes testAccount)
        ↳  40% — ✓ 4 / ✗ 6

And the rate metric shows:

   ✗ api.my.service1-ListAccounts_errors#...: 60.00% ✓ 6         ✗ 4 

That is, since we add true when an error occurs, it counts 6 trues and 4 falses. Since this is an error rate, the passes are actually the errors. And the rate is calculated correctly: 60% of my requests failed.
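
To make the bookkeeping explicit, here is a minimal sketch of how a Rate metric counts what it receives (the metric name and the iteration trick are made up for illustration):

import { Rate } from 'k6/metrics';

// Illustration only: every add(true) (or any non-zero value) increments "passes",
// every add(false) (or 0) increments "fails", and rate = passes / (passes + fails).
const errorRate = new Rate('example_error_rate'); // hypothetical metric name

export const options = { vus: 1, iterations: 10 };

export default function () {
    // 6 "errors" and 4 "successes" over 10 iterations -> passes: 6, fails: 4, rate: 0.6
    errorRate.add(__ITER < 6);
}

So "passes" here just means "non-zero values added", regardless of whether those values represent good or bad outcomes.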

And this is the same thing we later see in the JSON summary:

"api.my.service1-ListAccounts_errors#": {
            "type": "rate",
            "contains": "default",
            "values": {
                "rate": 0.6,
                "passes": 6,
                "fails": 4
            },
            "thresholds": {
                "rate < 0.1": {
                    "ok": false
                }
            }
        }

It is a bit counterintuitive that the JSON summary says 6 passes and 4 fails. However, I think it's the same thing we see in the stdout output. With an error rate defined like this, a pass means an error was found and is included in that metric's count.

That said, I always find this a bit counterintuitive, as my mind associates pass with "good" and fail with "not good".
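
If that mapping is distracting for the system consuming the JSON, one option (just a sketch, not something k6 requires, and the metric name here is made up) is to record a success rate instead, so that "passes" line up with good results and the threshold direction flips:

import { check } from 'k6';
import http from 'k6/http';
import { Rate } from 'k6/metrics';

// Hypothetical "success rate" variant: add(true) on success instead of on error.
const successRate = new Rate('api.my.service1-ListAccounts_success');

export const options = {
    thresholds: {
        // instead of 'rate<0.05' on an error rate
        'api.my.service1-ListAccounts_success': ['rate>0.95'],
    },
};

export default function () {
    const res = http.get('https://httpbin.test.k6.io/anything');
    const ok = check(res, {
        'status was 200': (r) => r.status === 200,
    });
    successRate.add(ok); // true on success -> "passes" now count successful requests
}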

Unless I interpreted this wrong, the results are coherent between stdout and json. Let me know if otherwise and I’ll have another look.

Cheers!


Hi,
Thank you VERY much for a great explanation of this. It's actually the same explanation my colleague gave me right AFTER I posted this, and he thinks it's logical :slightly_smiling_face: so I guess we all think and understand logic differently. Like you said, it becomes a bit confusing when pass means something negative (in this case a failure) in test results.

Now we just have to convince the Splunk guys to "reverse" the passes to fails :joy: and then it will be shown correctly.

Regarding the checkmark and "X" symbols: I've tried executing the test both in the Windows Command Prompt and in the VS Code terminal window (PowerShell), but they only show up as strange symbols, so I don't know why that is. You seem to get the correct output doing exactly what I do; the only difference is that I have to keep a local copy of index.js (renamed to k6-summary-0.0.3.js) instead of linking to https://jslib.k6.io/k6-summary/0.0.3/index.js, because the machine running the tests does not have an Internet connection.

import { textSummary, jUnit } from './utils/k6-summary-0.0.3.js';
.
.
export function handleSummary(data) {
    // Write the text summary to stdout, plus a jUnit XML and a full JSON result file.
    console.log('Preparing the end-of-test summary…');

    return {
        'stdout': textSummary(data, { indent: ' ', enableColors: true }),
        ['.\\K6_Result_' + data.setup_data.startTimeOfTest + '.xml']: jUnit(data),
        ['.\\K6_Result_' + data.setup_data.startTimeOfTest + '.json']: JSON.stringify(data),
    };
}


Hi @Lightlore

Let me check if these weird characters in the output could be related to executing on Windows, since I have been testing on my Mac. I'll get back after some more investigation. Thanks for reporting this and for your patience!

Hi @Lightlore

Thanks for your patience on this.

@mstoykov pointed me to an already identified issue: Visual glitches on Windows terminals · Issue #1748 · grafana/k6 · GitHub. Can you have a look and double-check whether this could be your case? Have you tried running this in other terminals, as recommended in the last comment of that issue?

If this still happens, a screenshot of the test run will help us understand if it’s the same issue. Also share the Windows OS version.

Cheers!

Hi,
Sorry for the delay in my answer. The issue you refer to seems to be, at least in part, what I am experiencing as well. I get the OK checkmark in the "progress bars" for the scenario runs, but invalid characters for the trend values and trend threshold status (no checkmarks or "crosses"). I've "blurred out" some parts due to sensitive info.

[screenshot]

When executing a command like this in a .ps1 file:

k6 run --env TEST_DURATION=$testDuration --env TEST_NAME=$testName --env TEST_ENVIRONMENT=$testEnvironment --env DEBUG=$testDebug --logformat raw myk6Script.js > C:\K6LoadTest\API\K6_Result.log

…the output in the K6_Result.log file is like this for a check result:

and like this for the trend, counter and rate results

Unfortunately, when running in an Azure DevOps pipeline it is even worse; almost everything is unreadable/useless. Disregard the hashtags; they are part of the trend names, so they should be there, but all the other strange codes are wrong encodings or similar in the k6 standard output report produced by the textSummary function.

Windows version is Windows Server 2019.

Thanks for your patience with my late answer.

PS: since uncommon third-party programs are pretty much not OK in my company, I can't use anything other than PowerShell and Command Prompt (and Visual Studio Code's PowerShell terminal).


Thanks @Lightlore, this is very valuable feedback for us. I added the reference in Visual glitches on Windows terminals · Issue #1748 · grafana/k6 · GitHub, so we can take this into account.

Cheers!


The Unicode glitches unfortunately have to wait for someone to address the issue @eyeveebe linked to.

However, you can fix the issues with Azure DevOps immediately. For some reason, k6 seems to detect it as a TTY terminal with color support, so you can add the following environment variables (to Azure DevOps, i.e. for the k6 process, not via k6 --env):

  • TERM=dumb will force k6 to consider the terminal a non-TTY one
  • NO_COLOR=true (or K6_NO_COLOR=true) will prevent k6 from trying to emit colorful output, i.e. all of the [0m and [2m, etc. escape codes you see.
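
For example, a sketch of setting them in the PowerShell step that invokes k6 (reusing the .ps1 invocation shared earlier; the exact pipeline setup may differ):

# Set the variables for the k6 process itself (not via k6 --env) before invoking k6.
$env:TERM = 'dumb'          # make k6 treat the terminal as non-TTY
$env:K6_NO_COLOR = 'true'   # stop k6 from emitting ANSI color escape codes
k6 run myk6Script.js > C:\K6LoadTest\API\K6_Result.log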

Hi,
Thank you for the hint. I added the below to the task group in Azure DevOps, but it made no difference to the resulting k6 report. Maybe I'm doing it wrong, or doing it in the wrong place?

Ah, I am sorry, I didn’t read the whole thread history! :man_facepalming:

I see now that the problem was not during the test execution itself, but in the end-of-test summary. You are not using the default k6 end-of-test summary, but have defined a custom handleSummary() function instead, and in that function you have textSummary(data, { indent: ' ', enableColors: true }). The problem comes from the enableColors: true - you are manually forcing that JS function to emit colorful output. Instead, you should use enableColors: false to disable that.

Alternatively, you can use enableColors: (!data.options.noColor && data.state.isStdOutTTY) to automatically determine whether colors are enabled based on the environment; this is what k6 does by default if you don't define your own handleSummary() function.
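
Applied to the handleSummary() shared earlier in this thread, that would look roughly like this (a sketch, keeping the local k6-summary copy and the file naming as-is):

import { textSummary, jUnit } from './utils/k6-summary-0.0.3.js';

export function handleSummary(data) {
    console.log('Preparing the end-of-test summary…');

    return {
        // Let the summary follow the environment instead of forcing colors on:
        'stdout': textSummary(data, {
            indent: ' ',
            enableColors: (!data.options.noColor && data.state.isStdOutTTY),
        }),
        ['.\\K6_Result_' + data.setup_data.startTimeOfTest + '.xml']: jUnit(data),
        ['.\\K6_Result_' + data.setup_data.startTimeOfTest + '.json']: JSON.stringify(data),
    };
}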


Hi,
Ah, I see. I'll try that and see if the report looks at least a bit better in Azure.
Thank you very much for your help.
Have a great day.


Hi again
I've now tried it in Azure DevOps and the color coding artifacts are gone, so I'm one step closer to getting rid of the "annoying artifacts". Thank you again for that suggestion.
