Using compatibility-mode base


I can’t understand what I’m doing wrong.

package.json:

```json
{
  "name": "k6-test",
  "version": "1.0.0",
  "main": "src/main.js",
  "devDependencies": {
    "@babel/core": "^7.14.6",
    "@babel/plugin-transform-block-scoping": "^7.14.5",
    "@babel/plugin-transform-classes": "^7.14.5",
    "@babel/preset-env": "^7.14.7",
    "babel-loader": "^8.2.2",
    "prettier": "2.3.2",
    "webpack": "^5.40.0",
    "webpack-cli": "^4.7.2"
  },
  "scripts": {
    "webpack": "webpack",
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}
```

Babel config (truncated):

```json
{
  "presets": [ … ],
  "plugins": [ … ]
}
```
webpack.config.js:

```javascript
var path = require("path");
var webpack = require("webpack");

module.exports = {
    mode: "production",
    entry: "./src/main.js",
    output: {
        path: path.resolve(__dirname, "build"),
        libraryTarget: "commonjs",
        filename: "app.bundle.js",
    },
    module: {
        rules: [
            {
                test: /\.js$/,
                loader: "babel-loader",
            },
        ],
    },
    stats: {
        colors: true,
    },
    target: ["web", "es5"],
    externals: /^(k6|https?\:\/\/)(\/.*)?/,
    devtool: "source-map",
};
```
asinotov@fedora ➜  k6-test git:(k6) ✗ npm run-script webpack && k6 run --compatibility-mode=base build/app.bundle.js

> k6-test@1.0.0 webpack
> webpack

asset app.bundle.js 4.07 KiB [compared for emit] [minimized] (name: main) 1 related asset
orphan modules 5.87 KiB [orphan] 9 modules
runtime modules 997 bytes 4 modules
./src/main.js + 9 modules 7.14 KiB [not cacheable] [built] [code generated]
webpack 5.41.1 compiled successfully in 719 ms

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

ERRO[0000] Line 2:1 Unexpected reserved word, Line 3:3 Unexpected token return, Line 9:1 Unexpected reserved word, Line 9:8 Unexpected token function, Line 9:44 Unexpected token {, Line 10:3 Illegal return statement, Line 14:3 Illegal return statement, Line 21:3 Illegal return statement, Line 27:5 Illegal return statement, Line 32:5 Illegal return statement, Line 34:3 Illegal return statement, Line 37:1 Unexpected end of input, Line 37:1 Unexpected end of input
        at reflect.methodValueCall (native)
        at file:///home/asinotov/CLionProjects/gws-graph/k6-test/build/app.bundle.js:1:681(65)
        at file:///home/asinotov/CLionProjects/gws-graph/k6-test/build/app.bundle.js:1:4125(2)  hint="script exception"

Hm… Ok, webpack doesn’t pack it. Can I use k6-utils with base mode?

Hi @BratSinot ,

Hm… Ok, webpack doesn’t pack it. Can I use k6-utils with base mode?

Not without transpiling it through babel.
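For reference, one way to do that with the webpack setup from the first post is to make babel-loader process that package too. This is only a sketch: the `exclude` pattern and the `k6-utils` path under node_modules are assumptions, adjust them to wherever the library actually lives in your project.

```javascript
// webpack.config.js (fragment) — hypothetical sketch.
// Many setups exclude node_modules from babel-loader; carve out an
// exception so the k6-utils package gets transpiled to ES5 as well.
module: {
    rules: [
        {
            test: /\.js$/,
            loader: "babel-loader",
            // transpile our own sources plus k6-utils,
            // skip the rest of node_modules
            exclude: /node_modules\/(?!k6-utils\/)/,
        },
    ],
},
```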

To be honest, --compatibility-mode=base should practically not be something you set from v0.31.0 onward, when we dropped core-js.

Prior to that, the flag both dropped core-js and skipped the babel transpilation.

core-js was adding around 2MB of memory usage per VU and took some milliseconds to run for each VU. Now that we don’t have core-js, the only thing --compatibility-mode=base does is prevent you from using the in-k6 babel transpilation. That transpilation will not kick in unless you need it, i.e. your script doesn’t work without it, and even then it only kicks in for the files that need it.

So while this can prevent you from adding code that isn’t --compatibility-mode=base compatible, it does not actually make already-compatible code any faster, which it did back when core-js was added to everything.

So, long story short: if you use k6 >= v0.31.0, you should probably not add --compatibility-mode=base.
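To make that concrete, here is a minimal plain-JS sketch (not k6-specific) of the difference: base mode only accepts ES5.1 syntax, so any ES2015+ construct needs transpilation first.

```javascript
// ES2015+ version: `const` and an arrow function. Under
// --compatibility-mode=base this is a syntax error, so it would need
// transpilation (the in-k6 babel, or webpack/babel externally).
const double = (x) => x * 2;

// ES5.1 equivalent that base mode accepts as-is:
var doubleEs5 = function (x) {
    return x * 2;
};

console.log(double(21), doubleEs5(21)); // 42 42
```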

Em… Ok, no base mode then. But k6 is using too much RAM. My test basically does nothing, but k6 uses about 20-28GiB of RAM for ~28k WS connections.

That seems a bit much; given what you describe, I would expect around 1/4-1/2 of that. As I mentioned above, compatibility-mode=base will at this point no longer help you (much) with this problem.

I would ask:

  1. Are you using the latest k6 version? (v0.33.0 was released yesterday.) If not, upgrade to it.
  2. Are you reading any files that you need as input? If yes, use SharedArray.
  3. Are you importing a lot of JS code? JS code needs to be copied and run per VU, as VUs are completely separate JS VMs, so importing some big library can be a problem.
  4. (Maybe this should be 1.) If the memory usage isn’t a problem for you, you should probably not optimize it, just saying :wink:
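On point 2, a minimal sketch of what SharedArray usage looks like (this runs under k6, not node; the `users.json` filename is made up). Without it, every VU parses and keeps its own copy of the data; with it, all VUs share one read-only copy.

```javascript
import { SharedArray } from "k6/data";

// The setup callback runs once; its result is stored read-only and
// shared across all VUs instead of being copied per VU.
const users = new SharedArray("users", function () {
    return JSON.parse(open("./users.json"));
});

export default function () {
    const user = users[__VU % users.length];
    // ... use `user` in your requests ...
}
```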

Also, please read Running large tests for other general tips. For the record, I ran 10k VUs with less than 3.5GB for one minute.

This part, for example, is especially relevant for long-running tests, as k6 otherwise needs to keep all the metrics locally, even while you are already sending them to some output for later investigation.

Hope this helps you, and if not, a representative code sample will help with more relevant advice :wink:

  1. yes;
  2. no;
  3. k6/ws, k6/http, check from k6;
  4. Memory usage is a problem: when I run 25k+25k connections on my AWS / Kubernetes setup, memory consumption gets so high that I just get exit code 137 (SIGKILL) for my k6 pods.

Does this happen after some time? If so, does running with --no-summary --no-thresholds fix it?

Nope, still a lot of memory.

I can show you one of graphs.

Without some additional info about the script I can only guess that:

  1. you are generating too many metrics (although in that case the graph would keep growing);
  2. your script is using too much memory for some reason, probably one of the above-mentioned ones;
  3. you are running into errors and k6 starts busy-looping as it has nothing else to do. Do you have the output of k6? Does it have any errors in it?

Now, for testing, I run k6 locally (so as not to build the container every time).

  1. I tried things like removing extra vars and other stuff, but nothing changed (I’m not a JS developer, so I know nothing about JS memory optimization).
```shell
npm run-script webpack
/usr/bin/time -v k6 run build/app.bundle.js
```

up to date, audited 177 packages in 755ms

19 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

> k6-test@1.0.0 webpack
> webpack

asset app.bundle.js 3.52 KiB [compared for emit] [minimized] (name: main) 1 related asset
orphan modules 4.93 KiB [orphan] 9 modules
runtime modules 937 bytes 4 modules
./src/main.js + 9 modules 6.05 KiB [not cacheable] [built] [code generated]
webpack 5.41.1 compiled successfully in 550 ms

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: build/app.bundle.js
     output: -

  scenarios: (100.00%) 1 scenario, 25000 max VUs, 1m0.6s max duration (incl. graceful stop):
           * default: Up to 25000 looping VUs for 1m0.625s over 126 stages (gracefulRampDown: 0s)

running (1m01.1s), 00000/25000 VUs, 0 complete and 25000 interrupted iterations
default ✓ [======================================] 06148/25000 VUs  1m0.625s
WARN[0067] No script iterations finished, consider making the test duration longer

     ✓ WebSocket /graphql status is 101

     █ setup

       ✓ status of POST /login is 200
       ✓ have token

     checks.........................: 100.00% ✓ 2           ✗ 0
     data_received..................: 115 MB  1.9 MB/s
     data_sent......................: 36 MB   590 kB/s
     http_req_blocked...............: avg=171.69µs min=171.69µs med=171.69µs max=171.69µs p(90)=171.69µs p(95)=171.69µs
     http_req_connecting............: avg=137.2µs  min=137.2µs  med=137.2µs  max=137.2µs  p(90)=137.2µs  p(95)=137.2µs
     http_req_duration..............: avg=4.98ms   min=4.98ms   med=4.98ms   max=4.98ms   p(90)=4.98ms   p(95)=4.98ms
       { expected_response:true }...: avg=4.98ms   min=4.98ms   med=4.98ms   max=4.98ms   p(90)=4.98ms   p(95)=4.98ms
     http_req_failed................: 0.00%   ✓ 0           ✗ 1
     http_req_receiving.............: avg=51.26µs  min=51.26µs  med=51.26µs  max=51.26µs  p(90)=51.26µs  p(95)=51.26µs
     http_req_sending...............: avg=59.06µs  min=59.06µs  med=59.06µs  max=59.06µs  p(90)=59.06µs  p(95)=59.06µs
     http_req_tls_handshaking.......: avg=0s       min=0s       med=0s       max=0s       p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=4.87ms   min=4.87ms   med=4.87ms   max=4.87ms   p(90)=4.87ms   p(95)=4.87ms
     http_reqs......................: 1       0.016358/s
     iteration_duration.............: avg=5.3ms    min=5.3ms    med=5.3ms    max=5.3ms    p(90)=5.3ms    p(95)=5.3ms
     vus............................: 7655    min=7655      max=25000
     vus_max........................: 25000   min=25000     max=25000
     ws_connecting..................: avg=5.73s    min=2.51ms   med=6.51s    max=13.95s   p(90)=8.06s    p(95)=9.63s
     ws_msgs_received...............: 240760  3938.356627/s
     ws_msgs_sent...................: 33968   555.649186/s
     ws_sessions....................: 25000   408.950472/s

        Command being timed: "k6 run build/app.bundle.js"
        User time (seconds): 75.75
        System time (seconds): 39.11
        Percent of CPU this job got: 168%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:08.25
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 14627396
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 3914087
        Voluntary context switches: 1205399
        Involuntary context switches: 86503
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

My test creates a WS connection and starts a GraphQL subscription, that’s all.

I see that you have 126 stages, and I guess you are reconnecting as mentioned in Mass reconnect test, so can you, just for the test, not reconnect but have 1 stage?

Also on a related note … given

WARN[0067] No script iterations finished, consider making the test duration longer

and the fact that all 25000 iterations were interrupted, I think the trick with gracefulRampDown is not working :thinking:

You can send me a redacted copy of the script on a DM if you don’t want to make it public.

~4k msgs received and ~550 sent per second don’t sound like enough to explain what looks like 2.5x more memory usage over 1 minute. And the data received/sent also doesn’t seem that bad :thinking:

I guess if you are using some kind of unicode in your messages the utf-8 <-> utf-16 conversion can do something, but even that seems unlikely, so I am out of guesses :man_shrugging:

Also, is there any difference between running it with and without webpack? I would guess webpack will make things worse in this particular case, as it mainly does externally what babel does internally, as I mentioned earlier.

About stages. I create stages like this:

```javascript
stages: [
    { "duration": "5ms", "target": 200 },
    { "duration": "5ms", "target": 400 },
    // … more 5ms steps …
    { "duration": "5ms", "target": 28000 },
    { "duration": "1m", "target": "28000" },
],
```

I need this method to connect TOTAL_NUM clients in portions (connect N clients, wait M time).
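Rather than hand-writing 140+ stage objects, that batched ramp-up can be generated. A plain-JS sketch under the numbers from this thread (batches of 200 every 5ms up to 28000, then hold for 1m); `buildStages` is a hypothetical helper name, not a k6 API.

```javascript
// Build a k6 `stages` array that ramps up to `total` VUs in batches
// of `step`, holding `pause` between batches, then keeps the full
// load for `holdTime`.
function buildStages(total, step, pause, holdTime) {
    var stages = [];
    for (var target = step; target <= total; target += step) {
        stages.push({ duration: pause, target: target });
    }
    stages.push({ duration: holdTime, target: total });
    return stages;
}

var stages = buildStages(28000, 200, "5ms", "1m");
console.log(stages.length); // 141: 140 ramp steps plus the final hold
console.log(JSON.stringify(stages[0])); // {"duration":"5ms","target":200}
```

In the script's exported `options` this would be used as `stages: buildStages(28000, 200, "5ms", "1m")`.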

Also, there is no difference between running it with webpack and without it.