Getting long weird errors while running on EC2

Hello,
While running tests, long error logs started printing at great speed. Here’s a screenshot:

Hi, can you provide the output of k6 version and a sample script that reproduces this issue?

It’s the latest version: k6 v0.31.1 (2021-03-17T13:23:23+0000/e9d8349, go1.15.8, linux/amd64)

Installed on an Amazon Linux instance (k6 installed using yum).

I think it’s related to the instance resources, because it works with a small number of VUs; when I increased that from 200 to 1000, it kept crashing.

The instance type is t2.micro, so I guess I’m going to have to change it to a better-suited one.

Thanks for the version, but it would be great if you could provide a small test script that can reproduce this issue.

That long error is a Go panic stack trace, which k6 should never produce, so this might be an issue introduced in v0.31.1. From your screenshot I see that you’re using the constant-vus executor, and one of the goroutines was running an http.batch() call within a group(), but that might not be relevant to the root cause, and it would really help us out with fixing this if we could reproduce it.

Another good test would be to try older k6 versions and see if it happens there too, to pinpoint which version might have introduced this.

The instance type/size shouldn’t matter, as k6 shouldn’t output this even if it hits resource limits.

This is all of the code. You just need to replace the GraphQL query, endpoint, and access token.

import http from "k6/http";
import { check, sleep, group } from "k6";
import { Trend } from "k6/metrics";

function getItems(userId, where, orderBy, numberOfRecords) {
  const variables = {};
  if (where) {
    variables.where = where;
  }
  if (orderBy) {
    variables.orderBy = orderBy;
  }
  if (numberOfRecords) {
    variables.take = numberOfRecords;
  }

  return {
    variables,
    query: `query GetProperties($orderBy: [ItemOrderByInput!], $where: ItemWhereInput, $take: Int) {
      items(where: $where, orderBy: $orderBy, take: $take) {
        id
        title
        description
      }
    }`,
  };
}

const trend1 = new Trend("get items 1", true);
const trend2 = new Trend("get items 2", true);
const trend3 = new Trend("get items 3", true);
const trend4 = new Trend("get items 4", true);
const trend5 = new Trend("get items 5", true);

export let options = {
  vus: 1000, // virtual users
  duration: "1s",
  //httpDebug: 'full',
};

const accessToken = "";

const headers = {
  Authorization: `Bearer ${accessToken}`,
  "Content-Type": "application/json",
};

const endpoints = {
  graphql: "",
};

const SLEEP_DURATION = 5;

const q1 = getItems(
  null,
  {
    ignoredBy: {
      none: {
        id: {
          equals: undefined,
        },
      },
    },
    status: {
      in: ["ACTIVE", "ACTIVEUPDATED"],
    },
  },

  { score: "desc" },
  4
);

const q2 = getItems(
  null,
  {
    category: {
      equals: "FARM",
    },
    type: {
      equals: "VALUE",
    },
    ignoredBy: {
      none: {
        id: {
          equals: undefined,
        },
      },
    },
    status: {
      in: ["ACTIVE", "ACTIVEUPDATED"],
    },
  },
  { createdAt: "desc" },
  4
);

const q3 = getItems(
  null,
  {
    category: {
      equals: "TYPE",
    },
    type: {
      equals: "SALE",
    },
    ignoredBy: {
      none: {
        id: {
          equals: undefined,
        },
      },
    },
    status: {
      in: ["ACTIVE", "ACTIVEUPDATED"],
    },
  },
  { createdAt: "desc" },
  4
);

const q4 = getItems(
  null,
  {
    category: {
      equals: "TYPE",
    },
    type: {
      equals: "VALUE",
    },
    ignoredBy: {
      none: {
        id: {
          equals: undefined,
        },
      },
    },
    status: {
      in: ["ACTIVE", "ACTIVEUPDATED"],
    },
  },
  { createdAt: "desc" },
  4
);

const q5 = getItems(
  null,
  {
    category: {
      equals: "LAND",
    },
    type: {
      equals: "SALE",
    },
    ignoredBy: {
      none: {
        id: {
          equals: undefined,
        },
      },
    },
    status: {
      in: ["ACTIVE", "ACTIVEUPDATED"],
    },
  },
  { createdAt: "desc" },
  4
);

export default function () {
  group("home page", () => {
    const responses = http.batch([
      [
        "POST",
        endpoints.graphql,
        JSON.stringify({
          query: q1.query,
          variables: q1.variables,
        }),
        { headers },
      ],
      [
        "POST",
        endpoints.graphql,
        JSON.stringify({
          query: q2.query,
          variables: q2.variables,
        }),
        { headers },
      ],
      [
        "POST",
        endpoints.graphql,
        JSON.stringify({
          query: q3.query,
          variables: q3.variables,
        }),
        { headers },
      ],
      [
        "POST",
        endpoints.graphql,
        JSON.stringify({
          query: q4.query,
          variables: q4.variables,
        }),
        { headers },
      ],
      [
        "POST",
        endpoints.graphql,
        JSON.stringify({
          query: q5.query,
          variables: q5.variables,
        }),
        { headers },
      ],
    ]);

    check(responses[0], {
      "status was 200 (get items 1)": (r) => r.status == 200,
    });
    trend1.add(responses[0].timings.duration);

    sleep(SLEEP_DURATION);

    check(responses[1], {
      "status was 200 (get items 2)": (r) => r.status == 200,
    });
    trend2.add(responses[1].timings.duration);

    sleep(SLEEP_DURATION);

    check(responses[2], {
      "status was 200 (get items 3)": (r) => r.status == 200,
    });
    trend3.add(responses[2].timings.duration);

    sleep(SLEEP_DURATION);

    check(responses[3], {
      "status was 200 (get items 4)": (r) => r.status == 200,
    });
    trend4.add(responses[3].timings.duration);

    sleep(SLEEP_DURATION);

    check(responses[4], {
      "status was 200 (get items 5)": (r) => r.status == 200,
    });
    trend5.add(responses[4].timings.duration);

    sleep(SLEEP_DURATION);
  });
}

Hmm, strange: I’m not able to reproduce the panic on my machine with that script. Even with 10,000 VUs and a longer duration, at which point my laptop struggles a bit, the script runs fine, save for some dial tcp errors, but no panic. :confused:

I also tried running it inside a centos:7 container, which is similar to the Amazon Linux distro, but it ran fine there as well.

The script is pretty straightforward and I don’t see anything wrong with it.

The only difference for me locally is that I’m testing against a dummy HTTP server and not your GraphQL instance, so the amount of data and the response bodies are different, but I doubt that could be relevant.

At this point I can think of just a few more things:

  • Is there anything peculiar about your test or environment? Are you running in a container? Are you using any outputs? The full k6 command you use to run it would be helpful.
  • Can you share the full log output of your k6 run, so that we can see the entire stack trace? You can run k6 with k6 run script.js 2>&1 | tee log.txt. It would probably be too large to paste here, so share it in a GitHub gist or pastebin.com.
  • If you don’t mind trying older k6 versions and confirming from which version the error starts happening, it would help us determine when the issue was introduced. You can install older versions with e.g. yum downgrade --nogpgcheck k6-0.30.0.

Well, to answer your questions:

1- No, nothing special about the env. Just a plain old t2.micro EC2 instance. I just used the command k6 run loadtest.js

2- Here’s a link to the gist: k6 panic error · GitHub

I tried the command above to downgrade. It produced a much smaller panic log, and it says “out of memory”:

  execution: local
     script: loadtest.js
     output: -

  scenarios: (100.00%) 1 scenario, 1000 max VUs, 31s max duration (incl. graceful stop):
           * default: 1000 looping VUs for 1s (gracefulStop: 30s)

fatal error: runtime: out of memory
Init      [==========>---------------------------] 0296/1000 VUs initialized
runtime stack:
runtime.throw(0x129fd5d, 0x16)
	runtime/panic.go:1116 +0x72
runtime.sysMap(0xc030000000, 0x4000000, 0x1e76d78)
	runtime/mem_linux.go:169 +0xc6
runtime.(*mheap).sysAlloc(0x1e5aa20, 0x400000, 0x7fffffffffff, 0x185edf0)
	runtime/malloc.go:727 +0x1e5
runtime.(*mheap).grow(0x1e5aa20, 0x1, 0x0)
	runtime/mheap.go:1344 +0x85
runtime.(*mheap).allocSpan(0x1e5aa20, 0x1, 0x600, 0x1e76d88, 0x1fe)
	runtime/mheap.go:1160 +0x6b6
runtime.(*mheap).alloc.func1()
	runtime/mheap.go:907 +0x65
runtime.systemstack(0x0)
	runtime/asm_amd64.s:370 +0x66
runtime.mstart()
	runtime/proc.go:1116

goroutine 15 [running]:
runtime.systemstack_switch()
	runtime/asm_amd64.s:330 fp=0xc003010c78 sp=0xc003010c70 pc=0x46b240
runtime.(*mheap).alloc(0x1e5aa20, 0x1, 0x1000000014f0106, 0x0)
	runtime/mheap.go:901 +0x85 fp=0xc003010cc8 sp=0xc003010c78 pc=0x427985
runtime.(*mcentral).grow(0x1e6b4d8, 0x0)
	runtime/mcentral.go:506 +0x7a fp=0xc003010d10 sp=0xc003010cc8 pc=0x418b7a
runtime.(*mcentral).cacheSpan(0x1e6b4d8, 0x7fcfd6246728)
	runtime/mcentral.go:177 +0x3e5 fp=0xc003010d88 sp=0xc003010d10 pc=0x418905
runtime.(*mcache).refill(0x7fcffed0c108, 0x106)
	runtime/mcache.go:142 +0xa5 fp=0xc003010da8 sp=0xc003010d88 pc=0x4182a5
runtime.(*mcache).nextFree(0x7fcffed0c108, 0xc02fffc706, 0x120, 0x110, 0x9515d3)
	runtime/malloc.go:880 +0x8d fp=0xc003010de0 sp=0xc003010da8 pc=0x40d24d
runtime.mallocgc(0x20, 0x1105540, 0x169cc6cc9a135901, 0xc02fffc7e0)
	runtime/malloc.go:1061 +0x834 fp=0xc003010e80 sp=0xc003010de0 pc=0x40dc34
runtime.growslice(0x1105540, 0xc02fffa8e0, 0x1, 0x1, 0x2, 0xc02fffa800, 0x0, 0x1)
	runtime/slice.go:230 +0x1e9 fp=0xc003010ee8 sp=0xc003010e80 pc=0x44e489
github.com/dop251/goja.(*baseObject)._put(...)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/object.go:808
github.com/dop251/goja.(*baseObject)._putProp(0xc02ffdda40, 0xc0041960fa, 0x1, 0x14f37c0, 0x1e7366a, 0x10101, 0xc02fff7800, 0x9a4ffa)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/object.go:828 +0x1ae fp=0xc003010f48 sp=0xc003010ee8 pc=0x9547ce
github.com/dop251/goja.setProp1.exec(0xc0041960fa, 0x1, 0xc02efedce0)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/vm.go:1087 +0xe2 fp=0xc003010f98 sp=0xc003010f48 pc=0x99ce82
github.com/dop251/goja.(*setProp1).exec(0xc003753330, 0xc02efedce0)
	<autogenerated>:1 +0x4f fp=0xc003010fc0 sp=0xc003010f98 pc=0x9d75cf
github.com/dop251/goja.(*vm).run(0xc02efedce0)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/vm.go:307 +0xa3 fp=0xc003011000 sp=0xc003010fc0 pc=0x9984e3
github.com/dop251/goja.(*funcObject).call(0xc02fe4ab60, 0x14f3180, 0xc02fe63950, 0xc02fe600b0, 0x3, 0x1f3, 0x0, 0x0, 0x0, 0x0)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/func.go:161 +0x33a fp=0xc003011078 sp=0xc003011000 pc=0x94e13a
github.com/dop251/goja.(*funcObject).Call(...)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/func.go:129
github.com/dop251/goja.(*funcObject).Call-fm(0x14f3180, 0xc02fe63950, 0xc02fe600b0, 0x3, 0x1f3, 0x14a6f68, 0x0)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/func.go:128 +0x7b fp=0xc003011100 sp=0xc003011078 pc=0x9d037b
github.com/dop251/goja.(*Runtime).functionproto_call(0xc02fa8f500, 0x14f3180, 0xc02fe44c60, 0xc02fe600a0, 0x4, 0x1f4, 0x17e, 0x17f)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/builtin_function.go:109 +0x165 fp=0xc0030111b0 sp=0xc003011100 pc=0x8c35a5
github.com/dop251/goja.(*Runtime).functionproto_call-fm(0x14f3180, 0xc02fe44c60, 0xc02fe600a0, 0x4, 0x1f4, 0x14f3101, 0xc02fe44c60)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/builtin_function.go:102 +0x48 fp=0xc003011200 sp=0xc0030111b0 pc=0x9c6488
github.com/dop251/goja.(*vm)._nativeCall(0xc02efedce0, 0xc02fdfdec0, 0x4)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/vm.go:1818 +0x2c2 fp=0xc003011288 sp=0xc003011200 pc=0x9a3882
github.com/dop251/goja.call.exec(0x4, 0xc02efedce0)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/vm.go:1790 +0xaeb fp=0xc003011348 sp=0xc003011288 pc=0x9a348b
github.com/dop251/goja.(*call).exec(0x1c46100, 0xc02efedce0)
	<autogenerated>:1 +0x45 fp=0xc003011368 sp=0xc003011348 pc=0x9d4ce5
github.com/dop251/goja.(*vm).run(0xc02efedce0)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/vm.go:307 +0xa3 fp=0xc0030113a8 sp=0xc003011368 pc=0x9984e3
github.com/dop251/goja.(*vm).run-fm()
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/vm.go:299 +0x2a fp=0xc0030113c0 sp=0xc0030113a8 pc=0x9d11ca
github.com/dop251/goja.(*vm).try(0xc02efedce0, 0xc0030114b0, 0x0)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/vm.go:413 +0x163 fp=0xc003011498 sp=0xc0030113c0 pc=0x998b63
github.com/dop251/goja.(*vm).runTry(0xc02efedce0, 0xc02fd6f501)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/vm.go:418 +0x4e fp=0xc0030114d0 sp=0xc003011498 pc=0x998c6e
github.com/dop251/goja.(*Runtime).RunProgram(0xc02fa8f500, 0xc0033487e0, 0x0, 0x0, 0x0, 0x0)
	github.com/dop251/goja@v0.0.0-20210111190058-952c20e23c35/runtime.go:1220 +0x1f8 fp=0xc003011540 sp=0xc0030114d0 pc=0x97d2b8
github.com/loadimpact/k6/js.(*Bundle).instantiate(0xc003a1e580, 0x14f7540, 0xc0001c6150, 0xc02fa8f500, 0xc02fd85950, 0x12b, 0x10fa5e0, 0xc002646000)
	github.com/loadimpact/k6/js/bundle.go:286 +0xa33 fp=0xc003011660 sp=0xc003011540 pc=0xfaf213
github.com/loadimpact/k6/js.(*Bundle).Instantiate(0xc003a1e580, 0x14f7540, 0xc0001c6150, 0x12b, 0xc003011dc0, 0x0, 0x0)
	github.com/loadimpact/k6/js/bundle.go:242 +0x106 fp=0xc003011be0 sp=0xc003011660 pc=0xfae166
github.com/loadimpact/k6/js.(*Runner).newVU(0xc000b46780, 0x12b, 0xc003a21ce0, 0x4069bd, 0xc000d36180, 0xc003a442a0)
	github.com/loadimpact/k6/js/runner.go:137 +0x68 fp=0xc003011e58 sp=0xc003011be0 pc=0xfb1c68
github.com/loadimpact/k6/js.(*Runner).NewVU(0xc000b46780, 0x12b, 0xc003a21ce0, 0x12, 0xc003a44238, 0xc000d361d8, 0x1)
	github.com/loadimpact/k6/js/runner.go:127 +0x45 fp=0xc003011e98 sp=0xc003011e58 pc=0xfb1b85
github.com/loadimpact/k6/core/local.(*ExecutionScheduler).initVU(0xc001fa8580, 0xc003a21ce0, 0xc000133180, 0x14d5220, 0xc02fc73560, 0x0, 0x0)
	github.com/loadimpact/k6/core/local/local.go:162 +0x7b fp=0xc003011f38 sp=0xc003011e98 pc=0xfe1a3b
github.com/loadimpact/k6/core/local.(*ExecutionScheduler).initVUsConcurrently.func1(0xc000d36180, 0xc001fa8580, 0xc003a21ce0, 0xc000133180, 0xc003a441e0)
	github.com/loadimpact/k6/core/local/local.go:201 +0xa8 fp=0xc003011fb8 sp=0xc003011f38 pc=0xfe4d68
runtime.goexit()
	runtime/asm_amd64.s:1374 +0x1 fp=0xc003011fc0 sp=0xc003011fb8 pc=0x46ce81
created by github.com/loadimpact/k6/core/local.(*ExecutionScheduler).initVUsConcurrently
	github.com/loadimpact/k6/core/local/local.go:199 +0xc5

goroutine 1 [select]:
github.com/loadimpact/k6/core/local.(*ExecutionScheduler).Init(0xc001fa8580, 0x14e28e0, 0xc000f0b300, 0xc003a21ce0, 0x0, 0x0)
	github.com/loadimpact/k6/core/local/local.go:252 +0x54c
github.com/loadimpact/k6/core.(*Engine).Init(0xc000acec00, 0x14e28e0, 0xc000f0b280, 0x14e28e0, 0xc000f0b300, 0xc002f8bc30, 0x0, 0x3e8, 0x1)
	github.com/loadimpact/k6/core/engine.go:128 +0xcd
github.com/loadimpact/k6/cmd.getRunCmd.func1(0xc0001cb900, 0xc00016f010, 0x1, 0x1, 0x0, 0x0)
	github.com/loadimpact/k6/cmd/run.go:256 +0x1573
github.com/spf13/cobra.(*Command).execute(0xc0001cb900, 0xc00016efe0, 0x1, 0x1, 0xc0001cb900, 0xc00016efe0)
	github.com/spf13/cobra@v0.0.4-0.20180629152535-a114f312e075/command.go:762 +0x47c
github.com/spf13/cobra.(*Command).ExecuteC(0xc0001adb80, 0xc0006d5f00, 0xc, 0xc)
	github.com/spf13/cobra@v0.0.4-0.20180629152535-a114f312e075/command.go:852 +0x2fe
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v0.0.4-0.20180629152535-a114f312e075/command.go:800
github.com/loadimpact/k6/cmd.Execute()
	github.com/loadimpact/k6/cmd/root.go:198 +0x571
main.main()
	github.com/loadimpact/k6/main.go:28 +0x25

goroutine 9 [select]:
github.com/loadimpact/k6/cmd.showProgress(0x14e28e0, 0xc000f0b3c0, 0x0, 0x3e8, 0x1, 0x3b9aca00, 0x1, 0x0, 0x0, 0x0, ...)
	github.com/loadimpact/k6/cmd/ui.go:339 +0x467
github.com/loadimpact/k6/cmd.getRunCmd.func1.1(0xc001fa8580, 0x14e28e0, 0xc000f0b3c0, 0xc00274d100, 0xc0001c6150, 0xc002daf740)
	github.com/loadimpact/k6/cmd/run.go:190 +0x1e5
created by github.com/loadimpact/k6/cmd.getRunCmd.func1
	github.com/loadimpact/k6/cmd/run.go:185 +0xdab

goroutine 8 [select]:
io.(*pipe).Read(0xc000064c00, 0xc000552000, 0x1000, 0x1000, 0x10abfe0, 0x1, 0xc000552000)
	io/pipe.go:57 +0xe7
io.(*PipeReader).Read(0xc00000e528, 0xc000552000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	io/pipe.go:134 +0x4c
bufio.(*Scanner).Scan(0xc000718f38, 0x0)
	bufio/scan.go:214 +0xa9
github.com/sirupsen/logrus.(*Entry).writerScanner(0xc0001c6310, 0xc00000e528, 0xc00016f020)
	github.com/sirupsen/logrus@v1.6.0/writer.go:59 +0xb4
created by github.com/sirupsen/logrus.(*Entry).WriterLevel
	github.com/sirupsen/logrus@v1.6.0/writer.go:51 +0x1b7

goroutine 10 [IO wait]:
internal/poll.runtime_pollWait(0x7fcfd80a2398, 0x72, 0x0)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000e77598, 0x72, 0x0, 0x0, 0x1290187)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc000e77580, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:394 +0x1fc
net.(*netFD).accept(0xc000e77580, 0xc003676090, 0x40e0d8, 0xc0024f0400)
	net/fd_unix.go:172 +0x45
net.(*TCPListener).accept(0xc000d8c360, 0xc000719cf0, 0x40e0d8, 0x30)
	net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc000d8c360, 0x11c9740, 0xc003676090, 0x10ed420, 0x1e22320)
	net/tcpsock.go:261 +0x65
net/http.(*Server).Serve(0xc0013ced20, 0x14deb20, 0xc000d8c360, 0x0, 0x0)
	net/http/server.go:2937 +0x266
net/http.(*Server).ListenAndServe(0xc0013ced20, 0xc0013ced20, 0xc000f0b400)
	net/http/server.go:2866 +0xb7
net/http.ListenAndServe(...)
	net/http/server.go:3120
github.com/loadimpact/k6/api.ListenAndServe(0x1297e87, 0xe, 0xc000acec00, 0x14f7540, 0xc0001c6150, 0x1, 0x1)
	github.com/loadimpact/k6/api/server.go:53 +0x337
github.com/loadimpact/k6/cmd.getRunCmd.func1.2(0xc0001c6150, 0xc000acec00, 0xc0001cb900)
	github.com/loadimpact/k6/cmd/run.go:221 +0xf5
created by github.com/loadimpact/k6/cmd.getRunCmd.func1
	github.com/loadimpact/k6/cmd/run.go:219 +0x1fcf

goroutine 12 [syscall]:
os/signal.signal_recv(0x46ce81)
	runtime/sigqueue.go:147 +0x9d
os/signal.loop()
	os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
	os/signal/signal.go:150 +0x45

goroutine 14 [chan receive]:
github.com/loadimpact/k6/cmd.getRunCmd.func1.3(0xc003a21d40, 0xc0001c6150, 0xc002f8bc40, 0xc002f8bc30)
	github.com/loadimpact/k6/cmd/run.go:242 +0x45
created by github.com/loadimpact/k6/cmd.getRunCmd.func1
	github.com/loadimpact/k6/cmd/run.go:241 +0x1499

goroutine 16 [select]:
github.com/loadimpact/k6/core/local.(*ExecutionScheduler).initVUsConcurrently.func2(0xc000d36180, 0x3e8, 0x14e28e0, 0xc000f0b600)
	github.com/loadimpact/k6/core/local/local.go:213 +0x127
created by github.com/loadimpact/k6/core/local.(*ExecutionScheduler).initVUsConcurrently
	github.com/loadimpact/k6/core/local/local.go:210 +0x125

Ah, there’s the issue in the first line: runtime: out of memory. I was able to reproduce it in a memory-limited VM.

So I was wrong above when I said that k6 shouldn’t output this even if it hits resource limits. This is one of the few cases where it will: when the process runs out of memory, the Go runtime dumps the stack traces of all running goroutines. Since you’re testing with 1000 VUs, that’s a lot of goroutines, which leads to that large output.

Unfortunately, AFAIK there’s no way for a Go program to catch and suppress this output, unlike with other panic events.

So the suggestions in this case apply to any OOM situation:

  • Keep in mind that each VU is a separate JavaScript VM, which needs to load all of your JS dependencies and can have memory leaks. A single VU needs approximately 1-5MB, but in complex scripts, when doing file uploads, etc., this can be upwards of 10MB or more. A 1000-VU test on a t2.micro instance should’ve been a hint for me :slight_smile:
  • Consider using the discardResponseBodies or responseType: 'none' options if you don’t need the response bodies. In your script you’re only checking the response status, so this should be safe to enable (see the sketch right after this list).
  • A relatively simple way to reduce memory usage is to stick to --compatibility-mode=base, either by writing your script with ES5.1 features only or by running it through Babel beforehand, which gives you base ES5.1 compatibility.
    We’ve made substantial efforts to optimize --compatibility-mode=extended recently, so the difference is not as pronounced in recent versions, but it would still be noticeable in most cases.
  • If after all optimizations you’re still having issues, the only remaining option is to reduce the number of VUs or scale up the hardware.
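To make the second suggestion concrete, here is a minimal sketch of the options block from your script with discardResponseBodies enabled (everything else in the script stays the same); since the checks only read r.status, nothing this script needs is lost:

export let options = {
  vus: 1000, // virtual users
  duration: "1s",
  // Don't keep response bodies in memory; the checks in this script
  // only look at r.status, so the bodies are never used.
  discardResponseBodies: true,
};

And for the third suggestion, base mode is just a CLI flag, e.g. k6 run --compatibility-mode=base loadtest.js, though this particular script would first have to be transpiled with Babel, since it uses template literals, const/let, and arrow functions.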

Good luck!
