k6 New Relic GitLab integration

I've searched around but still haven't found a solid solution, so I'm asking the community for help.

My context is the following:

docker run \
  -d --restart unless-stopped \
  --name newrelic-statsd \
  -h $(hostname) \
  -e NR_ACCOUNT_ID=YOUR_ACCOUNT_ID \
  -e NR_API_KEY=YOUR_INSERT_API_KEY \
  -p 8125:8125/udp \
  newrelic/nri-statsd:2.0.0
However, the issue is with GitLab CI: I could not manage to make localhost:8125 available there, hence the results could not be sent to New Relic.

I would appreciate it if anyone who has experienced the same issue could share a solution.

Hi there, welcome to the forum :slight_smile:

I’m not deeply familiar with GitLab CI, but couldn’t you start the New Relic container as part of the services key in the .gitlab-ci.yml? See their documentation for examples.

You should be able to specify all the arguments and environment variables you need there, and then target the container in k6 with the StatsD environment variables. Though I’m unsure how GitLab CI names the services or whether you’ll have to reference them by IP, so some testing is needed there.
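As a rough, untested sketch of what I mean (the k6 image name, service alias, and job layout are assumptions on my part):

```yaml
# hypothetical .gitlab-ci.yml job; image name and alias are assumptions
k6-load-test:
  stage: test
  image: grafana/k6:latest
  services:
    - name: newrelic/nri-statsd:2.0.0
      alias: statsd
  variables:
    # job variables are also passed to service containers,
    # which is how the New Relic credentials reach nri-statsd
    NR_ACCOUNT_ID: "YOUR_ACCOUNT_ID"
    NR_API_KEY: "YOUR_INSERT_API_KEY"
    K6_STATSD_ADDR: "statsd:8125"
  script:
    - k6 run --out statsd test.js
```

The alias is what you would target from k6, but as said, how the service is addressable depends on the runner configuration.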

HTH,

Ivan

Hey, I'm not familiar with GitLab either, but you can run the New Relic integration remotely; it doesn't need to be on your localhost. Have you tried that? I think it should work.

This should in theory work by just pointing the output at a remote host running the New Relic StatsD integration.
Based at least on this guide, in the script section you would just have to change the command to K6_STATSD_ADDR=<AWS Public IP or hostname>:8125 k6 run ./loadtests/performance-test.js, as that overrides the localhost address which is set by default.

I have just tried this by spinning up the integration on an AWS t2.micro to act as a relay: the k6 client on my laptop outputs metrics to the New Relic StatsD integration running on the AWS instance, which relays them into my New Relic account.

Here’s what I did:

  1. Spin up an EC2 t2.micro, configure it. I used Amazon Linux 2.
  2. Set up a security group so you can SSH into it to install the integration (port 22), and allow incoming/outgoing traffic on port 8125 (StatsD)
  3. Install Docker
sudo amazon-linux-extras install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo chkconfig docker on
  4. Run the New Relic StatsD integration (the docker run command from the first post)
  5. Run docker ps -a to check the container is there
  6. When running my k6 test from my local client, set the env var like: K6_STATSD_ADDR=1.22.33.44:8125 k6 run --out statsd test.js
  7. In New Relic you should then see the hostname of the AWS instance reporting metrics
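Put together, the "run the integration" and k6 steps might look like this on the instance and the client (the credentials and IP are placeholders; the docker run command is the same one from the first post):

```shell
# on the EC2 instance: start the New Relic StatsD integration
docker run \
  -d --restart unless-stopped \
  --name newrelic-statsd \
  -h $(hostname) \
  -e NR_ACCOUNT_ID=YOUR_ACCOUNT_ID \
  -e NR_API_KEY=YOUR_INSERT_API_KEY \
  -p 8125:8125/udp \
  newrelic/nri-statsd:2.0.0

# verify the container is up
docker ps -a

# on the local client: point k6 at the instance's public IP
K6_STATSD_ADDR=1.22.33.44:8125 k6 run --out statsd test.js
```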

So you just have to adapt step 6: set the env var to point at your remote host running the integration.

There may be a smarter way of doing this by having GitLab run the docker container (?? no idea sorry)

Let me know if that solves it for you :smiley:

1 Like

@imiric @gspncr Thanks for your input, much appreciated. With some help from our DevOps team, I managed to make it work on GitLab CI, so I'll post it here in case anyone stuck like me is looking for a way out.

All right, so in my k6 project, I created a Dockerfile, something like this:

FROM peternguyentr/node-java-chrome:latest

ADD . /k6-test/

WORKDIR /k6-test

ENV CI=true

RUN npm install

And docker-compose.yml

version: "3.3"
services:
  k6-test:
    build: .
    command: npm run k6-test
    depends_on:
      - "statsd"
    network_mode: host
    tty: true

  statsd:
    image: newrelic/nri-statsd:latest
    environment:
      - NR_ACCOUNT_ID=xxxx
      - NR_API_KEY=NRII-xxxxx
    network_mode: host
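For reference, the npm run k6-test script that the Compose file invokes would be something along these lines in package.json (the exact test path is an assumption; only the script name comes from the Compose file):

```json
{
  "scripts": {
    "k6-test": "k6 run --out statsd ./loadtests/performance-test.js"
  }
}
```

Note that with network_mode: host on both services, k6's default StatsD address of localhost:8125 already reaches the statsd container, so no K6_STATSD_ADDR override is needed.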

Last but not least is the .gitlab-ci.yml:

...
  stage: test
  image: docker:19
  services:
    - docker:19-dind
  before_script:
    - apk add docker-compose --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted

  script:
    - docker-compose up --abort-on-container-exit
...

Then the pipeline is up and running :slight_smile: Hope this helps.

2 Likes