Different AWS regions and latency measurement

We want to design a prototype. For this, we already have AWS EC2 instances in different regions (Ireland, Asia Pacific, etc.). Currently these are t2.small instances. As it's just a prototype, we would call the API with 1-2 VUs for 60 seconds from different regions.

1 - Is it possible to hit the API from different AWS regions (without Load Impact) in k6?

2 - If yes, what challenges might we face in setting this up in AWS EC2, e.g. infra-wise?

Hi @vishal.lanke,

k6 is an AGPL-3.0-licensed FOSS product, so you can use it as you like, according to the license. That said, we have a cloud offering, a.k.a. k6 Cloud, that provides more features that are not available out of the box in k6 (well, k6 is just a tool), like distributed execution, results/performance analysis, dashboards, …

Regarding your questions:

  1. You can hit your API from wherever you like, without the k6 Cloud offering, using Docker or the CLI tool, as shown in the installation and running k6 docs.
  2. When you want to run your own infra with the k6 tool, you are the one who manages the infra, and you're the one responsible for its challenges. But, as I said, you have the Docker image and the CLI tool available; you just need to feed them your script and you're good to go. You can also visualize your results with InfluxDB and Grafana, among others.
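For instance, a minimal k6 script matching the prototype described above (2 VUs for 60 seconds) could look like the sketch below; the URL is a placeholder you would swap for your own API endpoint:

```javascript
// script.js - minimal k6 load script (2 VUs, 60 seconds)
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 2,          // number of virtual users
  duration: '60s', // total test duration
};

export default function () {
  // Placeholder endpoint; replace with your own API
  const res = http.get('https://test.k6.io/');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

You would then run it on each EC2 instance with `k6 run script.js`, or via the Docker image with something like `docker run --rm -i grafana/k6 run - <script.js` (the image name may differ depending on the k6 version; older releases were published under a different organization).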

Thanks for the reply.

Is there any configuration where I can just specify regions and the APIs will get called from those regions?

Let's say my server is in Asia Pacific, and I configure region("Ireland"), region("Europe"), and the APIs will be triggered from those regions …

AFAIK, the option to specify AWS regions only works in our Cloud infrastructure; k6 has no internal mechanism to control regions. This means that you would have to run your own instance of k6 on your own EC2 or ECS instance in each pre-selected region.

For the record, here is the documentation on how to configure k6 to run in multiple regions in the cloud.
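As a sketch, the cloud configuration described in those docs lives in the script's `options`; the load-zone IDs below follow the `amazon:<country>:<city>` format from the k6 Cloud docs, and the zones and percentages here are just examples:

```javascript
export const options = {
  ext: {
    loadimpact: {
      distribution: {
        // 50% of the load generated from Dublin, 50% from Tokyo
        ireland: { loadZone: 'amazon:ie:dublin', percent: 50 },
        tokyo: { loadZone: 'amazon:jp:tokyo', percent: 50 },
      },
    },
  },
};
```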

If you decide to use the cloud, it also provides a test builder, a UI for generating scripts, which also has a UI for configuring which regions the script should run from.

There is also a mechanism for recognizing, from inside the script, which region the script is running in (see the table with environment variables at the end of that section), so that you can change behavior if required.
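Inside the script, that could look like the sketch below; `LI_LOAD_ZONE` is one of the variables from that table (check the docs for the exact set injected by your k6 Cloud version), and the URLs are hypothetical:

```javascript
import http from 'k6/http';

export default function () {
  // In k6 Cloud, LI_LOAD_ZONE identifies the load zone this instance runs in
  if (__ENV.LI_LOAD_ZONE === 'amazon:ie:dublin') {
    // e.g. hit a region-local endpoint from the Ireland zone (hypothetical URL)
    http.get('https://eu.example.com/api');
  } else {
    http.get('https://example.com/api');
  }
}
```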

But k6, in and of itself, doesn't have a way of running anything other than itself locally, at this point.

You can try the free trial, but multiple regions in the same test are limited to paid subscriptions. You can at least test your site from multiple regions one after another, in separate tests.

I will check the cloud infrastructure to see if it suits us. Thanks…

Off-topic question: you wrote ECS. We have hosted InfluxDB and Grafana in ECS.
Can we run a k6 container in ECS? Then, what about script execution: how can I trigger my scripts in ECS?

In our scripts, the test data, common libraries, environment, etc. are separated, which means that executing a script depends on other files …

@vishal.lanke, again, k6 is just an executable (with no practical dependencies, hurray Golang :tada:). Practically, your question is "how do I run an executable in AWS?", which is not a k6 question, it is an AWS one :smiley:. I would recommend using the official Docker image if possible :smiley:.

I would recommend using the k6 archive script.js command, which will generate an archive.tar (unless you specify something else; see k6 archive --help) that contains everything needed to execute the script. You can move this tar between machines and use k6 run archive.tar to run it. This is in part what happens when you run k6 cloud script.js :smiley:.
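In practice, the archive workflow is just two commands (the file names here are examples):

```shell
# Bundle script.js and everything it imports (data files, common libs)
# into a single archive.tar
k6 archive script.js

# Copy archive.tar to the target machine, then run the test from it
k6 run archive.tar
```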

Another way is to create a custom image from the k6 image, including your scripts and data files, and push it to ECR. Then you can start a cluster containing the custom k6 image, InfluxDB, and Grafana.
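A sketch of such a custom image; the base image tag and the `scripts/` layout are assumptions you would adapt to your setup (note that the k6 Docker image has moved organizations over time, so check which name matches your version):

```dockerfile
# Hypothetical layout: ./scripts contains script.js plus its data and lib files
FROM grafana/k6:latest
COPY scripts/ /scripts/
WORKDIR /scripts
# The image's entrypoint is the k6 binary, so this runs "k6 run script.js"
CMD ["run", "script.js"]
```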