Baselining individual microservices

Hi, does anyone have any advice about using k6 for benchmarking individual microservices (ideally in a CI pipeline)?

Also any advice on using k6 as part of a chaos engineering toolset against an EKS cluster would be brilliant!

  1. CI pipeline? OK, try investigating blue-green testing in order to do benchmarking.

  2. Chaos? Look, I love k6 and I would give it a chance by applying the same strategy that Gremlin did with JMeter. Basically, the same strategy applies to other load testing tools: >>here

Hi @sstratton,

does anyone have any advice about using k6 for benchmarking individual microservices (ideally in a CI pipeline)?

What are your requirements? Just benchmarking individual microservices is kinda vague. :sweat_smile: If you're looking to get separate timings for each microservice, I might have some ideas, but it would definitely help to have some more context.

Also any advice on using k6 as part of a chaos engineering toolset against a EKS cluster would be brilliant!

Did you have anything in particular in mind? For instance, a specific chaos engineering tool?

As long as you're able to connect to the control plane via kubectl, something like xk6-chaos or chaostoolkit-k6 might be suitable for the task. Both are highly experimental though, so filing issues in the repos as you bump into problems will definitely be necessary (and highly appreciated).
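
The k6 side of such an experiment could be a plain script along these lines - just a minimal sketch with no extension-specific API, keeping a steady request rate and pass/fail thresholds while the chaos tool injects faults (TARGET_URL and all the numbers are placeholders you'd tune):

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  scenarios: {
    // A constant arrival rate keeps results comparable even if the
    // service slows down under the injected faults.
    steady_load: {
      executor: 'constant-arrival-rate',
      rate: 20,            // 20 requests per second
      timeUnit: '1s',
      duration: '10m',     // long enough to cover the chaos experiment
      preAllocatedVUs: 20,
      maxVUs: 100,
    },
  },
  thresholds: {
    http_req_failed: ['rate<0.05'],     // tolerate at most 5% errors during the experiment
    http_req_duration: ['p(95)<1000'],  // and a p95 below 1s (placeholder number)
  },
};

export default function () {
  // TARGET_URL is an assumption - point it at the service exposed by the EKS cluster.
  const res = http.get(`${__ENV.TARGET_URL}/health`);
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```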

If there is anything else I could assist you with reg. your chaos endeavours, just let me know. :+1:

Best
Simme

Hi @simme,

Thanks - I'll try both of those chaos engineering links. I love the xk6-chaos extension, which is a really cool approach. The toolkit will be useful too.

With benchmarking microservices, I'm taking the approach that:

  1. Devs can use benchmarking tools in their code for performance optimization at a unit test level. It might be possible to do this in CI also for certain languages (e.g. with Golang you can build binaries of your tests so you can compare changes) so it might be worth looking at this in future.
  2. I'd like to be able to profile the CPU (in the CI pipeline, for each service) and then compare with a previous run to see if any new code adds significant CPU time. Google Cloud Profiler looks good for this. I don't know if there are any other options?
  3. I'd like to be able to watch for memory leaks in CI.
  4. I'd like to be able to watch for any language-specific issues (e.g. goroutine leaks in Golang) in CI.
  5. I'd like to be able to monitor latency in CI.

3-5 would need k6 running against the service (with mocks) in order to get results.
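
For the k6 side of 3-5, I'm picturing something like this minimal sketch - a short, fixed load against the service started in the CI job, with its downstream dependencies mocked so the numbers are attributable to the service itself (BASE_URL and the endpoint are just placeholders):

```javascript
// baseline.js - run in CI against a single service whose dependencies are mocked,
// e.g.: k6 run -e BASE_URL=http://localhost:8080 baseline.js
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 10,          // small constant load: enough to measure, not enough to stress the CI runner
  duration: '1m',
};

const BASE_URL = __ENV.BASE_URL || 'http://localhost:8080'; // placeholder address

export default function () {
  const res = http.get(`${BASE_URL}/api/orders/123`); // hypothetical endpoint
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
}
```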

With 3-4 I'm not sure how to do the 'comparison with a previous run' bit. Maybe Cloud Profiler could do that too… I haven't tried it yet or found any other frameworks.

For 5 (latency/response times) I'm guessing:

  1. Do some 'baseline' runs
  2. Set up Thresholds in k6 based on those (see the sketch after this list)
  3. Leave it be; the CI job will fail if things get worse << is there any danger that a randomly slow CI runner or cloud network will cause a false positive here?
  4. (optionally) Export results to InfluxDB so we can capture historical results.
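
As a rough sketch of steps 2-4 (the numbers are placeholders until I have real baseline figures, with some headroom so a slightly slow runner doesn't trip them):

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 10,
  duration: '1m',
  thresholds: {
    // Derived from the baseline runs in step 1, with headroom for CI/network noise.
    http_req_duration: ['p(95)<300', 'p(99)<800'],
    // Fail the job on any meaningful error rate too (metric available in recent k6 versions).
    http_req_failed: ['rate<0.01'],
  },
};

const BASE_URL = __ENV.BASE_URL || 'http://localhost:8080'; // placeholder address

export default function () {
  const res = http.get(`${BASE_URL}/api/orders/123`); // same hypothetical endpoint as above
  check(res, { 'status is 200': (r) => r.status === 200 });
}

// For step 4, keep history by also streaming the raw metrics to InfluxDB, e.g.:
//   k6 run --out influxdb=http://influxdb:8086/k6 baseline.js
// and compare runs over time in Grafana.
```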

Does this seem like the best approach?

Ultimately I'd like the services in high-risk areas to be:

  1. As fast as humanly possible
  2. Not consume large amounts of CPU
  3. Not leak memory
  4. Not get any worse

And for the solution to:

  1. Not be a big overhead for the devs
  2. Pick up any problems early (in CI)
  3. Be easily interpreted by the devs without needing specialist knowledge

I want to express my thoughts.
I'm afraid you will need to coordinate with the DevOps team in order to spin instances up and down on demand, via GitOps, CircleCI, Weaveworks, etc.,
where each instance should contain the respective code/feature version.
To profile the CPU, I think you will need a monitoring tool, like a sidecar running alongside the CI job, which can detect when CPU thresholds are reached and raise a flag (allowing the job to reject the merge request / pull request).

For every execution you run, you ought to save/dump the metrics to InfluxDB so you can analyse them with Grafana after the tests are done (observability).

You could for instance use Prometheus in Go to provide metrics from within the actual execution path of the code you're testing. This would in turn allow you to work with the dev team to add metrics for internal timings, which could prove useful while benchmarking.

If you're running on GCP, using Google Cloud Profiler definitely makes sense. Worst case, you'd be able to export both RAM and CPU usage through the Prometheus Go client as well, although it takes a little bit more tinkering to set up.

I agree with what you've listed here. Maybe @nicole can provide some additional insights from the perspective of good performance testing practices.

Yes. Adding a threshold for the runner's CPU usage is a good way of staying on top of this. It will of course still fail the build, which I personally prefer to the alternative, but at least you'll get a clear indicator of the cause, allowing you to just rerun the test.

Sounds like a solid approach. I wouldn't be too concerned about the job failing or getting a false positive. As long as you have ways to find out why it failed (such as by using monitoring metrics and thresholds in k6, as Simme suggested), a failure can still be useful information.

And on that note, I just wanted to add that I don't think capturing some historical results is optional. It's not always immediately obvious from just the last result that there's a problem. You might ignore a failure once as an outlier, but if you later spot a pattern of failures (slow response times every month/quarter at a certain time), you'll be glad you kept data for what you thought were "random" failures.
