K8s Operator - Tried patching k6 kind

Hello :slight_smile: , I am new to k6 and I recently tried installing the k8s operator on my EKS cluster. I know it is still in an experimental phase, but I wanted to report my experience with it.

So at one point I wanted to try patching my k6 kind rather than deleting k6 and deploying it again (after completion). It fails every time I do this. Initially I thought it was an issue with my syntax, but it doesn't look to be. I wonder if this has something to do with the json-patch library used in https://github.com/grafana/k6-operator

Sharing the error below:

kubectl patch k6 k6-sample -p '{"spec": {"parallelism": 3 }}' --type=json
Error from server (BadRequest): json: cannot unmarshal object into Go value of type jsonpatch.Patch

It would be helpful if someone could guide me.

NOTE: my kubectl version is the latest, so that might not be the issue here.


Hi @JinoArch, welcome to the forum :wave:

This is an interesting error: I think it’s due to --type=json requiring a JSON Patch (RFC 6902), not a JSON Merge Patch (RFC 7386) – they use different formats. Instead, the following command should work:

kubectl patch k6 k6-sample -p '{"spec": {"parallelism": 3 }}' --type=merge

Docs for reference.
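For completeness, if you do want to stick with --type=json, the payload has to be a JSON Patch array of operations rather than a plain object. A minimal sketch of the equivalent of the merge patch above (same resource name assumed):

```shell
# JSON Patch (RFC 6902) is an array of {op, path, value} operations,
# which is why kubectl fails to unmarshal a plain object with --type=json.
kubectl patch k6 k6-sample --type=json \
  -p '[{"op": "replace", "path": "/spec/parallelism", "value": 3}]'
```

Both forms end up setting spec.parallelism to 3; the merge patch is just the more convenient syntax for a single field.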

A word of caution though. I think the applicability of patching the k6 kind in this way is rather limited. It may depend on your particular use case, of course, but, e.g., changing parallelism from 2 to 3 can leave the controller stuck in a bad state, unable to finish the test run. To trigger a restart you need to update the status, and that can be tricky too, due to this k8s issue.
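As a side note, recent kubectl versions (v1.24+) added a --subresource flag that can patch status directly, which may or may not help here; the exact status values the operator expects are an assumption on my part, so treat this as a sketch rather than a recommended workflow:

```shell
# --subresource=status requires kubectl v1.24+.
# "finished" is a guessed stage value, not confirmed by the operator docs.
kubectl patch k6 k6-sample --type=merge --subresource=status \
  -p '{"status": {"stage": "finished"}}'
```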

Hope that helps!

Thanks @olha, :smiley:

The command did update k6, but as you mentioned, it looks like I need to update the status to restart the flow. Presently, after patching, it just sits at:

  Stage:  started

So does that mean that if I need to re-run the k6 kind, I have to clear the resources in the same namespace and deploy it again? Or find a workaround for updating the status? Please correct me if I am getting it wrong. Thanks

Yes, to restart it’s either a delete and re-apply of the resource, or some working way to update the status via the Kubernetes API.
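The delete-and-re-apply route looks roughly like this (the manifest file name is hypothetical; use whatever you originally applied):

```shell
# Remove the finished run, then recreate it from the original manifest.
kubectl delete k6 k6-sample
kubectl apply -f k6-sample.yaml
```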