Destroying test data generated during the test

Hi,
I’m having a problem with the high volume of data created by my tests and what to do with it.

Each VU executes a POST request that inserts data into the system. I would like the test to tidy up after itself as part of its execution, so that on a re-run the system is in a similar state.

My understanding is that it is not possible to collect data in VU code and pass it over to teardown(). What other options does k6 offer for post-test cleanup when the data set is unknown at the setup phase?

Hmm, not a lot, unfortunately… Maybe you can export the metrics to a JSON file and then use that in an external script to clean things up? Alternatively, an xk6 extension might suffice? :man_shrugging:
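To make the export idea concrete, here is a minimal sketch under some assumptions: the test records each created entity's ID as a tag on a custom metric (e.g. `new Counter('entities_created').add(1, { entity_id: id })` in the VU code — the metric and tag names here are made up), the run is started with `k6 run --out json=results.json`, and this Node script then collects the IDs from the output so an external cleanup step can delete each entity:

```javascript
'use strict';

// Collect entity IDs from k6's NDJSON output. Each line is a JSON object;
// we only care about "Point" entries for our hypothetical custom metric.
function collectEntityIds(ndjson) {
  const ids = new Set();
  for (const line of ndjson.split('\n')) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    if (entry.type === 'Point' &&
        entry.metric === 'entities_created' &&
        entry.data.tags &&
        entry.data.tags.entity_id) {
      ids.add(entry.data.tags.entity_id);
    }
  }
  return [...ids];
}

// Two sample lines in the shape k6's JSON output uses:
const sample = [
  '{"type":"Point","metric":"entities_created","data":{"time":"2021-01-01T00:00:00Z","value":1,"tags":{"entity_id":"42"}}}',
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2021-01-01T00:00:01Z","value":120,"tags":{}}}',
].join('\n');

console.log(collectEntityIds(sample)); // -> [ '42' ]
```

The collected IDs could then be fed to whatever HTTP client or CLI your cleanup step uses.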

Can you elaborate on your use case some more? Is it not possible to somehow distinguish the test data from the normal data and clean it up in teardown() that way?

Hi Robert,

Why not delete it in teardown()? It’s described here:
Test life cycle (k6.io)

I would try to do it as follows: add a tag to the data the test creates (like “K6 auto-generated”), and in teardown(), look for that data and delete it all :wink:
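A minimal sketch of that approach — the base URL, endpoint paths, the `name` field, and the `search` query parameter are all hypothetical, so adapt them to your API:

```javascript
import http from 'k6/http';

const BASE = 'https://api.example.com'; // hypothetical API
const TAG = 'K6 auto-generated';

export default function () {
  // Embed the tag in the data itself so it can be found again later.
  http.post(
    `${BASE}/entities`,
    JSON.stringify({ name: `${TAG} entity`, value: 42 }),
    { headers: { 'Content-Type': 'application/json' } },
  );
}

export function teardown() {
  // Look up everything carrying the tag and delete it.
  // Assumes the API can filter by name and returns a JSON array.
  const res = http.get(`${BASE}/entities?search=${encodeURIComponent(TAG)}`);
  for (const entity of res.json()) {
    http.del(`${BASE}/entities/${entity.id}`);
  }
}
```

This only works if your API lets you query by the tag, but it avoids having to pass IDs between VUs and teardown() entirely.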

Hope it helps!
J

Hi Ned,
Thank you for responding.

So each POST request returns the ID of the new entity, and this ID is required to remove it from the system.
Is there a way to pass the IDs to teardown() somehow?

Sorry, I was too fast - Ned’s response covers exactly this :sweat_smile:

Not at the moment, sorry, unless the IDs are generated in setup().
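For completeness, a sketch of what the lifecycle does support (endpoint and field names are made up): whatever setup() returns is passed to both the default function and teardown(), so IDs created up front can be cleaned up — but each VU only gets a copy of that data, so IDs created inside iterations can't be handed back:

```javascript
import http from 'k6/http';

const BASE = 'https://api.example.com'; // hypothetical API

export function setup() {
  // Entities created here CAN be cleaned up, because the returned
  // object is passed to teardown() below.
  const res = http.post(`${BASE}/entities`, JSON.stringify({ name: 'seed' }));
  return { ids: [res.json('id')] }; // assumes the API returns { "id": ... }
}

export default function (data) {
  // Each VU receives its own copy of `data`; mutating it here stays
  // local to the VU, so newly created IDs never reach teardown().
  http.get(`${BASE}/entities/${data.ids[0]}`);
}

export function teardown(data) {
  for (const id of data.ids) {
    http.del(`${BASE}/entities/${id}`);
  }
}
```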

How else can this be addressed?

I suggested an alternative above: Destroying test data generated during the test - #2 by ned

I think @joslat’s suggestion to tag the generated data is the simplest approach, but if this isn’t possible you’ll need some external storage you can update in the default function and read from in teardown(). A solution with Redis: https://stackoverflow.com/a/60576242 , which can now be somewhat easier with one of the xk6 Redis extensions.
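A rough sketch of the Redis variant, assuming the `k6/experimental/redis` module available in recent k6 versions (the exact client API differs between k6 versions and the various xk6 Redis extensions, so check the docs for yours; the URL, key name, and endpoints are hypothetical):

```javascript
import http from 'k6/http';
import redis from 'k6/experimental/redis';

// Connection string form may vary by k6 version; an options object
// is also accepted in some versions.
const client = new redis.Client('redis://localhost:6379');

const KEY = 'k6:test-entity-ids'; // hypothetical Redis set name
const BASE = 'https://api.example.com'; // hypothetical API

export default async function () {
  const res = http.post(`${BASE}/entities`, JSON.stringify({ name: 'k6 test' }));
  const id = res.json('id'); // assumes the API returns { "id": ... }
  // Record the ID in a Redis set shared by all VUs.
  await client.sadd(KEY, String(id));
}

export async function teardown() {
  // Read back every ID recorded during the run and delete each entity.
  const ids = await client.smembers(KEY);
  for (const id of ids) {
    http.del(`${BASE}/entities/${id}`);
  }
  // You may also want to clear the Redis key here for the next run.
}
```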

Thank you for taking the time to propose those solutions.

Test runs are executed nightly on TFS with the k6 task. This limits us to what TFS tasks provide, and adding new services to the agents in use is currently not possible.

We do, however, capture metrics for further analysis — perhaps this is the way.