Iterations fail with plenty of headroom

I am load testing a facsimile of a production API. Each request requires a 10 MB zip file attachment.
I am getting >90% failed iterations, despite (I think) provisioning plenty of VUs.

I know the script works: if I change the attached file to a tiny one, thousands of requests all succeed.
Similarly, a single iteration with the 10 MB file also succeeds.

Requests take an average of roughly 500 ms to complete (API logs from previously successful iterations):

```
POST /api/v1/products 201 544.153 ms
POST /api/v1/products 201 687.144 ms
POST /api/v1/products 201 744.069 ms
POST /api/v1/products 201 771.189 ms
POST /api/v1/products 201 466.066 ms
```


I can see k6 ramping up in my machine's memory monitor (it does not run out of memory).

The aim of the test is purely to check the memory usage of Multer when handling attachments on the locally running API.

Could someone help me figure out why iterations are failing?

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';
import { FormData } from 'https://jslib.k6.io/formdata/0.0.2/index.js';

// Load the binary file once per VU at init time.
const binFile = open('/Users/alanharrison/Peak/loadtest/10Mb.zip', 'b');

export const options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 1,
      timeUnit: '1s', // 1 iteration per second, i.e. 1 RPS
      duration: '10s',
      preAllocatedVUs: 100, // size of the initial pool of VUs
      maxVUs: 300, // if the preAllocatedVUs are not enough, more can be initialized
    },
  },
};

const BASE_URL = 'http://grafana.staged-by-discourse.com/api/v1/products';

const fd = new FormData();
fd.append('artifact', http.file(binFile, '10Mb.zip', 'application/zip'));

export default () => {
  const res = http.post(BASE_URL, fd.body(), {
    headers: { 'Content-Type': 'multipart/form-data; boundary=' + fd.boundary },
  });
  check(res, {
    'is status 201': (r) => r.status === 201,
  });

  sleep(1);
};
```

I see these results:

```
running (1m01.0s), 000/100 VUs, 1 complete and 9 interrupted iterations
constant_request_rate ✓ [======================================] 009/100 VUs  10s  1.00 iters/s

iteration_duration.............: avg=38.5s    min=38.5s    med=38.5s    max=38.5s    p(90)=38.5s    p(95)=38.5s
iterations.....................: 1       0.016393/s
```

Hi @alanharrison !

I believe something is wrong with the script configuration and endpoint.

From what I can see in your configuration, the total desired duration is only 10s (plus some extra time k6 allows for shutdown), and you want to deliver a constant arrival rate of 1 request per second. So during these 10s, ten requests were spawned.

However, we can see that even the first request took 38.5s (perhaps because of the 10 MB file). The following nine requests probably had no chance to finish.
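To make the arithmetic concrete, here is a rough model in plain JavaScript (not a k6 script). The 38.5 s iteration time comes from your output; the 30 s `gracefulStop` is my assumption of k6's documented default, since the script doesn't set one:

```javascript
// Rough model of the scenario above. Assumed values: the 38.5 s iteration
// time is taken from the k6 output; the 30 s gracefulStop is k6's documented
// default, not something set in the script.
const rate = 1;            // iterations started per second (constant-arrival-rate)
const durationS = 10;      // scenario duration, seconds
const iterTimeS = 38.5;    // observed iteration_duration, seconds
const gracefulStopS = 30;  // assumed default grace period for in-flight work

const started = rate * durationS; // 10 iterations spawned
const cutoff = durationS + gracefulStopS; // in-flight work is interrupted here
// An iteration started at second t finishes at t + iterTimeS.
const interrupted = Array.from({ length: started }, (_, t) => t + iterTimeS)
  .filter((finish) => finish > cutoff).length;

console.log(`started=${started}, interrupted≈${interrupted}`);
```

The model is approximate (it ignores startup and scheduling overhead), but it lands close to the "1 complete and 9 interrupted iterations" in the output above.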

Since you’re curious about investigating memory consumption, I’d suggest changing the executor and using a longer duration.

Let me know if that answers your question,
Cheers!

Hi @olegbespalov,
a single request with the 10 MB attachment to that endpoint only takes approx. 500 ms (see POST /api/v1/products 201 544.153 ms), so I'm confused as to why, when I want more requests, it takes so long, despite only one request actually being executed?

as to the executor, which would you suggest?

thanks!

> so im confused as why, when i want more requests, it is taking so long

At least for now, it looks like the result of the test :smile: with more requests, the responses become slower.

> despite only one request actually being executed?

As I said, it's not the only request executed in your case. The thing is that with the current configuration, only one request has a chance to succeed. The other requests are still in flight and are later dropped because they didn't finish within the configured test duration.

> a single request with the 10Mb attachment to that endpoint only takes approx 500ms (see POST /api/v1/products 201 544.153 ms)

Where is this number coming from? :thinking: I mean, how do you measure it? If this is the time it took to render a response, could it be that some load balancer sits between http://grafana.staged-by-discourse.com and the application (API)? The reason I'm asking is that in k6 you see the total time, which includes uploading the whole 10 MB request body, not just the server-side processing that the API log reports.
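To illustrate the difference: in k6, `res.timings.duration` is the sum of the `sending`, `waiting`, and `receiving` phases, so a slow upload of the 10 MB body shows up in `sending` even when the server's own processing (`waiting`) stays around 500 ms. A plain-JavaScript sketch, with made-up numbers for illustration:

```javascript
// Hypothetical timings for one POST with a 10 MB body (all values in ms).
// Field names mirror k6's res.timings; the numbers are invented for illustration.
const timings = {
  sending: 35000,  // time spent uploading the multipart body
  waiting: 550,    // server processing time (what the API log reports)
  receiving: 5,    // time to download the small JSON response
};

// k6's http_req_duration is the sum of these three phases.
const duration = timings.sending + timings.waiting + timings.receiving;

console.log(`server saw ~${timings.waiting} ms, k6 saw ${duration} ms`);
```

So the API log and k6 can both be right at the same time: they simply measure different spans of the request.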

Out of curiosity, what is curl showing you? Something like:

```
curl -s -w '\nTotal: %{time_total}s\n' -X POST -H "Content-Type: multipart/form-data" \
-F "artifact=@/Users/alanharrison/Peak/loadtest/10Mb.zip" http://grafana.staged-by-discourse.com/api/v1/products
```

> as to the executor, which would you suggest?

I’d suggest starting with ramping-vus. For instance, start with 1 VU, increase the number in stages, and check the response times. I’d also keep the duration longer than 10 seconds.
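A minimal sketch of what that could look like. The stage durations and VU targets are placeholders to tune, not recommendations, and in a real k6 script the object would be `export const options`:

```javascript
// Sketch of a ramping-vus scenario for the Multer memory test.
// Placeholder durations/targets; adjust them while watching Docker memory.
const options = {
  scenarios: {
    ramping_upload: {
      executor: 'ramping-vus',
      startVUs: 1,
      stages: [
        { duration: '1m', target: 5 },  // ramp up slowly
        { duration: '2m', target: 5 },  // hold to observe steady-state memory
        { duration: '30s', target: 0 }, // ramp down
      ],
      gracefulRampDown: '30s', // let in-flight 10 MB uploads finish
    },
  },
};
```

With long enough stages, each upload has time to complete, so you measure Multer's memory behavior rather than k6's interruption behavior.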

Cheers!

The curl response is:

```
{"id":1}
Total: 0.716163s
```

There's no load balancer; the app is containerised in Docker. I'm monitoring Docker memory usage and it never spikes (except if I use e.g. a 50 KB attachment, which allows every iteration to be successful).

I am not seeing any failed attempts logged in the Docker monitoring or in the app's console output inside Docker, which makes me think the failed iterations never leave k6.

I will try ramping-vus next :slight_smile:

thanks!
