Constant-arrival-rate and http_reqs timestamps

Hi,

I’m trying to set up a script that will run my request, let’s say (initially), every 10 seconds for a given duration. I’ve allocated enough VUs for the current test run that each iteration should have a dedicated VU.

However, I am noticing that the gap between the timestamps of iteration 1 and iteration 2 isn’t 10 seconds; it varies.

What I want is to guarantee a request is sent every ten seconds, regardless of how the previous request is performing. Am I doing it right, and if I am, why are the timestamp differences not exactly 10 seconds?

export const options = {
  scenarios: {
    'load-test-1-per-10s': {
      executor: 'constant-arrival-rate',
      rate: 1,
      timeUnit: '10s',
      duration: '60s',
      preAllocatedVUs: 10, // how large the initial pool of VUs would be
      maxVUs: 100, // if the preAllocatedVUs are not enough, we can initialize more
    },
  },
};

Now, the timings (I hope I have the relevant details):

|metric_name|timestamp|converted timestamp|metric_value|
|---|---|---|---|
|http_reqs|1675959446|09/02/2023 16:17:26|1|
|http_req_duration|1675959446|09/02/2023 16:17:26|3717.7834|
|http_reqs|1675959454|09/02/2023 16:17:34|1|
|http_req_duration|1675959454|09/02/2023 16:17:34|1520.3795|

I’ve excluded the gap between the setup script and the first actual request, as I wouldn’t expect that to be at a 10-second interval, but is there any reason why these are at 8-second intervals instead of 10?

One other thing: the system I am working against can have huge variance in response time (from 1.5s up to 15s). Would that be a factor, and if so, is there a way I can account for it to keep a steady rate of requests?

Most likely I am misunderstanding what http_reqs is. Is its timestamp tied to the start time of the request, or to something else?

Hi @SteOH

Welcome back :wave:

Regarding the timing issue, it’s important to note that the constant-arrival-rate executor does not guarantee a request is sent exactly every timeUnit (10 seconds in your case), but rather attempts to send requests at a constant rate over the duration of the test.

The actual timing between requests may vary due to various factors, such as server response times, network delays, and the performance of the local machine running the test. It’s normal to see some variation in the request intervals, and this variation can be greater when there is high variability in the response times of the system being tested.

What you should see is an iterations/s rate that matches your configuration, in this case close to 0.10 iters/s.
I ran a similar example based on our example in the docs, and what I get is 0.116042 iterations/s:
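
The script was along these lines (a minimal sketch based on the constant-arrival-rate example in the docs; the test.k6.io URL is just the example target, swap in your own endpoint):

import http from 'k6/http';

export const options = {
  discardResponseBodies: true,
  scenarios: {
    contacts: {
      executor: 'constant-arrival-rate',
      rate: 1,         // 1 iteration ...
      timeUnit: '10s', // ... every 10 seconds = 0.10 iters/s
      duration: '60s',
      preAllocatedVUs: 10,
      maxVUs: 100,
    },
  },
};

export default function () {
  // Example endpoint from the k6 docs; replace with your own system under test.
  http.get('https://test.k6.io/contacts.php');
}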

  execution: local
     script: const-arr-rate.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 1m30s max duration (incl. graceful stop):
           * contacts: 0.10 iterations/s for 1m0s (maxVUs: 10-100, gracefulStop: 30s)


running (1m00.3s), 000/010 VUs, 7 complete and 0 interrupted iterations
contacts ✓ [======================================] 000/010 VUs  1m0s  0.10 iters/s

     data_received..................: 43 kB  717 B/s
     data_sent......................: 3.2 kB 52 B/s
     http_req_blocked...............: avg=228.41ms min=216.17ms med=223.38ms max=257.74ms p(90)=246.58ms p(95)=252.16ms
     http_req_connecting............: avg=110.32ms min=103.44ms med=106.03ms max=128.47ms p(90)=122.88ms p(95)=125.67ms
     http_req_duration..............: avg=112.28ms min=105.35ms med=106.26ms max=133.47ms p(90)=126.68ms p(95)=130.07ms
       { expected_response:true }...: avg=112.28ms min=105.35ms med=106.26ms max=133.47ms p(90)=126.68ms p(95)=130.07ms
     http_req_failed................: 0.00%  ✓ 0        ✗ 7   
     http_req_receiving.............: avg=144.14µs min=83µs     med=145µs    max=201µs    p(90)=193.2µs  p(95)=197.1µs 
     http_req_sending...............: avg=81.28µs  min=44µs     med=100µs    max=106µs    p(90)=105.4µs  p(95)=105.7µs 
     http_req_tls_handshaking.......: avg=117.72ms min=109.85ms med=117.14ms max=129.09ms p(90)=125.37ms p(95)=127.23ms
     http_req_waiting...............: avg=112.06ms min=105.06ms med=106.08ms max=133.21ms p(90)=126.42ms p(95)=129.82ms
     http_reqs......................: 7      0.116042/s
     iteration_duration.............: avg=341.15ms min=322.86ms med=331.09ms max=391.67ms p(90)=373.78ms p(95)=382.72ms
     iterations.....................: 7      0.116042/s
     vus............................: 10     min=10     max=10
     vus_max........................: 10     min=10     max=10

From what I understand you need to ensure that a request is sent exactly every 10 seconds. Is that correct? Can you provide a bit more context for the need? Do you need the test to run for only 1 minute?

You could try using the ramping-arrival-rate executor with a rate of 1 request every 10 seconds. This executor ramps up the arrival rate gradually, which can help smooth out any variations in request timing.

export const options = {
    discardResponseBodies: true,
    scenarios: {
        contacts: {
            executor: 'ramping-arrival-rate',
            startRate: 1,
            timeUnit: '10s',
            preAllocatedVUs: 10,
            maxVUs: 100,
            stages: [
                { target: 1, duration: '10s' },
                { target: 1, duration: '10s' },
                { target: 1, duration: '10s' },
                { target: 1, duration: '10s' },
                { target: 1, duration: '10s' },
                { target: 1, duration: '10s' },
            ],
        },
    },
};
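
To make that snippet runnable you would pair it with an import and a default function, e.g. (the URL here is just a placeholder):

import http from 'k6/http';

export default function () {
  // Placeholder request; point this at the system you are testing.
  http.get('https://test.k6.io/contacts.php');
}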

Regarding your question about the http_reqs metric, this metric counts the number of HTTP requests sent during the test, regardless of their timing. It’s not related to the start time of any particular request. The http_req_duration metric measures the duration of each HTTP request, which can help you understand how long the system takes to respond to each request.

On a side note, I would recommend increasing preAllocatedVUs to make sure the test can run without k6 having to allocate more VUs mid-test, as initializing VUs during the run can affect the metrics as well. You can read more about this in the docs: Arrival-rate VU allocation
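
For example (a sketch only, keeping your original scenario): with 1 arrival every 10 seconds and responses of up to ~15 seconds, at most two iterations overlap, so a pool like this means k6 never has to initialize extra VUs while the test is running:

export const options = {
  scenarios: {
    'load-test-1-per-10s': {
      executor: 'constant-arrival-rate',
      rate: 1,
      timeUnit: '10s',
      duration: '60s',
      // Worst case here is ~2 overlapping iterations (15s responses at 1 req/10s),
      // so this pool is comfortably large and nothing is allocated mid-test.
      preAllocatedVUs: 20,
      maxVUs: 20,
    },
  },
};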

I hope this helps.

Cheers!

Hi @SteOH,

I am pretty sure that in this particular case the reason is that the timestamp is the moment the metric is emitted, which happens at the end of the request rather than at the beginning.

In your case the first request took 3.7s (rounded) and the second 1.5s, a difference of about 2s, which is why the gap between the two timestamps is about 8s instead of 10s.
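
To make that concrete with the numbers from your table: the first sample is stamped 16:17:26 with a duration of roughly 3.72s, so that request started around 16:17:22.3; the second is stamped 16:17:34 with a duration of roughly 1.52s, so it started around 16:17:32.5. The start-to-start gap is therefore about 10.2s, i.e. the ~10-second pacing you configured, while the end-to-end gap is the 8s you see in the timestamps.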

All that @eyeveebe said is still relevant and true; I just wanted to point out the exact reason for what you are seeing.

Hope this helps you

Thanks both, I really appreciate the responses.

I think this is enough for me to understand what’s going on. I can live with the variations now that I have a better idea of why they exist (which makes complete sense!).
