
I created an Alpine-based Docker image and installed the latest version of Node and Newman (4.5.4) on it. This image is used as a Jenkins agent where I run my collections. Everything works fine except when I have a larger data set: when the number of requests reaches somewhere around 10K, the Newman process dies, and I have no idea why. I confirmed from my Jenkins settings, and from some test runs, that it is not Jenkins killing the process; Newman itself seems to give up. It is not time-bound either: I tried adding delays, and it sometimes dies within 30 minutes and sometimes after 10 hours, but always after processing around 10K requests.
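
For reference, the agent image is built along these lines. This is only a minimal sketch; the base tag and install commands are illustrative, not the actual Dockerfile:

    # Illustrative sketch of the agent image (base tag is an assumption):
    # an Alpine-based Node image with Newman 4.5.4 installed globally.
    FROM node:lts-alpine

    # Pin the Newman version mentioned above.
    RUN npm install -g newman@4.5.4

    # Run Newman directly so the Jenkins agent can pass collection arguments.
    ENTRYPOINT ["newman"]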

Steps to reproduce the behavior:

1. Install the latest version of Newman (4.5.4) on Alpine.
2. Run any collection with some input data, making sure Newman eventually calls the API endpoint(s) well over 10K times (see the sketch below).
3. Newman fails after around 10K requests.

Expected behavior: the Newman process must not die until it finishes all the given iterations.
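
One way to drive that many iterations is through Newman's programmatic Node API. The sketch below is only illustrative; the collection path and data file are placeholders, not the actual inputs:

    // Illustrative reproduction sketch; file names are placeholders.
    const newman = require('newman');

    newman.run({
        collection: require('./collection.json'), // any collection that calls the API endpoint(s)
        iterationData: './data.csv',              // input data with enough rows to drive well over 10K requests
        delayRequest: 100,                        // optional per-request delay in ms (delays did not change the outcome)
        reporters: 'cli'
    }, function (err, summary) {
        if (err) { throw err; }
        console.log('Requests sent:', summary.run.stats.requests.total);
    });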

ishan

1 Answer


Is there a case or scenario in your test scripts where you are setting setNextRequest to null when some condition is met? That could be one cause of this.
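
For illustration, the pattern meant is something like the following (the condition and field names are hypothetical); calling setNextRequest with null ends the collection run at that point:

    // Hypothetical test script illustrating the pattern asked about.
    pm.test("stop the run when the data is exhausted", function () {
        const body = pm.response.json();
        // If this (made-up) condition is ever met, the run ends here,
        // even though iterations remain.
        if (!body.items || body.items.length === 0) {
            postman.setNextRequest(null);
        }
    });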

Amit
  • No, I am not using setNextRequest anywhere. I tried it with different collections calling different sets of APIs, which is why I think it might be a problem with Newman itself. Maybe you can try to simulate it under similar conditions with any REST API? – ishan Sep 05 '19 at 07:51