
I am using the `request` npm module in my app to create an HTTP client, like this:

var request = require('request');

And each time, I make a request to some server, I pass the options as below:

var options = {
  url: "whateverurl...",
  method: "POST",
  body: { /* some JSON data for POST ... */ },
  json: true
};
request(options, function (e, r, body) {
  // handle response here...
});

This was working fine until I started testing under high load, when I began getting errors indicating no address available (EADDRNOTAVAIL). It looks like I am running out of ephemeral ports, since there is no pooling or keep-alive enabled.

After that, I changed it to this:

var options = {
  url: "whateverurl...",
  method: "POST",
  body: { /* some JSON data for POST ... */ },
  json: true,
  forever: true
};
request(options, function (e, r, body) {
  // handle response here...
});
  • (Note the `forever: true` option.)

I looked up the request module's documentation on how to enable keep-alive. According to the documentation and this Stack Overflow thread, I am supposed to add `forever: true` to my options.

It didn't seem to work for me: when I checked with tcpdump, the server was still closing the connection. So, my questions are:

  • Am I doing something wrong here?
  • Shouldn't I be setting a global option on the request module when I `require` it, instead of passing `forever: true` with every HTTP request? This is confusing to me.
Mopparthy Ravindranath
  • It's possible you're running out of filehandles before you run out of ports. How many is your process allowed to create? On POSIX systems this is related to `ulimit`, specifically `ulimit -n`. – tadman Mar 15 '17 at 06:35
  • If the connections are being closed, why aren't the ports being reused? – robertklep Mar 15 '17 at 07:51
  • @robertklep I could see many connections in TIME_WAIT state from my module. Does that mean my program has not yet closed the connections (after they were closed at the web server end), while it keeps making new requests on new connections? – Mopparthy Ravindranath Mar 15 '17 at 10:23
  • @MupparthyRavindranath AFAIK, `TIME_WAIT` happens when _your_ side has closed the connection, but the remote side hasn't yet acknowledged the close. Perhaps [this post](https://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux) can be of help (if you happen to be using Linux). – robertklep Mar 15 '17 at 10:44
  • Thanks @robertklep. I realized that I was using keepAlive but not setting a limit on max sockets. So it looks like every new request was creating a new socket instead of reusing one (due to the default "Infinity" limit). Once I added a maxSockets limit, the error went away. – Mopparthy Ravindranath Mar 15 '17 at 19:09

0 Answers