
Recently I've been trying to create a simple file server with Node.js, and it looks like I've run into some problems that I can't seem to overcome.

In short:

I configured iisnode to use 4 worker processes (there is a setting for this in web.config called nodeProcessCountPerApplication="4"), and it balances the load between these workers.

When 8 requests come in, each worker has 2 requests to process, but when an exception happens in one of the requests being processed, the one waiting on the same worker also fails.

For example:

worker 1 handling request 1, request 5 waiting 
worker 2 handling request 2, request 6 waiting
worker 3 handling request 3, request 7 waiting
worker 4 handling request 4, request 8 waiting

If an exception happens while handling request 3, the server responds with my custom error code, shuts down and is restarted by iisnode. But the problem is that request 7 also fails, even though it hasn't been processed yet.

I tried setting maxConcurrentRequestsPerProcess="1" so that only 1 request at a time goes to each worker, but it does not work the way I want: requests 5, 6, 7 and 8 are rejected with a 503 Service Unavailable response, even though the maximum number of requests that will queue is set to 1000 (the IIS default).
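For reference, a minimal web.config sketch showing where the two iisnode settings mentioned above live (the handler entry and the server.js path are assumptions; the surrounding configuration will vary per deployment):

```xml
<configuration>
  <system.webServer>
    <handlers>
      <!-- assumed entry point; adjust the path to your app -->
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <!-- 4 node.exe worker processes per application; with
         maxConcurrentRequestsPerProcess="1" each worker handles one
         request at a time (the setting that produced the 503s above) -->
    <iisnode nodeProcessCountPerApplication="4"
             maxConcurrentRequestsPerProcess="1" />
  </system.webServer>
</configuration>
```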

The Question

These requests don't have anything to do with each other, so one failing should not take down the other.

Is there a setting in IIS that enables the behavior that I'm after? Or is this even possible to do with node and IIS?

In Long

Why?

I'm using Node, because I have some other requirements (like logging, etc.) that I can handle in JavaScript fairly easily. Since I have an ASP.NET MVC background and I'm running Windows, after a few searches I found the iisnode module for IIS, which can be used to host a Node app with IIS. This makes it easy for me to manage and deploy the application. I also read on many sites that Node servers have good performance because of their async nature.

How?

I started with a very basic exception handling logic that catches exceptions using Node's domain module:

//assumed: these requires precede the snippet
var http = require('http');
var domain = require('domain');
var router = require('./router'); //the app's router module (assumed path)

var server = http.createServer(function (request, response) {
    var d = domain.create();
    d.on('error', function (err) {
        try {
            //stop taking new requests.
            serverShutdown();
            //send an error to the request that triggered the problem
            response.statusCode = 500;
            response.end('Oops, there was a problem! ;) \n');
        }
        catch (er2) {
            //oh well, not much we can do at this point.
            console.error('Error sending 500!', er2.stack);
            process.exit(1);
        }
    });

    d.add(request);
    d.add(response);

    d.run(function () {
        router.route(request, response);
    });
}).listen(process.env.PORT);

Since I could not find any best practices for gracefully shutting down the server when there is an unhandled exception, I decided to write my own logic. After server.close() is called, I go through the open sockets and wake them up so the server can shut down:

//`sockets` is assumed to be a map of the currently open connections,
//populated elsewhere (e.g. from the server's 'connection' event)
function serverShutdown() {
    server.close(); //stop accepting new connections
    for (var s in sockets) {
        //wake idle keep-alive sockets so close() can complete
        sockets[s].setTimeout(1, function () { });
    }
}

This also works great!
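The serverShutdown() above walks a `sockets` map that isn't shown; here is a minimal sketch of how such a map might be kept up to date (the names `sockets`, `nextSocketId`, and `trackConnections` are assumptions, not part of the original code):

```javascript
//assumed bookkeeping for the `sockets` map used by serverShutdown()
var sockets = {};
var nextSocketId = 0;

function trackConnections(server) {
    server.on('connection', function (socket) {
        //remember every open connection so shutdown can wake it up
        var socketId = nextSocketId++;
        sockets[socketId] = socket;
        socket.on('close', function () {
            //forget the connection once it's gone
            delete sockets[socketId];
        });
    });
}
```

trackConnections(server) would be called once, right after http.createServer(...), so that the shutdown loop always sees the current set of connections.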

What?

The problem comes when I try to stress-test this. For some reason the cluster module is not supported by iisnode, but it has a similar feature: I configured iisnode to use 4 worker processes (there is a setting for this in web.config called nodeProcessCountPerApplication="4"), and it balances the load between these workers.

I'm not entirely sure on how this works, but here's what I figured out from testing:

When 8 requests come in, each worker has 2 requests to process, but when an exception happens in one of the requests being processed, the one waiting on the same worker also fails.

For example:

worker 1 handling request 1, request 5 waiting 
worker 2 handling request 2, request 6 waiting
worker 3 handling request 3, request 7 waiting
worker 4 handling request 4, request 8 waiting

If an exception happens while handling request 3, the server responds with my custom error code, shuts down and is restarted by iisnode. But the problem is that request 7 also fails, even though it hasn't been processed yet.

I tried setting maxConcurrentRequestsPerProcess="1" so that only 1 request at a time goes to each worker, but it does not work the way I want: requests 5, 6, 7 and 8 are rejected with a 503 Service Unavailable response, even though the maximum number of requests that will queue is set to 1000 (the IIS default).

The Question Again

These requests don't have anything to do with each other, so one failing should not take down the other.

Is there a setting in IIS that enables the behavior that I'm after? Or is this even possible to do with node and IIS?

Any help is appreciated!


Update

I managed to rule out iisnode by building the same server using the cluster module and worker processes.

The problem still persists: requests that are queued to the worker that hit the exception are returned with a 502 Bad Gateway.

Again, I don't know what's happening with the requests coming in to the server, or at what level they are when the exception happens. I can't seem to find any info about this either...

Could anyone point me in the right direction? At least tell me where to look for a solution?

zolipapa
  • Considering the depth of technical detail, I would try submitting an issue on GitHub - https://github.com/tjanczuk/iisnode/issues ... the guy who owns the repo (tjanczuk) seems pretty responsive. – nikib3ro Nov 11 '16 at 05:15
