
I'm trying to proxy a Node.js server with nginx. My issue is that I can't access my Node.js server on localhost, neither with curl nor through nginx.

When I open port 3000 to the outside, I can see my Node.js server running, but curl and nginx cannot reach it on localhost.

One weird thing is that we also have a MongoDB instance running, and it can be accessed on localhost without any issue.

My instance ID is the following: i-0fef394cbc350a4f4

Here's the link to the nginx server that cannot reach localhost and therefore times out: http://ec2-54-246-136-64.eu-west-1.compute.amazonaws.com/

I cannot find another way to demonstrate that curl localhost:3000 fails, so here is the server code for reference:

var sticky = require("socketio-sticky-session");
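// Note: `app` (the Express app), `cluster`, `log` and `connectToDispatcher`
// are presumably defined/required elsewhere in the file; they are omitted
// from this excerpt.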
sticky(function () {
  // This code will be executed only in slave workers

  var http = require('http');
  var socketIO = require('socket.io');

  var server = http.createServer(app);

  var io = socketIO(server);
  require('./app_api/config/socketio')(io);

  io.on('connection', (socket) => {
    log.trace("new user connected");
    socket.on('disconnect', () => {
      log.trace("user disconnected");
    });
  });
  connectToDispatcher();

  process.title = "share_place";

  return server;
}).listen(3000, function () {
      console.log((cluster.worker ? 'WORKER ' + cluster.worker.id : 'MASTER') + '| PORT ' + 3000)

      app.domain.on('error', (er) => {
        log.error('error', er.stack);

        try {
          // make sure we close down within 30 seconds
          var killtimer = setTimeout(() => {
            process.exit(1);
          }, 30000);
          // But don't keep the process open just for that!
          killtimer.unref();

          // stop taking new requests.
          server.close();

          // Let the master know we're dead.  This will trigger a
          // 'disconnect' in the cluster master, and then it will fork
          // a new worker.
          cluster.worker.disconnect();

          // try to send an error to the request that triggered the problem
          res.statusCode = 500;
          res.setHeader('content-type', 'text/plain');
          res.end('Oops, there was a problem!\n');
        } catch (er2) {
          // oh well, not much we can do at this point.
          log.error('Error sending 500!', er2.stack);
        }
      });

    }
);

An even weirder situation: we installed the Node code on an Elastic Beanstalk instance.

We can access the Node server from the outside: http://share-place.eu-west-1.elasticbeanstalk.com:8081/

But the default nginx from Amazon replies with a "connection refused" message (for public resources that we can access from the outside).

/var/log/nginx/error.log

2017/04/21 14:32:19 [warn] 3360#0: duplicate MIME type "text/html" in /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf:42
2017/04/21 14:37:48 [warn] 7358#0: duplicate MIME type "text/html" in /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf:42
2017/04/21 14:38:41 [error] 7362#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 41.226.2.5, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "share-place.eu-west-1.elasticbeanstalk.com"
2017/04/21 14:38:41 [error] 7362#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 41.226.2.5, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8081/favicon.ico", host: "share-place.eu-west-1.elasticbeanstalk.com", referrer: "http://share-place.eu-west-1.elasticbeanstalk.com/"
2017/04/21 14:38:45 [error] 7362#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 41.226.2.5, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "share-place.eu-west-1.elasticbeanstalk.com"
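For reference, a minimal sketch of what that setup seems to expect (assuming the standard Elastic Beanstalk Node.js platform, where nginx proxies requests to http://127.0.0.1:<PORT>, 8081 by default as in the log above): the app has to listen on process.env.PORT and accept loopback connections.

var http = require('http');

// Listen on the port Elastic Beanstalk's nginx proxies to (process.env.PORT,
// 8081 by default) so that connections from 127.0.0.1 are accepted.
var port = process.env.PORT || 8081;
http.createServer(function (req, res) {
  res.end('ok\n');
}).listen(port, function () {
  console.log('listening on port ' + port);
});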

Zied Hamdi

2 Answers

1

Maybe you are using the net module; that's why you can't make HTTP requests in some cases.

Check whether socketio-sticky-session uses net.createServer or http.createServer.
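To illustrate the difference (a minimal sketch, not the library's actual code): a bare net.createServer accepts the TCP connection, so the port looks open from the outside, but it never speaks HTTP, whereas http.createServer answers the request.

var net = require('net');
var http = require('http');

// Accepts TCP connections (the port looks "open") but never writes an
// HTTP response, so curl http://localhost:3000 hangs until it times out.
net.createServer(function (socket) {
  // connection accepted, nothing is ever written back
}).listen(3000);

// Answers HTTP requests, so curl http://localhost:3001 gets a reply.
http.createServer(function (req, res) {
  res.end('ok\n');
}).listen(3001);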

  • Thanks Marwen, we indeed had to use socketio-sticky-session instead of handling our workers ourselves, and I suspected it would come back like a boomerang someday (then I forgot about it :) ). You're absolutely right: it's net.createServer that makes the server receive the request but never answer. – Zied Hamdi Apr 24 '17 at 17:22
0

There can be a lot of problems here, and with so little info and no code provided it is impossible to give you any definitive answer.

Your app can listen on port 3000 but bind to a specific interface, so it may be reachable via an external IP but not via localhost, which uses the loopback interface.
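For example (a sketch with a made-up private address), a server bound to a single interface refuses connections on 127.0.0.1 even though the port is open externally:

var http = require('http');
var server = http.createServer(function (req, res) { res.end('ok\n'); });

// Bound only to one interface: reachable via 172.31.5.12 (hypothetical
// private address of the instance) but "connection refused" on 127.0.0.1.
server.listen(3000, '172.31.5.12');

// Bound to all interfaces: reachable both externally and via localhost.
// server.listen(3000, '0.0.0.0');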

Your app may be run in a container that forwards its port to an external interface but not to localhost of other containers.

The URL that you included in the question may actually be, and likely is, a reverse proxy that handles the connections, so what you see on the open ports may not reflect the actual state of the host that your services run on.

You can try using the external IP instead of localhost and see if it works. You can try binding to a port on a specific interface in your Node app (I don't know how you do it currently because you didn't include any code example in the question). Then you can try closing the ports available from the outside and restricting access, and see when it stops working, doing one thing at a time to narrow down the problem.
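One quick check (a small diagnostic sketch, assuming you keep a reference to the HTTP server object) is to log the address the server actually bound to once it starts listening:

// Logs the interface/port the server is really bound to, e.g.
// { address: '0.0.0.0', family: 'IPv4', port: 3000 } vs. a specific IP.
server.on('listening', function () {
  console.log('bound to', server.address());
});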

rsp
  • Thanks @rsp for your suggestions. I didn't include any code since there's nothing special in it; I can paste it here just for reference. I don't understand what you mean by "Your app can listen on port 3000 but it may bind to a specific interface". I tried accessing my server with curl, but through its public API; this also failed (while we can see it working perfectly in the browser). – Zied Hamdi Apr 21 '17 at 13:49