
I am about to deploy a brand new node.js application, and I need some help setting this up.

My current setup is as follows.

I have Varnish running on external_ip:80

I have Nginx behind running on internal_ip:80

Both are listening on port 80, one on the internal IP and one on the external IP.

NOTE: the node.js app runs on WebSockets

Now I have my new node.js application that will listen on port 8080.

Can I set up Varnish so that it sits in front of both Nginx and node.js?

Varnish has to proxy the WebSocket traffic to port 8080, but the static files such as CSS, JS, etc. have to go through port 80 to Nginx.

Nginx does not support WebSockets out of the box, otherwise I would do a setup like:

varnish -> nginx -> node.js

Saif Bechan

1 Answer


Having just set up a project that is essentially identical to what you describe, I'll share my approach - no guarantees that it is 'the best', but it does work.

My server stack is

  • Varnish (v3.0.2) - all interfaces, port 80
  • Nginx (v1.0.14) - local interface, port 81
  • Node.js (v0.6.13) - local interface, port 1337
  • Operating system is CentOS 6.2 (or similar)

My Node.js app uses WebSockets (socket.io - v0.9.0) and Express (v2.5.8) - and is launched using forever. (The same server also hosts other sites - primarily PHP - which use the same instances of Nginx and Varnish.)
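For reference, launching under forever looks like the following (the entry-point filename app.js is an assumption here; substitute your own):

#install forever globally, then start and inspect the process
npm install -g forever
forever start app.js
forever list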

The basic intention of my approach is as follows:

  • Single public port/address for both websocket and 'regular' data
  • Cache some assets using Varnish
  • Serve (uncached) static assets directly from nginx
  • Pass requests for 'web pages' to nginx, and from there proxy to Node.js
  • Pass web socket requests directly (from Varnish) to Node.js (bypass nginx).

Varnish config - /etc/varnish/default.vcl:

#Nginx - on port 81
backend default {
  .host = "127.0.0.1";
  .port = "81";
  .connect_timeout = 5s;
  .first_byte_timeout = 30s;
  .between_bytes_timeout = 60s;
  .max_connections = 800;
}
#Node.js - on port 1337
backend nodejs {
  .host = "127.0.0.1";
  .port = "1337";
  .connect_timeout = 1s;
  .first_byte_timeout = 2s;
  .between_bytes_timeout = 60s;
  .max_connections = 800;
}

sub vcl_recv {
    set req.backend = default;

    #Keeping the IP addresses correct for my logs
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
            req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }

    #remove port, if included, to normalize host
    set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");

    #Part of the standard Varnish config
    if (req.request != "GET" &&
      req.request != "HEAD" &&
      req.request != "PUT" &&
      req.request != "POST" &&
      req.request != "TRACE" &&
      req.request != "OPTIONS" &&
      req.request != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }

    #Taken from the Varnish help on dealing with Websockets - pipe directly to Node.js
    if (req.http.Upgrade ~ "(?i)websocket") {
        set req.backend = nodejs;
        return (pipe);
    }

    ### Removed some cookie manipulation and compression settings ###


    if (req.http.Host ~ "^(www\.)?example.com") {
        #Removed some redirects and host normalization
        #Requests made to this path, even if only XHR polling, still benefit from piping - pass does not seem to work
        if (req.url ~ "^/socket.io/") {
            set req.backend = nodejs;
            return (pipe);
        }

    #I have a bunch of other sites which get included here, each in its own block
    } elseif (req.http.Host ~ "^(www\.)?othersite.tld") {
        #...
    }

    #Part of the standard Varnish config
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }

    #Everything else, lookup
    return (lookup);
}


sub vcl_pipe {
    #Need to copy the upgrade for websockets to work
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    set bereq.http.Connection = "close";
    return (pipe);
}
#All other functions should be fine unmodified (for basic functionality) - most of mine are altered to suit my purposes; I find that adding a grace period, in particular, helps.
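As a rough illustration of that grace period (the values here are illustrative assumptions, not my production settings), for Varnish 3.x it amounts to something like:

#inside vcl_recv - allow slightly stale objects to be served while a fresh one is fetched
set req.grace = 30s;

#inside vcl_fetch - keep objects past their TTL so grace has something to serve
set beresp.grace = 1h;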

Nginx config - /etc/nginx/*/example.com.conf:

server {
    listen *:81;
    server_name example.com www.example.com static.example.com;
    root /var/www/example.com/web;
    error_log /var/log/nginx/example.com/error.log info;
    access_log /var/log/nginx/example.com/access.log timed;

    #removed error page setup

    #home page
    location = / {
        proxy_pass http://node_js;
    }

    #everything else
    location / {
        try_files $uri $uri/ @proxy;
    }
    location @proxy {
        proxy_pass http://node_js;
    }

    #removed some standard settings I use
}

upstream node_js {
    server 127.0.0.1:1337;
}

I am not particularly crazy about the repetition of the proxy_pass statement, but haven't gotten around to finding a cleaner alternative yet, unfortunately. One approach may be to have a location block specifying the static file extensions explicitly and a single catch-all location holding the proxy_pass statement (nginx does not allow proxy_pass outside of a location context).
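For what it's worth, a rough sketch of that idea (the extension list here is an assumption, adjust to taste):

#serve known static extensions straight from disk
location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
    try_files $uri =404;
}

#everything else goes to Node.js
location / {
    proxy_pass http://node_js;
}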

A few settings from /etc/nginx/nginx.conf:

set_real_ip_from 127.0.0.1;
real_ip_header X-Forwarded-For;

log_format  timed  '$remote_addr - $remote_user [$time_local] "$request" '
                   '$status $body_bytes_sent "$http_referer" '
                   '"$http_user_agent" "$http_x_forwarded_for" '
                   '$request_time $upstream_response_time $pipe';

port_in_redirect off;

Among my other server blocks and settings, I also have gzip and keepalive enabled in my nginx config. (As an aside, I believe there is a TCP module for Nginx which would enable the use of websockets - however, I like using 'vanilla' versions of software (and their associated repositories), so that wasn't really an option for me).
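For completeness, the gzip and keepalive settings amount to directives along these lines in the http block (the values and type list are illustrative, not my exact config):

gzip on;
gzip_types text/css application/x-javascript application/json;
keepalive_timeout 65;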

A previous version of this setup resulted in an unusual 'blocking' behaviour with the piping in Varnish. Essentially, once a piped socket connection was established, the next request would be delayed until the pipe timed out (up to 60s). I haven't yet seen the same recur with this setup - but would be interested to know if you see a similar behaviour.

cyberx86
  • +1 thanks for this complete presentation. I am at work right now, I will try this tonight and report back. I already did an initial setup last night that worked, but this looks a lot better. Thanks! – Saif Bechan Mar 19 '12 at 11:08
  • Ah, as I said, in the meantime I had done some configuration myself. I also switched to socket.io and express, and the socket.io git repo has a basic config for varnish. The main trick here was to set up 2 backends, just as you have in the example. You have some minor tweaks which I can use. I also don't need the proxy_pass in nginx, as I have a single page that loads directly from express to varnish. For my static files I use a different domain, and eventually a CDN. Glad someone helped me out with this, it is a pretty sweet setup. – Saif Bechan Mar 19 '12 at 17:48
  • Just out of curiosity, what is your node.js client/server setup like? I have spent the last few days learning backbone.js. I think it has a nice way of dealing with server-side javascript. On the backend I use mongodb, with mongoose to structure it. So on both ends I have kind of the same structure of models/collections. I have to say it does take some time to wrap my head around it, especially coming from php/mysql. I like the challenge though; too bad there are not that many articles/tutorials on the subject. – Saif Bechan Mar 19 '12 at 17:59
  • Glad it was helpful - websockets have been fun to play with so far (although, I spent longer than ideal getting sessions to work well with socket.io). One of the side effects, if you will, of proxying the page through Nginx is the use of gzip - the setup is easier in Nginx (compared to Varnish) and the performance is better (compared to Express) - the added time per request is under 2ms, which I can live with (I spent a good bit of time getting timings at each layer to isolate some issues I had). – cyberx86 Mar 19 '12 at 17:59
  • I use Express as a framework - with Jade as a server-side templating engine. My changes are committed through SVN (I keep meaning to switch to GIT, but haven't yet) - and a post-commit hook runs the JS and CSS through closure-compiler and YUI compressor. No static assets are served by Node.js - even the socket.io client is included in the minified JS that is generated by closure-compiler. Client side I use jQuery and jsrender templates. I use MongoDB for sessions and data - but use the native driver. I haven't really played with Backbone.js yet, although, it is on my todo list. – cyberx86 Mar 19 '12 at 18:05
  • With regard to the static files - interestingly you don't need a separate (nginx) server block for it. You'll note that I have 'static.example.com' in my nginx config. By simply referencing that in my files, I avoid the issues with cookies and have an essentially 'separate' domain. That said, a CDN would be the way to go (although, honestly, I don't really find that Cloudfront improves load time for small files). I have this setup running on a local VM and on EC2 - makes for easy testing. Compared to my other Varnish/Nginx configs, this one is actually fairly straight-forward - a nice change. – cyberx86 Mar 19 '12 at 18:13
  • @cyberx86 Is it possible to have SSL with this particular setup too? I heard that Varnish doesn't support SSL, but my Node app I want to setup will need SSL. – littlejim84 Jul 07 '12 at 17:53
  • @littlejim84: Varnish [does not support SSL](https://www.varnish-cache.org/docs/trunk/phk/ssl.html), however, the way I implement it, I have Nginx listening on the SSL port. Usually I just use SSL for logins - so, the login page is served Node.js --> Nginx (port 443), over SSL, and bypasses Varnish. While other pages are served either Node.js --> Nginx --> Varnish or Node.js --> Varnish (websockets). – cyberx86 Jul 11 '12 at 17:28
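To make the SSL arrangement described in that last comment concrete, a minimal sketch of the nginx side could look like this (the certificate paths are assumptions; the node_js upstream is the one defined earlier):

#SSL terminated at nginx; these requests bypass Varnish entirely
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://node_js;
        proxy_set_header X-Forwarded-Proto https;
    }
}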