
I have an Nginx load balancer set up following the Servers For Hackers tutorial site. It proxies to two servers in a round-robin setup. I have a self-signed SSL certificate in place to test an HTTP-to-HTTPS redirect.

When I access the IP address of the load balancer, requests are only forwarded to the first IP address in the upstream app block. I want traffic split 50/50.

Looking at the configuration file below, can someone tell me how to do this? These are all Amazon EC2 instances.

The HTTP-to-HTTPS redirect is working, and the proxy is working to the first server.

upstream app {
    server 172.31.33.5:80 weight=1;
    server 172.31.42.208:80 weight=1;
}

server {
    listen 80 default_server;

    # Requests to /.well-known should look for local files
    location /.well-known {
        root /var/www/html;
        try_files $uri $uri/ =404;
    }

    # All other requests get load-balanced
    location / {
        return 301 https://$http_host$request_uri;
    }
}

server {
    listen 443 ssl default_server;


    ssl_protocols              TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers                ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS;
    ssl_prefer_server_ciphers  on;
    ssl_session_cache          shared:SSL:10m;
    ssl_session_timeout        24h;
    keepalive_timeout          300s;

    ssl_certificate      /etc/pki/tls/certs/load_balance.crt;
    ssl_certificate_key  /etc/pki/tls/certs/load_balance.key;

    charset utf-8;

    location / {
        include proxy_params;
        proxy_pass http://app;
        proxy_redirect off;

        # Handle Web Socket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Links: debug log · fastcgi.conf · nginx.conf · nginx files

LeDoc
  • Curious why you're using a single Nginx instance rather than a managed load balancer. This sounds like a single point of failure. Load balancers do have a cost associated with them, but it's not a huge cost. An application load balancer should be able to do what I see in your Nginx configuration file. – Tim Dec 15 '19 at 22:40
  • I'm just following a tutorial website, as I'm teaching myself these things. Happy to hear of alternative solutions. – LeDoc Dec 16 '19 at 08:17

4 Answers


weight=1 is the default, so specifying it explicitly or omitting it makes no difference.
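For example, these two upstream blocks behave identically, since round robin with equal weights is nginx's default balancing method:

upstream app {
    server 172.31.33.5:80 weight=1;
    server 172.31.42.208:80 weight=1;
}

# equivalent:
upstream app {
    server 172.31.33.5:80;
    server 172.31.42.208:80;
}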

Overall, your config looks correct.

Try removing the working server from the upstream list, leaving just the one that doesn't work. I suspect it won't work at all, because of a routing or firewall issue.
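That isolation test could look like this (commenting out the working server so nginx has no fallback):

upstream app {
    # server 172.31.33.5:80 weight=1;
    server 172.31.42.208:80 weight=1;
}

If requests now fail, the problem is reachability of 172.31.42.208 from the load balancer (on EC2, security groups are a common culprit), not the balancing configuration itself.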

Sergey Nudnov
  • I've tried commenting out the working server, and the proxy now successfully forwards to the other server. – LeDoc Dec 15 '19 at 21:38
  • @LeDoc, Try enabling the [debug log](http://nginx.org/en/docs/debugging_log.html), connecting to the second server in your upstream, and reviewing the log entries. Post them here if you feel that's necessary. – Sergey Nudnov Dec 15 '19 at 21:50
  • I've uploaded my debug log. If you can spot anything, that would be much appreciated. – LeDoc Dec 16 '19 at 08:15
  • @LeDoc, I didn't like the first 7 lines in the log you provided, particularly this: `unknown directive "qupstream" in /etc/nginx/sites-enabled/default:1`. Not sure why nginx starts at all with `emerg` errors. Test your config with `/usr/bin/nginx -t`. Ensure you have only one copy of nginx installed and that it is using the correct config. It may also be beneficial to look at the whole set of nginx configs, if you could share them - all `.conf` files in the `/etc/nginx` folder. – Sergey Nudnov Dec 17 '19 at 02:57
  • Ok, I've added links to the config files. – LeDoc Dec 17 '19 at 12:12
  • @LeDoc, Could you please zip and provide everything from `/etc/nginx/modules-enabled/*.conf`, `/etc/nginx/conf.d/*.conf`, `/etc/nginx/sites-enabled/*`. By the way, it is better to specify: `include /etc/nginx/sites-enabled/*.conf;` – Sergey Nudnov Dec 20 '19 at 16:23
  • No problem - added a link now in the original question. – LeDoc Dec 22 '19 at 10:45
  • @LeDoc, I can't help you while you don't provide the information. Your archive contains just symlinks. You should review ALL nginx configuration files, starting from nginx.conf and following every include statement in it and in the included files. If you want me to review them, collect them all, archive them, and share. Thanks – Sergey Nudnov Dec 22 '19 at 21:20
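The include tweak Sergey suggests would go in the http block of nginx.conf; restricting the glob to *.conf means stray files (backups, editor swap files) in sites-enabled are ignored. A sketch:

http {
    # ... other http-level directives ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}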

I agree with Sergey. It's very possible there is a firewall issue preventing the one server from receiving traffic, causing nginx to remove it from the pool of available servers.
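nginx's passive health checking is tunable per upstream server via max_fails and fail_timeout; the values below are illustrative, not a recommendation:

upstream app {
    server 172.31.33.5:80 max_fails=3 fail_timeout=30s;
    server 172.31.42.208:80 max_fails=3 fail_timeout=30s;
}

With these settings, after 3 failed attempts nginx marks a server unavailable for 30 seconds; those failures also show up in the error log, which is a quick way to confirm this theory.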


While not directly answering your question, a better solution to your problem is an AWS Application Load Balancer. It's a reasonably well-featured load balancer for basic to standard use cases; you would only need to roll your own for really specific requirements.

AWS ALB is a highly available service with no single point of failure, running across multiple availability zones. That makes it far more reliable than a single instance, and it can be cost-effective compared with running your own instances. Sometimes it costs a little more, but you get real benefits for it.

Tim

Drop the weight=1 and it should work.

Hannes