
This may be a stupid question, but after googling for a while I can't find the answer, or maybe I just don't know how to ask it.

I have a web app running on a server named 'myserver1'. I've brought up 'myserver2' with an identical instance of the web app, and set up replication between the two databases on the two boxes. Now, I'd like to employ nginx to do some load balancing, plus make one server take over if the other keels over.

Most of the nginx documentation is written around a simple scenario like this, but it seems to indicate that you put an nginx server in front of the web servers. That would seem to be another single point of failure. How do you make nginx itself redundant? Can you just run nginx on both web server boxes? If so, where do you point the DNS entry of myapp.mydomain.com?

EDIT: I guess I should add that this is for an internal app with a relatively small user base. My primary concern is that our internal users can still get to it if we lose a server or connectivity to one of the data centers. I just can't see how to do that on nginx without introducing another single point of failure.

coding_hero
  • Your question doesn't really have anything to do with nginx - you're asking about server architecture. You need at least 3 servers ([a load balancer](http://nginx.org/en/docs/http/load_balancing.html) and two application servers) to do what you're asking; the load balancer is where the DNS points. With only 2 servers you're better off using one for the db and the other for the application. – AD7six Nov 06 '14 at 21:06
  • That's what I've surmised from my reading, but even with three servers, you have a single point of failure at the load balancer. That seems counter-intuitive. – coding_hero Nov 06 '14 at 21:08
  • Bearing in mind you have only 2 servers, it seems a bit early to worry about how to manage multiple load balancers (which is _effectively_ what you're wondering about, and where DNS load balancing comes in). – AD7six Nov 06 '14 at 21:13
  • 1
    I don't *only* have two servers. We have a pretty large environment, and can spin up new vm's as needed. It just seems odd to try to increase availability of a web app by adding *another* single point of failure. But, there could be something I'm not getting. – coding_hero Nov 06 '14 at 21:22
  • Load balancing is a very light task. IME a load balancer is also specified generously enough to ensure it can cope with any traffic it's likely to see. The bottleneck/problem is always the DB - having the database and the application on the same server is the very first thing to change if you have plenty of hardware (hopefully VMs don't mean you're treating a VM as a production server), and if you're replicating the database between two application-server instances, more so =). Anyway, if you want more search results, just remove "nginx" from what you've been searching for - it's not nginx specific. – AD7six Nov 06 '14 at 21:38
  • I guess I should make clear: my primary goal is not to balance some overwhelming load. This is a small app that is only used inside my company. My primary goal is to make sure that it's still available if we lose part of the network or a data center. The two boxes are logically and geographically diverse. The failover is the important part here. And I don't see how to address it with nginx without introducing another single point of failure. – coding_hero Nov 06 '14 at 22:08
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/18463/discussion-between-coding-hero-and-ad7six). – coding_hero Nov 06 '14 at 22:22

1 Answer


The only way to load-balance with nginx is to have a single frontend (reverse-proxy) host that load-balances the backend servers.

The idea behind this design is that the load is borne only by the backends, and that your single entrypoint will always be able to cope with whatever amount of traffic it has to deal with, since it merely proxies requests and never processes anything itself.
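For illustration, here is a minimal sketch of that single-frontend layout on the proxy host (the backend hostnames reuse the names from the question; the listen port and the upstream name are assumptions):

```
# Hypothetical nginx.conf on the frontend host -- a sketch, not a drop-in config.
events {}

http {
    # The two application servers from the question.
    upstream myapp_backends {
        server myserver1;
        server myserver2;
    }

    server {
        listen 80;
        server_name myapp.mydomain.com;

        location / {
            # The frontend only proxies; all application work happens on the backends.
            proxy_pass http://myapp_backends;
        }
    }
}
```

With this layout every request flows through the one frontend, which is exactly the single point of failure the question is worried about.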

What you are talking about is actually failover, not load-balancing. Your concern is the failure of your single entrypoint.

As @coding_hero explained, this has nothing to do with nginx; it is something to be dealt with at the underlying layers (OS/network).

One way of doing it is described on the following page (an old example based on Debian oldstable, so the commands might need to be freshened up): http://linuxmanage.com/fast-failover-configuration-with-drbd-and-heartbeat-on-debian-squeeze.html. Heartbeat is a well-known technology that allows several identical servers to monitor each other, electing a master and failing over to slaves when needed.
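As a rough, untested sketch of what a classic two-node Heartbeat (v1) setup could look like (the node names come from the question; the virtual IP, interface and timings below are placeholder assumptions):

```
# /etc/ha.d/ha.cf -- hypothetical settings; adjust the interface and timings
# to your network. A shared /etc/ha.d/authkeys file is also required.
logfacility local0
keepalive 2          # seconds between heartbeats
deadtime 15          # declare the peer dead after this many seconds of silence
bcast eth0           # interface used to exchange heartbeats
auto_failback on
node myserver1
node myserver2

# /etc/ha.d/haresources -- myserver1 is the preferred master; on failure,
# myserver2 takes over the virtual IP (192.168.0.50 is made up) and starts nginx.
myserver1 192.168.0.50 nginx
```

DNS for myapp.mydomain.com would then point at the virtual IP rather than at either box, so clients always reach whichever node currently holds the master role.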

There is even dedicated network hardware that does the same job, rerouting (or perhaps reconfiguring routers on the fly to reroute?) traffic to the currently elected master.

Bernard Rosset
  • Thank you for the explanation. I found Heartbeat yesterday after asking this question, so that's the way I plan to go. – coding_hero Nov 07 '14 at 22:08
  • 1
    I find [this blog post](https://www.cloudsigma.com/an-introduction-to-fail-over-in-the-cloud/) a much more useful guide than the above referenced one, at least for a cloud setup, in case it helps anyone – matt Jun 13 '16 at 17:53