
I need to upgrade our current single-server infrastructure to a multi-server one. We basically run an HTTP app, MySQL, and a few other services.

My idea is to put HAProxy on the front servers so they can balance and fail over between themselves.

Something like this:

        WAN
    |         |
|------|  |------|
| HAP1 |  | HAP2 |      HAProxy servers 
|------|  |------|
   \         /
    ----X----
   /         \
|------|  |------|
| NGX1 |  | NGX2 |      Nginx webservers
|------|  |------|

The idea is to configure the public hostname with round-robin DNS pointing to HAP servers, which will then balance to the web servers.
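For illustration, round-robin DNS here just means publishing two A records for the same public hostname, one per HAProxy node. This is a hypothetical zone fragment with documentation-range IPs, not your actual addresses:

```
; Two A records for the same name: resolvers will rotate between
; them, giving a rough round-robin spread across the HAP servers.
www    300  IN  A  203.0.113.10   ; HAP1
www    300  IN  A  198.51.100.20  ; HAP2
```

A short TTL (300s here) limits how long clients keep hitting a dead node, at the cost of more DNS traffic.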

To avoid a totally random distribution I'd like to use the leastconn algorithm, but here's the problem: is it possible for the HAP servers to share with each other how they've balanced incoming connections? I'd like to avoid both servers considering a backend lightly loaded just because the upstream round robin split the traffic between them.

EDIT: I'd like to avoid using keepalived and a virtual IP shared by the two HAP because the servers will be in different datacenters.

Maxxer

1 Answer


If you plan to have active/active load-balancers in different datacenters, it's going to be pretty hard to reliably share connection status info both ways.

Given that you're concerned with the load on the backend servers as the means to decide which one to choose, it stands to reason you should ask the backends themselves how much traffic to send.

Thankfully, HAProxy has directives for this: agent-check and agent-port. The weight and agent-inter directives are also relevant, since they affect how the agent's reply is applied and how often it is polled.

This lets you run a small service on each backend server which responds with a string, as outlined in the docs, and that reply adjusts the effective weight of the server.
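A minimal sketch of what that looks like on the HAProxy side; the server addresses and the agent port (9777) are placeholders, assuming an agent listens on each backend:

```
backend web
    balance leastconn
    # Poll a weight-reporting agent on each backend every 5s;
    # a reply like "75%" scales the configured weight of 100.
    server ngx1 10.0.0.11:80 check weight 100 agent-check agent-port 9777 agent-inter 5s
    server ngx2 10.0.0.12:80 check weight 100 agent-check agent-port 9777 agent-inter 5s
```

Because both HAP nodes poll the same agents, they converge on the same view of backend load without needing to share state with each other.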

The agent could check almost anything (CPU, memory, network, dependent services, time of day, weather), so long as it returns sane values that HAProxy can use and understand.
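As a sketch of such an agent, here is a hypothetical Python service that replies with a weight percentage derived from the 1-minute load average (the port and the load-to-weight mapping are my own assumptions, not anything HAProxy mandates):

```python
import os
import socketserver

def weight_from_load(load1: float, cores: int) -> str:
    """Map the 1-minute load average to an HAProxy agent reply.

    Fully idle -> "100%"; load equal to the core count -> "0%".
    HAProxy multiplies the server's configured weight by this percentage.
    """
    pct = max(0, min(100, round(100 * (1 - load1 / cores))))
    return f"{pct}%\n"

class AgentHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # HAProxy connects, reads one line, and disconnects.
        load1 = os.getloadavg()[0]
        cores = os.cpu_count() or 1
        self.wfile.write(weight_from_load(load1, cores).encode())

if __name__ == "__main__":
    # Listen on the agent-port configured in HAProxy (9777 in this sketch).
    with socketserver.TCPServer(("0.0.0.0", 9777), AgentHandler) as srv:
        srv.serve_forever()
```

The reply could just as well be "drain" or "down" to take a backend out of rotation entirely; anything in the documented agent reply format works.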

GregL
My main concern was to have the frontend proxies distribute connections equally among the backend servers, but after a chat yesterday on IRC and after your answer I realized that's the wrong focus. As you point out, load should be sent where there's less of it. So even if it's not the literal answer to my question, I'm going to accept yours. Thanks – Maxxer Sep 30 '16 at 07:42