
I have 3 servers, each with a 100 Mbps link.

The first server has an nginx load balancer, configured like this:

http {
  upstream myproject {
    server 127.0.0.1:8000 weight=3;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
  }

  server {
    listen 80;
    server_name www.domain.com;
    location / {
      proxy_pass http://myproject;
    }
  }
}

On servers 2 and 3 I have a video.

I would like to know: if 1 user = 1 Mbps, and 10 users visit server 1, will it send them to server 2 or 3? How much bandwidth will I have on servers 1, 2, and 3?

200_success
arlind
  • Seems like this might be a dupe of http://serverfault.com/questions/813675/linux-bandwidth-load-balancer – ceejayoz Nov 08 '16 at 23:18

1 Answer


Theoretically, having 10 users at 1 Mbps each connected to your service via your load balancer would consume 10 Mbps downstream from that load balancer's point of view, then probably something like 5 Mbps upstream to server 2, and another 5 Mbps upstream to server 3.

With 100 Mbps full-duplex connectivity on your load balancer, you could serve up to 100 users at 1 Mbps each. With a half-duplex link (does that still exist, IRL?), you'd be limited to 50.
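The arithmetic can be sketched as a quick back-of-the-envelope calculation (the 1 Mbps per user and the even split across two backends are the question's assumptions, not measured values):

```python
# Bandwidth math for the proxying setup, under the question's assumptions:
# each user streams at 1 Mbps, each server has a 100 Mbps link, and the
# load balancer proxies (rather than redirects) between clients and the
# two backends holding the video.

link_mbps = 100      # capacity of each server's link
per_user_mbps = 1    # assumed per-user streaming rate
users = 10

# The proxy relays every byte: it receives from a backend and re-sends
# to the client, so each user costs 1 Mbps in AND 1 Mbps out.
lb_down = users * per_user_mbps   # backends -> load balancer
lb_up = users * per_user_mbps     # load balancer -> clients

# Assuming an even split across the two backends:
per_backend = lb_down / 2

print(lb_down, lb_up, per_backend)   # 10 10 5.0

# Full duplex: in and out don't share capacity, so the proxy tops out
# at link_mbps / per_user_mbps users. Half duplex: they do share it.
max_users_full_duplex = link_mbps // per_user_mbps        # 100
max_users_half_duplex = link_mbps // (2 * per_user_mbps)  # 50
```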


Update:

To clarify: if you do want to add up the bandwidth of your "backend" servers, then the "frontend" server needs to redirect clients to one of those backends.

From the network point of view, your client establishes a TCP connection to your "load balancer". That session keeps running until you create another one somewhere else. And your load balancer balances traffic; it does not redirect it.

Adding up bandwidths would imply one of the following:

  • use some sort of round-robin DNS: your public record would point to each of your backend servers. No rewrites. A backend failure would mean client requests get lost until you either update your DNS records or fix the backend server
  • use your first server to rewrite (redirect) clients to one of your backend servers. This implies your backend servers are reachable through distinct public DNS names. You would reconvert your "load balancer" into some sort of "rewriting gateway". It could also imply some custom scripting that checks backend availability and toggles configuration blocks on the nginx side.

Update², since new threads are getting started on that matter...

One way to do it would be to have your first server use the rewrite directive instead of proxy_pass. The difficulty is rewriting to a dynamic address, based on which backend is actually up. A solution is given here: http://www.sourceprojects.org/nginx-stateless-loadbalancer
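A minimal sketch of that "rewriting gateway" idea, assuming the backends are reachable at the hypothetical public names video1.example.com and video2.example.com (nginx's split_clients module picks a backend per client, and return 302 sends the client there directly, so the video bytes never transit server 1; note this sketch does no health checking):

```nginx
http {
  # Hash the client address into two buckets, one per backend.
  split_clients "${remote_addr}" $video_backend {
    50% video1.example.com;
    *   video2.example.com;
  }

  server {
    listen 80;
    server_name www.domain.com;

    location / {
      # Redirect instead of proxying: server 1 only ever serves
      # the small 302 response, not the video payload.
      return 302 http://$video_backend$request_uri;
    }
  }
}
```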

Another way would be some sort of round-robin DNS. You may not need a record for server1 at all; your zone would look something like this:

$ORIGIN example.com.
myservicename A server2.ip.address
myservicename A server3.ip.address

SYN