1

Say someone makes an HTTP GET/POST request to api.example.com/a/b

Now say I have ten servers set up as my backend servers.

I want a proxy to act as a load balancer and, during the initial request/handshake, respond with:

apiX.example.com/a/b

where X is a number in the range 1..10

If HAProxy isn't the right tool for this, what would you suggest?

What benefits do hardware-based load balancers offer?

Update

Generally, from what I understand of proxies, HAProxy will take a request, proxy it to a backend server, wait for the response, and then send the answer back to the client. The client has no idea which backend server responded to its request.

Now, if I have ten backend servers, the HAProxy server will be overloaded, since it has to handle the combined traffic/bandwidth of all ten servers: every request and response goes through the HAProxy server.

I am curious whether HAProxy could hand off the request to a particular backend server, so that the client then talks directly to that backend server (the backends would be publicly accessible at api3.example.com, or more generally api[1..10].example.com).

The client will be making only a single request, so the session lasts for a single exchange: the client makes an HTTP GET/POST request, waits for a response, and that's it.
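
At the HTTP level, the hand-off I have in mind would look roughly like this (api3 is just an example of the server the load balancer happens to pick):

    GET /a/b HTTP/1.1
    Host: api.example.com

    HTTP/1.1 302 Found
    Location: http://api3.example.com/a/b

    GET /a/b HTTP/1.1
    Host: api3.example.com

    HTTP/1.1 200 OK
    (response body follows)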

Blankman
  • Can you provide more details? I'm not sure that I get your proposition. – Marcelo Bittencourt Jan 02 '12 at 03:16
  • @MarceloBittencourt I've updated my question. – Blankman Jan 02 '12 at 03:26
  • It may be possible: on the first hit, the chosen server can issue a redirect (302) to itself, and all the later conversation goes directly to that server. But I don't think you can overload HAProxy that easily. My main HAProxy gets 11 million hits/day, and it's running on a single-processor VPS with only 1 GB of RAM – Marcelo Bittencourt Jan 02 '12 at 03:46
  • @MarceloBittencourt but if clients are uploading files that are 20-50kb in size, that is a lot of data going through a single server, no? – Blankman Jan 02 '12 at 03:50
  • 2
    I believe it can handle it. Nearly half of my 11 million hits are images and other static content. Did you know that Stack Exchange uses HAProxy? http://blog.serverfault.com/2011/09/30/the-stack-exchange-architecture-2011-edition-episode-1/ – Marcelo Bittencourt Jan 02 '12 at 04:09

2 Answers

0

Whether this is possible depends on your setup. If your clients are configured to use a proxy server (in this case HAProxy), or you always transparently proxy all requests through HAProxy, then your idea cannot be implemented.

On the other hand, if the name api.example.com points to the HAProxy host itself, you can use redirects to send HTTP 302 codes to your clients. The redirects should point to one of the backend servers. You then need a way to issue different redirect statements for different backend servers based on some criterion; you can pick something that varies more or less randomly, such as the client's source port or source IP.
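
For example, a minimal sketch of that idea, splitting clients on their source port between two backends (the hostnames and the split criterion are placeholders; the same pattern extends to api1..api10):

    frontend api_front
        bind *:80
        # crude split: clients with low source ports are redirected to api1, the rest to api2
        acl low_src_port src_port lt 32768
        redirect prefix http://api1.example.com code 302 if low_src_port
        redirect prefix http://api2.example.com code 302 if !low_src_port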

Khaled
  • Yes, api.example.com will point to HAProxy itself, and yes, I will distribute load (initially) using round robin. – Blankman Jan 02 '12 at 13:21
0

Have you considered having your backend servers append a header with their own server id/host name to the response?

Update 10/8/12: HAProxy 1.4.20 added an http-send-name-header option, which injects the name of the selected backend server as a header into the forwarded request (the backend can then echo it back in its response).
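
A minimal sketch of a backend section using that directive (the header name and server addresses are illustrative):

    backend api_servers
        balance roundrobin
        # inject the name of the selected server as a header into the forwarded request
        http-send-name-header X-Backend-Server
        server api1 10.0.0.1:80 check
        server api2 10.0.0.2:80 check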