Say someone makes an HTTP GET/POST request to api.example.com/a/b.
Now say I have ten servers set up as my backend servers.
I want a proxy to act as a load balancer, and during the initial request/handshake have it respond with:
apiX.example.com/a/b
where X is a number in the range 1..10
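To make that concrete, here is roughly the behaviour I'm after as a toy Python sketch (not HAProxy config; the api1..api10 hostnames are just placeholders):

```python
# Toy sketch of a redirect-based dispatcher (illustrative only, not HAProxy).
# It never touches the request body; it just answers with a 302 that points
# the client at one of the backend hostnames, preserving the original path.
import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = [f"api{i}.example.com" for i in range(1, 11)]  # api1..api10 (placeholders)
rotation = itertools.cycle(BACKENDS)                      # simple round-robin choice

class RedirectDispatcher(BaseHTTPRequestHandler):
    def _dispatch(self):
        target = f"http://{next(rotation)}{self.path}"    # e.g. http://api3.example.com/a/b
        self.send_response(302)                           # 307 would preserve a POST body/method
        self.send_header("Location", target)
        self.end_headers()

    do_GET = _dispatch
    do_POST = _dispatch

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectDispatcher).serve_forever()
```

The point being that this front machine only ever sends a tiny redirect response; the actual payload travels between the client and the chosen backend.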
If HAProxy isn't the right tool for this, what would you suggest?
What benefits do hardware-based load balancers offer?
Update
Generally, from what I understand of proxies, HAProxy will take a request, proxy it to a backend server, wait for the response, and then send the answer back to the client. The client has no idea which backend server responded to their request.
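In other words, my mental model of the normal reverse-proxy path is something like this toy sketch (placeholder hostnames again), where every byte of the request and response flows through the proxy process:

```python
# Toy illustration of the ordinary reverse-proxy model: the client only ever
# talks to this process, and the full response is relayed through it.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = [f"http://api{i}.example.com" for i in range(1, 11)]  # placeholders
rotation = itertools.cycle(BACKENDS)

class ForwardingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        upstream = next(rotation) + self.path            # pick a backend
        with urllib.request.urlopen(upstream) as resp:   # proxy waits for the backend...
            body = resp.read()                           # ...and buffers the whole answer
            status = resp.status
        self.send_response(status)                       # then relays it to the client
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ForwardingProxy).serve_forever()
```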
Now, if I have ten backend servers, the HAProxy server will be overloaded, since it has to handle the combined traffic/bandwidth of all 10 servers: every request and every response goes through the HAProxy server.
I am curious whether HAProxy could hand the request off to a particular backend server and then have the client talk directly to that backend server (the backends would be publicly accessible, e.g. at api3.example.com, or api[1..10].example.com in general).
The client will be making only a single request, so the session lasts for just that one exchange: the client makes an HTTP GET/POST request, waits for the response, and that's it.
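So from the client's side the whole thing would look roughly like this (again just a sketch with made-up hostnames; urllib follows the redirect on its own):

```python
# The single-request flow seen from the client (illustrative).
import urllib.request

# The client asks the front-end hostname...
with urllib.request.urlopen("http://api.example.com/a/b") as resp:
    # ...urllib transparently follows the 302, so the body it reads here
    # came straight from the backend, e.g. http://api3.example.com/a/b.
    print(resp.geturl())   # final URL after the redirect (the backend host)
    print(resp.read())
```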