
Sorry, I'm a beginner in load balancing.


In distributed environments we tend more and more to send the computation (map/reduce) to the data, so that the result is computed locally and then aggregated.

What I'd like to do applies to partitioned/distributed data, not replicated data. Following the same kind of principle, I'd like to be able to send a user request to the server where that user's data is cached.


When using an embedded cache or data grid to get low response times, we tend to avoid replication and use distributed/partitioned caches when the dataset is large.

The partitioning algorithms are generally hash-based and permit replicas, to handle server failures.

So in the end, a given user's data is generally hosted on something like 3 servers (1 primary copy and 2 replicas).

On a local cache miss, caches are generally able to fetch the entry from other cache peers. This works fine but requires a network access. I'd like to have a load balancing strategy that avoids this unnecessary network call.


What I'd like to know: is it possible to have a load balancer that is aware of the partitioning mechanism of the cache, so that it always forwards to one of the web servers holding a local copy of the data we need?

For example, I have a request www.mywebsite.com/user=387. The load balancer would check the 387 user id, know that this user's data is stored on servers 1, 6 and 12, and could thus round-robin to one of them (or use another strategy).
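To make the idea concrete, here is a rough sketch of the lookup such a balancer would perform, assuming a simple scheme where the primary owner is the hash of the key modulo the number of servers and the replicas are the next servers in ring order. Real data grids use vendor-specific placement, so the function and the scheme here are purely illustrative:

```python
import hashlib

def partition_servers(user_id, servers, replicas=3):
    """Return the servers expected to hold user_id's data locally.

    Illustrative placement only: the primary partition is chosen by
    hashing the key, and the (replicas - 1) following servers in ring
    order hold the backup copies. A real cache's partition table
    would be queried instead of recomputed like this.
    """
    h = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16)
    start = h % len(servers)
    # Primary plus replicas, wrapping around the server list.
    return [servers[(start + i) % len(servers)] for i in range(replicas)]

servers = [f"web{i}" for i in range(1, 13)]  # 12 hypothetical front ends
print(partition_servers(387, servers))       # 3 candidate servers
```

The balancer would then round-robin (or least-connections, etc.) among only those 3 candidates instead of all 12 servers.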


If there's no generic solution, are there open-source or commercial, software or hardware load balancers that permit defining custom routing strategies?

How much will extracting data from a request slow down the load balancer? What's the cost of extracting a URL parameter (like user=387 in my example) and applying some rules to pick the right web server, compared to a round-robin strategy, for example?
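For a rough sense of the parsing work per request, here is what extracting the id could look like, assuming it arrives as an ordinary query-string parameter (e.g. ?user=387); this is cheap string work compared to the network hop it saves:

```python
from urllib.parse import urlparse, parse_qs

def extract_user_id(url):
    # Parse the query string and pull out the 'user' parameter;
    # returns None when the parameter is absent.
    params = parse_qs(urlparse(url).query)
    values = params.get("user")
    return int(values[0]) if values else None

print(extract_user_id("http://www.mywebsite.com/?user=387"))  # 387
```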

Is there an abstraction library on top of cache vendors, so that we can easily retrieve the partitioning data and make it available to the load balancer?

Thanks!

Sebastien Lorber
  • I assume you've not seen what Oracle Coherence does? Provided you're doing key-based look-ups, it will go directly to the partition that contains the value. Pretty sure Hazelcast does the same. – Nick Holt Nov 28 '13 at 14:17
  • @NickHolt I think you misunderstand my point. I'm not talking about client/server mode (all of them route the request to the right server). I'd like the reverse proxy to be able to route the request directly to the right frontend with the embedded cache, so that the frontend can get the value directly from memory without any communication with another server. – Sebastien Lorber Nov 28 '13 at 20:50
  • I think you need to look at near and local caching, as it's known in Coherence. Again, I'm pretty sure Hazelcast offers similar behaviour. – Nick Holt Dec 02 '13 at 10:00
  • Again, it's not what I'm looking for. The front cache of Coherence will contain duplicated data, while I'm looking for a front cache that could be partitioned with no replicas and still avoid a lot of cache misses. – Sebastien Lorber Dec 02 '13 at 10:21
  • But isn't there the danger that the process containing the front cache goes down and the data is then lost? – Nick Holt Dec 08 '13 at 11:15
  • Yes @NickHolt, it was just an example. Imagine you have 100 front servers, you ask for data with userId=1, and this data is on 3 front servers (1 master and 2 replicas). Then the LB could know that when a user queries for user=1, the request should be routed to one of these 3 servers, instead of routing to a random server (round robin) that will then route to one of the 3. This avoids 1 network hop, and if a front server goes down, the LB will update its strategy (so potentially there could be 2 network hops on some requests for a short time...) – Sebastien Lorber Dec 11 '13 at 13:52

1 Answer


Interesting question. I don't think there is a readily available solution for your requirements, but it would be pretty easy to build if your hashing criterion is relatively simple and depends only on the request (a URL parameter, as in your example).

If I were building this, I would use Varnish (http://varnish-cache.org), but you could do the same in other reverse proxies.
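A minimal sketch of what this could look like in Varnish VCL, using a round-robin director per partition. The backends and the hard-coded rule for user 387 are hypothetical and purely for illustration; in practice the routing table would be generated from the cache's partition metadata and reloaded when the topology changes:

```vcl
vcl 4.0;

import directors;

# Hypothetical front-end servers holding user 387's partition.
backend web1  { .host = "10.0.0.1";  .port = "8080"; }
backend web6  { .host = "10.0.0.6";  .port = "8080"; }
backend web12 { .host = "10.0.0.12"; .port = "8080"; }

sub vcl_init {
    # Group the owners of this partition; Varnish round-robins among them.
    new partition_387 = directors.round_robin();
    partition_387.add_backend(web1);
    partition_387.add_backend(web6);
    partition_387.add_backend(web12);
}

sub vcl_recv {
    # Route requests for user 387 to the servers holding that partition.
    if (req.url ~ "user=387") {
        set req.backend_hint = partition_387.backend();
    }
}
```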

Andrea Campi