
Can I use twemproxy as a load balancer for a pool of Redis instances, balancing according to the size of each Redis queue (the number of keys per instance)? Is twemproxy able to remove one of the Redis instances from the upstream if it reaches a preconfigured maximum number of keys in its database?

If so, how can I do that? (I'm very new to Redis, so I may just be missing it in the documentation.)

Thanks in advance.

d.ansimov

1 Answer


No, it is not possible to load-balance based on the exact number of keys or the queue size per instance. Sharding is calculated from the key name (see the `hash`, `hash_tag` and `distribution` settings). Usually you should get a fairly even distribution, but with some bad luck a few shards may end up with considerably more keys than others.
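
For reference, a minimal twemproxy (nutcracker) pool definition might look roughly like this; the pool name, addresses and weights are made-up example values, and `hash`, `hash_tag` and `distribution` are the settings mentioned above:

```yaml
beta:                          # example pool name
  listen: 127.0.0.1:22121
  redis: true
  hash: fnv1a_64               # hash function applied to the key name
  hash_tag: "{}"               # only the part of the key inside {} is hashed
  distribution: ketama         # how hash values are mapped onto servers
  auto_eject_hosts: true       # ejects servers on failure, not on key count
  server_retry_timeout: 30000
  server_failure_limit: 3
  servers:                     # host:port:weight
    - 127.0.0.1:6379:1
    - 127.0.0.1:6380:1
```

Note that `auto_eject_hosts` only ejects a server when it stops responding; there is no setting that reacts to the number of keys stored on an instance.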

udondan
  • Thank you, mate, I get your point. In my case there's a Java app that serves every Redis instance, pulling out queued data; if it crashes, the queue overfills pretty fast and Redis goes down from lack of RAM, losing my data... So it's pretty important for me that the balancer works exactly the way I described. Do you have any suggestions for how I can get this working? – d.ansimov Apr 26 '16 at 09:26
  • I'm not sure I see how limiting the number of keys per queue would help you. If I understand you correctly, you use Redis to store some kind of jobs that are processed by your Java application. Let's assume you had an option to limit the queue size per instance. If you have two instances behind the proxy and one reaches the defined limit, the proxy would start forwarding to the other instance, which would then soon reach the limit as well. How is this better or different from distributing the keys more or less evenly over both instances? – udondan Apr 26 '16 at 10:00
  • The key to avoiding data loss is redundancy (replication) and running Redis with persistence (writing data to disk). If your concern is that your hosts are running out of RAM, [virtual memory](http://redis.io/topics/virtual-memory) might be interesting for you (though it is deprecated). – udondan Apr 26 '16 at 10:01
  • But to answer your question: no, I don't know how you could achieve this without writing your own proxy application (a rough sketch of such application-side routing follows after these comments). Neither twemproxy nor Redis Cluster has this feature implemented, as far as I know. – udondan Apr 26 '16 at 10:04
  • I mean that while one of four Redis instances has reached the queue size limit and is removed from the upstream, its load would be distributed over the other three instances, so the bad instance would have time to get back into the upstream (cron restarts the Java app, the heavy data gets processed, etc.). This is how limiting the number of keys would help me. – d.ansimov Apr 26 '16 at 12:33
  • Also, I've already tried writing data to disk, and it seems to create high load on my instances, which is no good for me because the Java app runs there too :'( – d.ansimov Apr 26 '16 at 12:37
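
Since twemproxy has no such feature, here is a minimal sketch of the application-side routing mentioned in the comments, assuming the Jedis client. The key limit, instance list and class name are hypothetical example values, and consumers/reads are ignored for simplicity:

```java
import redis.clients.jedis.Jedis;
import java.util.List;

// Hypothetical application-side router: before enqueuing a job it asks each
// Redis instance for its current key count (DBSIZE) and skips any instance
// that is above a configured limit.
public class QueueAwareRouter {

    private static final long MAX_KEYS_PER_INSTANCE = 100_000; // example limit

    private final List<Jedis> instances;

    public QueueAwareRouter(List<Jedis> instances) {
        this.instances = instances;
    }

    /** Push the payload to the least-loaded instance that is still under the limit. */
    public void enqueue(String queueKey, String payload) {
        Jedis target = null;
        long smallest = Long.MAX_VALUE;
        for (Jedis instance : instances) {
            long keys = instance.dbSize();           // total keys on this instance
            if (keys < MAX_KEYS_PER_INSTANCE && keys < smallest) {
                smallest = keys;
                target = instance;
            }
        }
        if (target == null) {
            throw new IllegalStateException("All Redis instances are over the key limit");
        }
        target.lpush(queueKey, payload);             // enqueue on the chosen instance
    }
}
```

`DBSIZE` is O(1) in Redis, so the extra round trip per enqueue is cheap; if only a single queue key matters per instance, `LLEN` on that key would work just as well.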