
background

I wrote a Redis client (with cluster support), and an issue was raised asking for support for a distributed lock built on a Redis cluster.

I've read the post describing the Redlock algorithm and the related debate.

problems

It is impossible to have one key hash to different nodes in a Redis cluster, and it is hard to generate keys by a specific rule while guaranteeing they will not be migrated within the cluster. In the worst case, all of the key slots may live on a single node, in which case availability is the same as keeping one key on one node.

my algorithm

My solution is to take advantage of the READONLY mode of slaves to ensure the lock key has been synced from the master to N/2 + 1 of its slaves, to avoid the fail-over problem. Since it is a single-key solution, the migration problem also does not matter. (A rough sketch of these steps is given after the list below.)

  1. Use a random token + SETNX + an expire time to acquire the lock on the cluster master node.
  2. If the lock is acquired successfully, check the lock key on the slave nodes (say there are N of them) using READONLY mode; if N/2 + 1 of them have it synced, stop checking and return True.
  3. Use the Lua script described in the Redlock algorithm to release the lock from the client holding the randomly generated token; if that client crashes, wait until the lock key expires.
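To make the steps concrete, here is a minimal Python sketch using redis-py. It assumes direct connections to the master and to each of its slaves (created with `decode_responses=True` so GET returns strings), and the function and variable names are placeholders for illustration, not my actual client API:

```python
import uuid

import redis

# Redlock-style release script: delete the key only if it still holds our token.
RELEASE_SCRIPT = """
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
"""


def acquire_lock(master, slaves, key, ttl_ms):
    """Step 1: SET key token NX PX ttl on the master node.
    Step 2: read the key back from the slaves in READONLY mode and
    require N/2 + 1 of them to have it before reporting success."""
    token = uuid.uuid4().hex
    if not master.set(key, token, nx=True, px=ttl_ms):
        return None  # another client already holds the lock

    needed = len(slaves) // 2 + 1
    acked = 0
    for slave in slaves:
        # READONLY lets a cluster replica serve reads for its master's slots.
        slave.execute_command("READONLY")
        # Replication is asynchronous, so a real implementation would retry
        # each slave within a short deadline instead of reading only once.
        if slave.get(key) == token:
            acked += 1
            if acked >= needed:
                return token  # a majority of the slaves have the key

    # Too few slaves saw the key: release on the master rather than hold it unsafely.
    master.eval(RELEASE_SCRIPT, 1, key, token)
    return None


def release_lock(master, key, token):
    """Step 3: release via the Lua script so only the token holder can delete."""
    return master.eval(RELEASE_SCRIPT, 1, key, token)
```

The releasing of a half-replicated lock in the failure branch is deliberate: if the quorum of slaves cannot be confirmed, the client should not keep the lock, for the same reason Redlock gives up when it cannot reach a majority of masters.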

Could you please take a look and tell me whether the algorithm is wrong? I have thought through several corner cases, but I am still not sure.

  • See this answer regarding how to route the same key to all shards reliably. https://stackoverflow.com/questions/46294925/is-there-a-way-to-make-a-specific-key-locate-on-a-specific-redis-instance-in-clu/46295533#46295533 – Not_a_Golfer Sep 27 '17 at 07:43
  • @Not_a_Golfer That's really cool! I notice that you were facing a similar problem of using a Redis cluster to build a distributed lock, but how do you handle the slot-migration problem? – Chen MIng Sep 27 '17 at 10:03
  • You generate the per-shard keys for the lock based on the current topology. You could store the sub keys of the lock as the value of each of them, or under a "master" key that you can check before generating the rest of the keys; this will prevent inconsistencies during resharding. But once the topology has settled and the lock is released, the next acquisition will generate a new set of per-shard keys. – Not_a_Golfer Sep 27 '17 at 10:26
  • @Not_a_Golfer Your solution seems good, but I still wonder: while client1 holds the lock and is checking the lock's sub keys in a loop, could client2 manage to acquire the lock if most of the slots have been migrated and the sub keys have not been reset yet? The CRC16 table lives in the client and the check-and-set action is not atomic (this may be an unnecessary worry, since the probability of the problem seems to drop as more nodes and sub keys are added). – Chen MIng Sep 27 '17 at 11:13
  • @Not_a_Golfer Your solution is really good, sincerely. I am just a little confused about the `check-reset` part of the sub keys. – Chen MIng Sep 27 '17 at 11:17
  • To be honest, I don't think it's a good idea to do this thing in a cluster. Redlock is designed for multiple masters, and running it on a cluster creates several problems with a pretty complex solution. – Not_a_Golfer Sep 27 '17 at 11:32
  • @Not_a_Golfer I totally agree with you; in my daily work the distributed lock is built with ZooKeeper rather than Redis. I think the main idea of Redlock is the quorum, so I tried to reuse it in the algorithm described above. – Chen MIng Sep 27 '17 at 13:32

0 Answers