
I'm currently using a Redis master-slave configuration with HAProxy for WordPress to get high availability. This setup is nice and works perfectly (I'm able to take any server down for maintenance without downtime). The problem with this configuration is that only one Redis server receives all the traffic while the others just wait in case that server dies, so on a very high-load site this can become a problem, and adding more servers is not a solution because only one of them will ever be master.

With this in mind, I'm wondering whether I could use a Redis Cluster instead, to allow reads/writes on all nodes, but I'm not sure it will work with my setup.

My setup is limited to three nodes most of the time, and I've read in some places that the minimal Redis Cluster setup is three nodes, with six recommended. That makes sense, because it allows each master to have a slave that becomes master if its master dies, so no data is lost. But what happens if the data doesn't matter? In my setups the data is just cached objects, so if something doesn't exist it is simply created again. So which of these happens:

  • The data is lost (which I don't care about), and the remaining nodes receive the objects from clients again and serve them on later requests (as happens after a flush).
  • The nodes answer that the data doesn't exist but refuse to cache it, because the object is supposed to live on another node that is dead.

Does anyone know?

Thanks!!

1 Answer


When a master dies, the Redis cluster goes into a down state, and any command involving a key served by the failed instance will fail.
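As a side note, whether the whole cluster stops answering or only the affected slots fail is controlled by the `cluster-require-full-coverage` option in `redis.conf` (the comments below are my own summary of its behavior):

```
# Default: the cluster stops accepting queries if any hash slot is
# uncovered (e.g. its master died and no slave failed over).
cluster-require-full-coverage yes

# Set to "no" to keep serving the slots that are still covered;
# only commands hitting keys in the missing slots will fail.
# cluster-require-full-coverage no
```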

This may differ from some other distributed software, because Redis Cluster is not the kind of system where every master holds all the data. In fact, the key space is horizontally partitioned, and each key is served by exactly one master.

This is mentioned in the specification:

The key space is split into 16384 slots... a single hash slot will be served by a single node...

The base algorithm used to map keys to hash slots is the following:

HASH_SLOT = CRC16(key) mod 16384

When you set up a cluster, you assign each node a set of slots, and each slot can be served by only one node. If a node dies, you lose the slots on that node unless a slave fails over to serve them, so any command involving keys mapped to those slots will fail.
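As a rough sketch, the slot mapping can be reproduced in a few lines. Redis Cluster uses the CRC-16/XMODEM variant (polynomial 0x1021, initial value 0x0000); the helper names below are my own:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM, the CRC16 variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Map a key to one of the 16384 hash slots: CRC16(key) mod 16384."""
    return crc16_xmodem(key) % 16384

# The spec's reference value: CRC16 of "123456789" is 0x31C3,
# so that key maps to slot 12739.
print(hash_slot(b"123456789"))  # 12739
```

Note this implements only the base formula; real deployments also honor "hash tags" (a `{...}` substring in the key), which the specification describes separately.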

neuront
  • Hello, thanks for your response, but I've already read that and I know it; that's why slaves exist. I'm asking whether the information on a dead node can be recreated on the alive nodes (it's a cache: if something doesn't exist, it's created) or whether the command will fail. For example: node1: A-D, node2: E-H, node3: I-L. Node2 dies and data between E and H is unavailable (as expected). A client tries to read F and fails, then tries to create F again. Will F be created on another alive node, or will the write command fail too because F is supposed to live on node2? Anyway, I'll try to test it. – Daniel Carrasco Aug 22 '18 at 14:32