
Consider a simple setup of 3 servers: 1 master and 2 slaves, with no sharding. Is there a proven solution in Java with Jedis that has no single point of failure and will automatically deal with a single server going down, be that master or slave (automated failover)? E.g. promoting a slave to master and resynchronizing after the failure, without any lost data.

It seems to me like it should be a solved problem, but I can't find any code on it, just high-level descriptions of possible ways to do it.

Who actually has this covered and working in production?

Derek Organ

2 Answers


You may want to give Redis Sentinel a try to achieve that:

Redis Sentinel is a system designed to help manage Redis instances. It performs the following three tasks:

  • Monitoring. Sentinel constantly checks whether your master and slave instances are working as expected.

  • Notification. Sentinel can notify the system administrator, or another computer program, via an API, that something is wrong with one of the monitored Redis instances.

  • Automatic failover. If a master is not working as expected, Sentinel can start a failover process in which a slave is promoted to master, the other slaves are reconfigured to use the new master, and the applications using the Redis server are informed of the new address to use when connecting.
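
From the client side, later Jedis releases ship a JedisSentinelPool that asks the Sentinels for the current master address and reconnects after a failover. A minimal sketch, where the Sentinel addresses and the master name "mymaster" are assumptions that must match your sentinel.conf:

```java
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelPoolExample {
    public static void main(String[] args) {
        // Addresses of the Sentinel processes, not the Redis servers themselves
        Set<String> sentinels = new HashSet<String>();
        sentinels.add("localhost:26379");
        sentinels.add("localhost:26380");

        // "mymaster" must match the master name configured in sentinel.conf
        JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
        try {
            Jedis jedis = pool.getResource();
            try {
                // The pool resolves the current master via the Sentinels,
                // so after a failover new connections go to the promoted slave
                jedis.set("greeting", "hello");
                System.out.println(jedis.get("greeting"));
            } finally {
                jedis.close();
            }
        } finally {
            pool.destroy();
        }
    }
}
```

Note this only re-resolves the master for you; in-flight commands during the failover window still need your own retry handling.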

... or use an external solution like Zookeeper and Jedis_failover:

JedisPool pool = new JedisPoolBuilder()
    .withFailoverConfiguration(
        "localhost:2838", // ZooKeeper cluster URL
        Arrays.asList( // List of redis servers
            new HostConfiguration("localhost", 7000), 
            new HostConfiguration("localhost", 7001))) 
    .build();

pool.withJedis(new JedisFunction() {
    @Override
    public void execute(final JedisActions jedis) throws Exception {
        jedis.ping();
    }
});

See this presentation of Zookeeper + Redis.

[Update] ... or a pure Java solution with Jedis + Sentinel is to use a wrapper that handles Redis Sentinel events; see SentinelBasedJedisPoolWrapper.
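
One way such a wrapper can work is to subscribe to Sentinel's pub/sub interface: when Sentinel promotes a slave it publishes a +switch-master event whose payload is "<master-name> <old-ip> <old-port> <new-ip> <new-port>". A hedged sketch (the Sentinel address is an assumption; on some Jedis versions you may need to override the other JedisPubSub callbacks as well):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class MasterSwitchWatcher {
    public static void main(String[] args) {
        // Connect to a Sentinel (26379 is the conventional Sentinel port)
        Jedis sentinel = new Jedis("localhost", 26379);
        // subscribe() blocks and invokes onMessage for every published event
        sentinel.subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String message) {
                // Payload: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
                String[] parts = message.split(" ");
                System.out.println("Master " + parts[0] + " moved from "
                        + parts[1] + ":" + parts[2] + " to "
                        + parts[3] + ":" + parts[4]);
                // ...rebuild your JedisPool against parts[3]:parts[4] here
            }
        }, "+switch-master");
    }
}
```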

FGRibreau
  • There doesn't appear to be a piece of drop-in code for Java that I can use with the Sentinels. I get the overriding idea, but what are people actually using, as in literal code, right now in production? – Derek Organ May 03 '13 at 13:04
  • 1
    Managing automated failover of Redis **without Sentinel** will always require an external piece of software. However did you take a look at the FailSafe implementation ? https://github.com/xetorthio/jedis/pull/386 It won't be as good as a real failover but if you are looking for a quick solution it may be a good starting point – FGRibreau May 03 '13 at 13:09
  • 1
    "but what are people actually using as in literal code right now in production?" Zookeeper and now Redis Sentinel, mostly. – FGRibreau May 03 '13 at 13:14
  • But with Redis Sentinel you need code on top of Jedis, for example to know when the master has changed, and I presume lots of retry functionality and checks? – Derek Organ May 03 '13 at 13:16
  • 1
    Here is the code you currently need for working with Sentinel and Jedis: https://github.com/hamsterready/jedis-sentinel-pool/blob/master/src/main/java/pl/quaternion/SentinelBasedJedisPoolWrapper.java – FGRibreau May 03 '13 at 13:21
  • 4
    Alternatively you can run a script when the failover happens, and the script is supposed to reconfigure the clients in some way with the new master address. – antirez May 03 '13 at 14:13
  • Testing the jedis-sentinel-pool and it is working very well in local tests. Going to stash writes in our persistent data store while the failover happens so they can be replayed in a Pipeline. – Derek Organ May 03 '13 at 18:09
  • Don't forget to validate the answer if Jedis-sentinel-pool was what you were looking for :) – FGRibreau May 05 '13 at 18:04

Currently using Jedis 2.4.2 (from git), I didn't find a way to do a failover based only on Redis or Sentinel. I hope there will be a way. I am thinking of exploring the Zookeeper option right now. Redis Cluster works well in terms of performance and even stability, but it is still in beta.

If anyone has better insight let us know.

Dikla
Guy Lubovitch