
We have a Redis configuration with two Redis servers. We also have 3 sentinels that monitor the two instances and initiate a failover when needed.
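For context, the applications connect through ServiceStack.Redis's RedisSentinel, roughly like the sketch below (host names and the master name are placeholders, not our real configuration):

using ServiceStack.Redis;

// Point the client at the 3 sentinels; it resolves the current master/slaves itself.
var sentinelHosts = new[] { "sentinel1:26379", "sentinel2:26379", "sentinel3:26379" };
var sentinel = new RedisSentinel(sentinelHosts, masterName: "mymaster");

// Start() returns the IRedisClientsManager the apps use to get clients.
IRedisClientsManager redisManager = sentinel.Start();

using (var redis = redisManager.GetClient())
{
    redis.SetValue("key", "value");
}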

We get the following issue intermittently from some of our applications:

ServiceStack.Redis.RedisException: No Redis Sentinels were available ---> ServiceStack.Redis.RedisException: Unable to Connect: sPort: 0

The Unable to Connect: sPort: 0 portion may indicate that it is a ConnectTimeout issue (per this question: ServiceStack.Redis: Unable to Connect: sPort: 0). However, I'm less confident that this is the problem since it says "No Redis Sentinels were available".
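For reference, this is roughly what raising the connect timeout would look like on our side, using the sentinel instance from the snippet above (sketch only; the HostFilter hook and the ConnectTimeout connection-string parameter are taken from the ServiceStack.Redis docs, so treat them as assumptions):

// Append a longer connect timeout (in ms) to every host the sentinel resolves.
sentinel.HostFilter = host => string.Format("{0}?ConnectTimeout=5000", host);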

While we get this issue intermittently on some applications, there are others (e.g. some console apps we wrote) that seem to be getting the issue consistently.

Can anyone shed light on what this issue is and how to solve it? If you Google "No Redis Sentinels were available", the only result is the ServiceStack.Redis GitHub page containing the actual code that outputs this message.

jakejgordon
  • I can't say for certain whether it was related, but we updated from version 4.0.44 to version 4.0.46 and haven't had the problem since. – jakejgordon Oct 15 '15 at 15:14

1 Answer


That error message is thrown after the RedisSentinel Worker exceeds RedisSentinel.MaxFailures (default: 5) consecutive errors when trying to connect to one of the available Sentinels.

The Redis Client needs to be able to connect to one of the available Sentinels to discover the available masters and slaves, and to be notified when the master is no longer responsive and gets failed over.

You can increase the RedisSentinel.MaxFailures count to have it continue cycling through and connecting to the available Redis Sentinels. I've also added a commit to reset the failure count whenever it's able to connect to a valid Sentinel (so only consecutive errors are counted against MaxFailures); this change is available from v4.0.47, which is now available on MyGet.
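For example (a sketch based on the description above; check whether MaxFailures is a static or per-instance setting in the version you're running):

// Tolerate more consecutive Sentinel connection failures before giving up (default: 5).
RedisSentinel.MaxFailures = 100;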

Print Snapshot of Redis Client Stats

To get a better idea of the health of your Redis connectivity, you can dump a snapshot of the internal RedisStats showing the activity and health of the client connections with:

RedisStats.ToDictionary().PrintDump();

Enable Debug Logging

You can enable debug logging to see more error details in your preferred Logging provider with:

LogManager.LogFactory = new ConsoleLogFactory(debugEnabled:true);

But this also emits the Redis commands, which may be too verbose; you can suppress them with:

RedisConfig.DisableVerboseLogging = true;

Handle Error Callbacks

The RedisSentinel also provides a number of hooks for handling custom events, e.g. you can handle when the Sentinel Worker is unable to connect by assigning the OnWorkerError delegate:

var sentinel = new RedisSentinel(sentinelHost, masterName)
{
    OnWorkerError = ex =>
    {
        "Worker error: {0}".Print(ex);
    },
};
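The other hooks can be assigned in the same initializer, e.g. (callback names as listed in the RedisSentinel docs; treat the exact delegate signatures as indicative):

var sentinel = new RedisSentinel(sentinelHost, masterName)
{
    // Called when the Redis Managers are failed over to new hosts.
    OnFailover = manager => "Redis Managers Failed Over to new hosts".Print(),
    // Called when the Sentinel Worker can't connect.
    OnWorkerError = ex => "Worker error: {0}".Print(ex),
    // Called for each message received from a Sentinel.
    OnSentinelMessageReceived = (channel, msg) =>
        "Received '{0}' on channel '{1}' from Sentinel".Print(msg, channel),
};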
mythz
  • The logging was indeed very interesting and we'll use it again in the future. Since updating to version 4.0.46 we haven't had the problem since -- but I don't know that it is anything other than a coincidence. – jakejgordon Oct 15 '15 at 15:15
  • For the record, we are definitely still getting this issue frequently -- despite my previous comment. If I understand your comment correctly, MaxFailures was a static total that persisted across numerous connection attempts and didn't necessarily represent 5 attempts from one connection? If so, that could explain the issue since we have so many connections happening we are bound to have a failed attempt here and there. So we should see an error message every 5 failed attempts? Also, when is 4.0.47 coming out? We'd rather not switch to the MyGet feed at this time. Thanks! – jakejgordon Oct 28 '15 at 19:40
  • @jakejgordon we're at least 2-3 weeks from our next official v4.0.48 release on NuGet. If you don't want to use the interim pre-release NuGet packages you can set MaxFailures to a high number that you won't reach. – mythz Oct 29 '15 at 02:29