I have a service that depends on a Redis cache running as a separate service. I have added probes to make sure my service doesn't come up before Redis is up. However, there are times when Redis gets restarted, and when this happens the data is no longer available and my service stops working because the cache is empty. We need to restart the service, which reloads the data. While we work on enabling persistence for Redis, is it possible to add a liveness probe to my service that checks that the Redis pod is up and has the required data, and if not, restarts my service?
-
Have you tried to persist your redis data to a persistent volume? This should solve your issue. – Rakesh Gupta Apr 09 '23 at 14:18
-
Yes, we are looking to do that. Outside of that, I want to add this dependency, so I'm trying to figure out a way to do it. Thank you – sg1973 Apr 09 '23 at 14:27
-
I would not try to rely on probes here. If your application can't tolerate the cache being unavailable and can't attempt to reconnect itself, it should just exit, and Kubernetes will try to restart it. It won't specifically be aware of the dependency, but there is reasonable back-off behavior. – David Maze Apr 09 '23 at 14:50
-
Thanks. Sorry, I should have been clearer. I want the probe to check Redis availability and the data inside it, and if not available, just fail so Kubernetes will restart it – sg1973 Apr 09 '23 at 14:57
-
Then the only other option I can think of is to add an initContainer to your Redis pod that creates all the cache data at startup. This initContainer can invoke an API implemented by your other service and populate the Redis cache. But I think what you are trying to do is an anti-pattern – Rakesh Gupta Apr 09 '23 at 15:47
-
Thanks. Are you saying Redis reaching out to the service to load data is an anti-pattern? – sg1973 Apr 10 '23 at 16:36
1 Answer
You could run a sidecar container in your service pod that is a basic webserver (apache, httpd, and nginx can all do this), and all that webserver does is proxy to the Redis cluster, so the Kubernetes health check can hit something like `yourservice:1234/redis_health`, which proxies the request to Redis. If Redis is down, the webserver returns a `502 Bad Gateway`.
The cluster would then restart the service pod based on the nature and frequency of the failed checks, because it thinks the service pod is down, not Redis.
As long as you don't expose the webserver via a Kubernetes Service, this won't harm the functionality of your service.
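Roughly, the wiring could look like the sketch below. The container names, the nginx image, port 1234, and the `/redis_health` path are illustrative assumptions, and the sidecar's own config (how it actually checks Redis behind that path) is omitted:

```yaml
# Illustrative sketch only -- names, image, port, and path are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
    - name: myservice
      image: myservice:latest           # your application container (assumed name)
      livenessProbe:
        httpGet:
          path: /redis_health           # served by the sidecar below
          port: 1234
        periodSeconds: 10
        failureThreshold: 3             # restart after 3 consecutive failures
    - name: redis-health-proxy          # the sidecar webserver (nginx/httpd/apache)
      image: nginx:stable
      ports:
        - containerPort: 1234           # never exposed through a Service
```

Because containers in a pod share the network namespace, the probe on the main container can target the sidecar's port; when the probe keeps failing, the kubelet restarts the service container.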
Another possible solution is to use the Redis client `redis-cli` within your service pod as part of the `livenessProbe` -- in other words, you use the Redis service as the indicator of whether your service is healthy, with the caveat that if Redis dies, your service pods will be continuously killed and restarted.
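A minimal sketch of that probe, assuming the Redis Service is reachable as `redis-cache`, that your service image ships `redis-cli`, and that a sentinel key such as `cache:loaded` marks the data as loaded (all of these are assumptions, not part of your setup):

```yaml
# Sketch only -- the redis-cache hostname, the cache:loaded key, and the
# timings are assumptions; the service image must ship sh and redis-cli.
containers:
  - name: myservice
    image: myservice:latest
    livenessProbe:
      exec:
        command:
          - sh
          - -c
          # Probe fails unless Redis answers and the expected key is present
          # (EXISTS prints 1 when the key exists).
          - 'test "$(redis-cli -h redis-cache EXISTS cache:loaded)" = "1"'
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
```

With an `exec` probe like this, the kubelet runs the command inside your service container on every check, and three consecutive failures trigger a restart of that container.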

-
Thanks. I am wondering how to do the 2nd option you mentioned, using redis-cli as part of the livenessProbe, which will help handle this. If Redis dies, I want the service pods not to come up either – sg1973 Apr 10 '23 at 16:41
-
For that you could use an `initContainer` (https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) that waits for the Redis service to be ready -- so the main container doesn't start until that is the case, and then the pod's livenessProbe keeps checking Redis to make sure it stays healthy. – Blender Fox Apr 10 '23 at 16:51
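A rough sketch of that wait, assuming the Redis Service is named `redis-cache` and that pulling an image that ships `redis-cli` is acceptable (both are assumptions):

```yaml
# Sketch only -- the redis-cache Service name and the redis:7 image are assumptions.
initContainers:
  - name: wait-for-redis
    image: redis:7        # any image that ships redis-cli works here
    command:
      - sh
      - -c
      # Block until Redis answers PING with PONG, then let the main container start.
      - 'until [ "$(redis-cli -h redis-cache PING)" = "PONG" ]; do echo "waiting for redis"; sleep 2; done'
```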