
Here's the example I've modeled my setup after.

In the Readme's "Delete our manual pod" section:

  1. The redis sentinels themselves realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master.

How do I select the new master? All 3 Redis server pods controlled by the redis replication controller from redis-controller.yaml still have the same

labels:
  name: redis

which is what I currently use in my Service to select them. How will the 3 pods be distinguishable so that from Kubernetes I know which one is the master?
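
For reference, listing the pods shows the same label set on all three, so a selector alone cannot tell them apart (the name=redis label is the one from redis-controller.yaml; pod names are whatever the controller generated):

# Every pod from the controller carries the identical name=redis label
kubectl get pods -l name=redis --show-labels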

writofmandamus

3 Answers


How will the 3 pods be distinguishable so that from Kubernetes I know which one is the master?

Kubernetes isn't aware of which pod is the Redis master. You can find it manually by connecting to each pod and running:

redis-cli info

You will get a lot of information about the server, but we only need the role for our purpose:

redis-cli info | grep ^role
Output:
role:master
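
If you want to check all pods from outside rather than connecting to each one, here is a minimal sketch (assuming the pods carry the name=redis label from the tutorial and run in the current namespace):

# Print each pod's replication role (role:master or role:slave)
for pod in $(kubectl get pods -l name=redis -o jsonpath='{.items[*].metadata.name}'); do
  echo -n "$pod: "
  kubectl exec "$pod" -- redis-cli info replication | grep ^role
done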

Please note that ReplicationControllers have been superseded by Deployments for stateless services. For stateful services, use StatefulSets.
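
For example, a minimal StatefulSet for the Redis servers could look roughly like this (a sketch only; the names, image, and replica count are placeholders, not taken from the tutorial):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis      # headless Service giving each pod a stable network identity
  replicas: 3
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
EOF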

Farhad Farahi
  • 35,528
  • 7
  • 73
  • 70
  • Thank you. To avoid downtime, would I need to somehow configure Sentinel (or find some kind of hook) to automatically label the new master pod (perhaps by adding a `role: master` label in addition to the current `name: redis` label) and create another Service that selects on both `name: redis` and `role: master`? Otherwise, my application cannot make write requests until I am notified of the Redis server failure and manually go in to find the master. – writofmandamus Mar 16 '17 at 15:52
  • You don't have to do that; Deployments/ReplicaSets will automatically start another instance of the underlying pod on pod failure (exit/node failure). To enhance this, you can use liveness and readiness probes to check the health of the microservice (the service in the container) at certain intervals, and if the container is unhealthy, it will be restarted automatically. – Farhad Farahi Mar 16 '17 at 15:59
  • Sorry, I don't understand. Right now my replication controller (soon to be a StatefulSet/Deployment) does indeed automatically start another instance (as described in the tutorial). However, the problem is (going back to my original question) that Kubernetes wouldn't know which one (the new pod or one of the existing replicas) is the master. You suggested that I manually go inside each container and use redis-cli to check the roles. So my follow-up question is: instead of doing this manually, how can it be done automatically (e.g. so that my Service can still accurately select the master pod)? – writofmandamus Mar 16 '17 at 17:28

Your Redis client library can actually handle this. For example, with ioredis:

ioredis guarantees that the node you connected to is always a master even after a failover.

So you actually point the client at the Redis Sentinels instead of at the Redis server directly.
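
Under the hood, the client library just asks the Sentinels where the current master is, which you can also do by hand (26379 is the default Sentinel port, <sentinel-host> is a placeholder for one of your Sentinel pods or their Service, and mymaster is the conventional master name from sentinel.conf):

# Prints the IP and port of the current master as seen by this Sentinel
redis-cli -h <sentinel-host> -p 26379 SENTINEL get-master-addr-by-name mymaster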

writofmandamus

We needed to do the same thing and tried different approaches, like modifying the chart. Finally, we just created a simple Python Docker image that does the labeling, and a chart that exposes the master Redis as a Service. It periodically checks the pods created for redis-ha and labels them according to their role (master/slave).

It uses the same sentinel commands to find the master/slave.
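
The idea is roughly the following sketch (not the chart's actual code; the redis-sentinel service name, the mymaster master name, and the name=redis / role labels are assumptions you would adapt):

# Ask a Sentinel for the current master's IP
master_ip=$(redis-cli -h redis-sentinel -p 26379 \
  SENTINEL get-master-addr-by-name mymaster | head -n 1)

# Label each Redis pod according to whether its IP matches the master's
for pod in $(kubectl get pods -l name=redis -o jsonpath='{.items[*].metadata.name}'); do
  pod_ip=$(kubectl get pod "$pod" -o jsonpath='{.status.podIP}')
  if [ "$pod_ip" = "$master_ip" ]; then
    kubectl label pod "$pod" role=master --overwrite
  else
    kubectl label pod "$pod" role=slave --overwrite
  fi
done

A Service whose selector includes both name: redis and role: master then always resolves to the current master.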

Helm chart: redis-pod-labeler; see the source repo.

Manoj Prasanna