I've used the postgres Helm chart to install a small "cluster" on an on-premises Kubernetes cluster. The installation went smoothly: we have a master instance and two slaves to which the data is replicated (the single restart seen below is fine; it was triggered manually for testing).
prod-postgres-postgresql-master-0 2/2 Running 0 15h
prod-postgres-postgresql-slave-0 1/1 Running 0 16h
prod-postgres-postgresql-slave-1 1/1 Running 1 9d
These pods came with their respective services (I am using a NodePort since there is no cloud provider to add an external IP to a LoadBalancer):
prod-postgres-postgresql NodePort 10.96.119.67 <none> 5432:31920/TCP 9d
prod-postgres-postgresql-headless ClusterIP None <none> 5432/TCP 9d
prod-postgres-postgresql-metrics ClusterIP 10.106.163.49 <none> 9187/TCP 9d
prod-postgres-postgresql-read ClusterIP 10.97.58.56 <none> 5432/TCP 9d
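For completeness, this is roughly how I currently reach the database from outside the cluster through the NodePort. It's only a sketch: the node IP, the postgres user, the password, and the database name are placeholders for my environment.

```python
# Sketch: connect from outside the cluster via the NodePort.
# <node-ip> and <password> are placeholders, not real values.
import psycopg2

conn = psycopg2.connect(
    host="<node-ip>",       # any worker node's address
    port=31920,             # the NodePort mapped to 5432 above
    user="postgres",
    password="<password>",
    dbname="postgres",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```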
The values used for the installation are the same as the production values in the repo, with the small changes of the password and the storage class (for which I manually provided the needed PVs).
My question:
How do I now use this DB deployment for reading from all postgres nodes?
I understand that:
- only the master accepts writes
- this Helm chart does not offer any form of failover once the master is dead (say the pod is stuck in CrashLoopBackOff for some reason).
- the master has its own service:
prod-postgres-postgresql
- the slaves have their own service:
prod-postgres-postgresql-read
Since the services are different, how can I tell my app it's allowed to read from more than just the master?
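What I have in mind on the application side is roughly the sketch below. It assumes a Python app using psycopg2, and the user, password, database, and table are placeholders, so treat it as an illustration of the question rather than working code.

```python
# Sketch: split connections between the primary service (writes) and the
# *-read service (reads). Credentials and names are placeholders.
import psycopg2

# Writes (and reads that must see the latest data) go to the master's service.
write_conn = psycopg2.connect(
    host="prod-postgres-postgresql",       # master-only service
    port=5432, user="postgres", password="<password>", dbname="mydb",
)

# Read-only queries go to the *-read service, which I assume load-balances
# across the slave pods.
read_conn = psycopg2.connect(
    host="prod-postgres-postgresql-read",  # slaves-only service
    port=5432, user="postgres", password="<password>", dbname="mydb",
)

with read_conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM some_table;")  # hypothetical table
    print(cur.fetchone())
```

Is splitting connections between the two services like this the intended way to use the chart?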
If this is not supported, then what is the "point" of this Helm chart? Combined with the lack of failover, the slaves seem pointless.