
I have installed PostgreSQL and MicroK8s on Ubuntu 18.04.
One of my microservices, which runs inside the single-node MicroK8s cluster, needs to access the PostgreSQL instance installed on the same VM.
Some articles suggest that I should create a service.yml and an endpoint.yml like this:

apiVersion: v1
kind: Service
metadata:
 name: postgresql
spec:
 type: ClusterIP
 ports:
 - port: 5432
   targetPort: 5432

---

kind: Endpoints
apiVersion: v1
metadata:
 name: postgresql
subsets:
 - addresses:
     - ip: ?????
   ports:
     - port: 5432

Now I don't understand: what should I put in the subsets.addresses.ip field?

Bhushan
  • How have you deployed your **Postgresql**? Could you share your `Deployment` or maybe `StatefulSet` definition `yaml`? – mario Mar 11 '20 at 13:11
  • I installed Postgresql using apt-get install – Bhushan Mar 11 '20 at 16:18
  • Ahh, sorry, I haven't read carefully enough all the details :) You actually mentioned that **microk8s cluster** and **postgres** are installed on the same vm. As to your question, It can be done pretty easily. I'll post my answer soon. – mario Mar 11 '20 at 17:59

1 Answer


First you need to configure your PostgreSQL instance to listen not only on your VM's localhost. Let's assume you have a network interface with IP address 10.1.2.3 configured on the node on which your PostgreSQL instance is installed.

Add the following entry in your /etc/postgresql/10/main/postgresql.conf:

listen_addresses = 'localhost,10.1.2.3'

and restart your postgres service:

sudo systemctl restart postgresql
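If you prefer to make the config change non-interactively, a sed one-liner along these lines should work. It's sketched here on a scratch copy of the file, since the exact path (/etc/postgresql/10/main/postgresql.conf) and the commented-out default line may differ per installation:

```shell
# Work on a scratch copy; on a real node you would point sed at
# /etc/postgresql/10/main/postgresql.conf (run with sudo) instead.
conf=$(mktemp)
echo "#listen_addresses = 'localhost'" > "$conf"

# Uncomment and replace the listen_addresses line with the desired value:
sed -i "s/^#\?listen_addresses.*/listen_addresses = 'localhost,10.1.2.3'/" "$conf"

grep '^listen_addresses' "$conf"
# listen_addresses = 'localhost,10.1.2.3'
```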

You can check if it listens on the desired address by running:

sudo ss -ntlp | grep postgres

From Pods deployed within your MicroK8s cluster you should be able to reach your node's IP addresses, e.g. you should be able to ping the above-mentioned 10.1.2.3 from your Pods.
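A quick way to verify this, assuming your Pod's image ships the ping utility (the Pod name below is a placeholder):

```shell
# Replace your-app-pod with the actual Pod name from "microk8s.kubectl get pods":
microk8s.kubectl exec -ti your-app-pod -- ping -c 3 10.1.2.3
```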

As this setup doesn't require any load balancing, you can reach your PostgreSQL instance directly from your Pods, without configuring an additional Service to expose it to your cluster.

If you don't want to refer to your PostgreSQL instance by its IP address in the application that uses it, you can edit your Deployment (the one that manages the set of Pods connecting to your Postgres db) to modify the default content of the /etc/hosts file used by your Pods.

Edit your app Deployment by running:

microk8s.kubectl edit deployment your-app

and add the following section under Pod template spec:

  hostAliases: # it should be on the same indentation level as "containers:"
  - hostnames:
    - postgres
    - postgresql
    ip: 10.1.2.3

After saving it, all your Pods managed by this Deployment will be recreated according to the new specification. When you exec into your Pod by running:

microk8s.kubectl exec -ti pod-name -- /bin/bash

you should see additional section in your /etc/hosts file:

# Entries added by HostAliases.
10.1.2.3    postgres    postgresql

From now on you can refer to your Postgres instance in your app by the names postgres:5432 or postgresql:5432, and they will be resolved to your VM's IP address.
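For an end-to-end check of the alias from inside the Pod, assuming the psql client is available there (myuser and mydb below are placeholders for your actual credentials and database):

```shell
# Placeholders: replace myuser/mydb with your real role and database names.
psql -h postgres -p 5432 -U myuser -d mydb -c 'SELECT 1;'
```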

I hope it helps.

UPDATE:

I almost forgot that some time ago I posted an answer on a very similar topic. You can find it here. It describes the usage of a Service without a selector, which is basically what you mentioned in your question. And yes, it can also be used to configure access to a PostgreSQL instance running on the same host. As this kind of Service has no selectors by definition, no Endpoints object is created automatically by Kubernetes and you need to create one yourself. Once you have the IP address of your Postgres instance (in our example it is 10.1.2.3), you can use it in your Endpoints definition.
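For completeness, the full pair of manifests would then look roughly like this, with 10.1.2.3 filled in as the Endpoints address (adjust the IP to your node's):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
  - addresses:
      - ip: 10.1.2.3
    ports:
      - port: 5432
```

With this in place your app can connect to postgresql:5432 via the Service's cluster DNS name instead of a hostAliases entry.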

Once you have configured everything on the Kubernetes side, you may still encounter an issue with Postgres. In the Pod that is trying to connect to the Postgres instance you may see the following error message:

org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host 10.1.7.151

It basically means that your pg_hba.conf file lacks the entry required to allow your Pod to access your PostgreSQL database. Authentication is host-based, in other words only hosts with certain IPs, or with IPs within a certain range, are allowed to authenticate.

Client authentication is controlled by a configuration file, which traditionally is named pg_hba.conf and is stored in the database cluster's data directory. (HBA stands for host-based authentication.)

So now you probably wonder which network you should allow in your pg_hba.conf. To handle cluster networking Microk8s uses flannel. Take a look at the content of your /var/snap/microk8s/common/run/flannel/subnet.env file. Mine looks as follows:

FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.53.1/24
FLANNEL_MTU=1410
FLANNEL_IPMASQ=false

Adding only the flannel subnet to your pg_hba.conf should be enough to ensure that all your Pods can connect to PostgreSQL.
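The step above can be sketched as a small shell snippet that derives the pg_hba.conf line from subnet.env. It is shown here on an inline sample; on a real node you would source /var/snap/microk8s/common/run/flannel/subnet.env directly, and the md5 auth method is an assumption — use whatever your setup requires:

```shell
# Sample stand-in for /var/snap/microk8s/common/run/flannel/subnet.env:
subnet_env=$(mktemp)
cat > "$subnet_env" <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.53.1/24
EOF

# Read the variables and emit the pg_hba.conf entry to append:
. "$subnet_env"
echo "host    all    all    ${FLANNEL_NETWORK}    md5"
# host    all    all    10.1.0.0/16    md5
```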

mario
  • Thanks for the help but when I tried this, I got error in pod `org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host 10.1.7.151` – Bhushan Mar 14 '20 at 08:07
  • And if I am not wrong, then I should not add this above IP in pg_hba.conf, as it may change if a new pod gets created. – Bhushan Mar 14 '20 at 08:09
  • `That's why you need to make this configuration a permanent part of your Pods` Can you elaborate which configuration you talking about ? – Bhushan Mar 14 '20 at 10:26
  • Forget it. My mistake. I looked at it from the `Pod's` perspective, as if it were in **Postgres** logs. Let's summarize and clarify: **Host based authentication** configuration is set on your **Postgresql** instance, and you already said that the **Postgres** instance is not part of your **kubernetes cluster** but is installed using apt-get on your VM, the same one on which you run your **microk8s cluster**, right? But when you are trying to connect to it from your `Pod` you could see in its logs the above error, that it cannot be authenticated as there is no entry for its IP in `pg_hba.conf`. – mario Mar 14 '20 at 10:51
  • You're right, `Pods` IPs are ephemeral and are subject to change every time a `Pod` is recreated, so it makes no sense to add static entries for **single IP** in your `pg_hba.conf` as it won't work. But note that these are not just any random IPs. They are part of specific **subnet**, which can be added to `pg_hba.conf` instead of individual **IPs**. This way you can authenticate all your `Pods` as they will always have **IPs** within a specific range. – mario Mar 14 '20 at 10:55
  • Take a look at [this](https://github.com/ubuntu/microk8s/issues/276): *IP ranges 10.1.1.0/24 and 10.152.183.0/24 are used for the cluster or pods by default.* And it also explains where they can be customized. – mario Mar 14 '20 at 11:06
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/209630/discussion-between-bhushan-and-mario). – Bhushan Mar 14 '20 at 15:41