
I'm not an expert when it comes to Kubernetes, so as a learning project I started building a simple Spring Boot web application (called meal-planner) that reads from and writes to a PostgreSQL database.

Both PostgreSQL and the Spring Boot app are deployed in Kubernetes, which currently runs on two Raspberry Pis (models 3 and 3B+).

I work in the namespace home and have created a Deployment and a Service for each (PostgreSQL and the Spring Boot application).

I have created a secret holding the credentials, which I use both to initialize the database and as the database credentials in the Spring Boot application. In production I wouldn't put the database in the Kubernetes cluster, but on the Raspberry Pis I don't care, since the data is worthless.
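
For reference, I created the secret roughly like this (just a sketch; the literal values below are placeholders, not the real credentials):

kubectl create secret generic dev-db-secret -n home \
  --from-literal=username=mealdba \
  --from-literal=password='<generated-password>'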

Describing the secret yields:

Name:         dev-db-secret
Namespace:    home
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  20 bytes
username:  7 bytes

The application.yaml is the following:

spring:
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    open-in-view: false
    hibernate:
      ddl-auto: update
  datasource:
    url: "jdbc:postgresql://postgres:5432/home"
    username: ${DB_USER}
    password: ${DB_PASSWORD}

I used the following Dockerfile for building the Spring Boot application:

FROM arm32v7/openjdk:11-jdk
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

I build it on a Windows 10 PC using docker buildx build --platform linux/arm/v7 -t ... --push, and Kubernetes finds and deploys the image just fine.
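
Spelled out, the command looks roughly like this (the tag is the one referenced in the deployment further down; adjust as needed):

docker buildx build --platform linux/arm/v7 -t 08021986/meal-planner:v1 --push .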

The following is the Service and Deployment for the PostgreSQL DB:

kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: postgres
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---

The deployment works fine, and I can access the database with an SQL client (like DBeaver) using the credentials from the dev-db-secret secret (after port-forwarding the service with kubectl port-forward service/postgres -n home 5432:5432).
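
To double-check the credentials inside the cluster itself, one check that should work (a sketch, reusing the environment variables the postgres container already has) is to run psql over TCP from within the pod:

kubectl exec -it deploy/postgres -n home -- \
  sh -c 'PGPASSWORD="$POSTGRES_PASSWORD" psql -h 127.0.0.1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "\conninfo"'

If that prints connection info, the user, password, and database created from the secret are consistent.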

I use the following deployment descriptor for the Spring Boot application:

kind: Service
apiVersion: v1
metadata:
  name: meal-planner
  namespace: home
  labels:
    app: meal-planner
spec:
  type: NodePort
  selector:
    app: meal-planner
  ports:
    - port: 8080
      targetPort: 8080
      name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: meal-planner
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: meal-planner
  template:
    metadata:
      labels:
        app: meal-planner
    spec:
      containers:
        - name: meal-planner
          image: 08021986/meal-planner:v1
          imagePullPolicy: Always
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
          ports:
            - containerPort: 8080
---

The application deploys and starts just fine. However, when I run kubectl port-forward service/meal-planner -n home 8080:8080 and call the "read all data from the DB" endpoint, I get the following error:

...
Caused by: org.hibernate.exception.GenericJDBCException: Unable to open JDBC Connection for DDL execution
...
Caused by: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "<<username from secret>>"

When I start the application locally and port-forward the DB service (replacing the datasource URL in the application.yaml file with jdbc:postgresql://localhost:5432/home), everything works perfectly.

I don't understand it: PostgreSQL uses the secret to create the credentials, and those credentials clearly work, since I can access the database with a client or with the application running locally. Yet authentication fails for the Spring Boot service, even though it apparently does fetch the username from the secret (the username shows up in the error message). What am I doing wrong?
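
One sanity check I can run (just a sketch) is to confirm that the values from the secret actually arrive in the pod's environment:

kubectl exec <<meal-planner-pod-id>> -n home -- sh -c 'env | grep ^DB_'

Both DB_USER and DB_PASSWORD should show up with the expected values.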

For completeness, here are the namespace, persistent volume, and persistent volume claim descriptors:

apiVersion: v1
kind: Namespace
metadata:
  name: home
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  namespace: home
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  namespace: home
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

The code can be found at https://github.com/Urr4/deployments and https://github.com/Urr4/meal-planner.

Update:

After completely removing the secret and hard-coding the credentials in the application.yaml file, I still get the same error. I'm now assuming that I can't reach the PostgreSQL service from the meal-planner pod at all, i.e. the service can't connect to the database and the authentication failure is a symptom of that rather than of wrong credentials.

I tried kubectl exec <<meal-planner-pod-id>> -n home -- ping postgres:5432 and got ping: postgres:5432: Name or service not known.

I also tried postgres.default:5432 and postgres.home:5432 without success.

Running kubectl exec <<meal-planner-pod-id>> -n home -- wget -O- postgres yields Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable, which sounds like an error, but I don't know how to fix it.
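
Note that wget without an explicit port goes to port 80, which PostgreSQL doesn't listen on. To probe the actual PostgreSQL port, something like this could work (a sketch; it assumes the image ships bash, which the Debian-based openjdk image does):

kubectl exec <<meal-planner-pod-id>> -n home -- bash -c 'cat < /dev/null > /dev/tcp/postgres/5432 && echo "port 5432 reachable"'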

I still don't know what the problem is.

Update:

I've started a BusyBox pod (because I can't prevent the meal-planner pod from crashing), ran kubectl exec -it into it, and executed telnet postgres.home 5432, which yields telnet connected. wget -O- postgres.home still yields Network is unreachable. I tried telnet in the meal-planner pod, but it just tells me telnet: not found.

I think postgres (or postgres.home) is the right DNS name, since with those names I get Network is unreachable, while for non-existent services I get Name or service not known.

I currently think there is some kind of network issue which prevents pods from communicating, but I don't know how to find and fix this.
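
A first place I could look (a sketch; the exact labels depend on the Kubernetes distribution) is whether the cluster DNS and network pods are healthy:

kubectl get pods -n kube-system -o wide
kubectl logs -n kube-system -l k8s-app=kube-dns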

After debugging in completely the wrong direction, I'm going to close this question in favor of asking it again, more precisely and with a smaller scope. The new question can be found here.

Urr4
  • Could you try setting the variables' values directly in the deployment template? For example: - name: spring.datasource.password valueFrom: secretKeyRef: name: dev-db-secret key: password – Yayotrón Jul 16 '21 at 08:07
  • Thanks, I will try that – Urr4 Jul 16 '21 at 08:11
  • Didn't work, I still get the same error. I'm now assuming that I somehow can't reach the postgres service from the meal-planner – Urr4 Jul 16 '21 at 08:52
  • I can see in your project that your dev profile is hitting localhost and the prod one is hitting postgres. Are you using prod profile in your deployment? Your application property is also using dev. Also to use kubernetes dns resolver you must specify the namespace as well, try with postgres.default instead of just postgres in the database connection string – Yayotrón Jul 16 '21 at 09:12
  • I build the application with the prod profile; the dev profile is just checked into Git. Since postgres and meal-planner are in the same namespace, shouldn't "postgres" suffice instead of postgres.default? I still tried "postgres", "postgres.default" and "postgres.home" as DNS names. Still nothing. – Urr4 Jul 16 '21 at 09:20
  • Could you try removing type: NodePort from your service? From what I understand you're trying to reach your service from within the cluster so you should use ClusterIP (default value) – Yayotrón Jul 16 '21 at 09:26
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/234962/discussion-between-yayotron-and-urr4). – Yayotrón Jul 16 '21 at 09:32

1 Answer


The Service created for PostgreSQL should be of type ClusterIP. With NodePort you would be accessing PostgreSQL via NodeIP:NodePort from outside the cluster; since both instances run inside the cluster in the same namespace, you can change the type to ClusterIP.
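
For example, the PostgreSQL Service from the question would then look like this (the type line can also simply be omitted, since ClusterIP is the default):

kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres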

Also, you can change the environment variable names in the meal-planner deployment (see the sketch after this list):

DB_USER -> username

DB_PASSWORD -> password
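
In the meal-planner deployment that would look roughly like this (the ${DB_USER} and ${DB_PASSWORD} placeholders in application.yaml would have to be renamed to match):

env:
  - name: username
    valueFrom:
      secretKeyRef:
        name: dev-db-secret
        key: username
  - name: password
    valueFrom:
      secretKeyRef:
        name: dev-db-secret
        key: password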

San
  • Thank you, that's good feedback. I've changed the NodePort to ClusterIP for both deployments, since the meal-planner is supposed to be a backend which should not be accessible from the outside. However, I don't understand what you mean by "change the env variable name". I did that and it didn't work, but I'm unsure if I understood it correctly. I changed the word "DB_USER" to the word "username" (same for DB_PASSWORD) in the meal-planner.yaml and kept everything else the same – Urr4 Jul 16 '21 at 07:49
  • Also, sadly, this did not solve the problem – Urr4 Jul 16 '21 at 12:24