I'm not an expert in Kubernetes, so, as a learning exercise, I started building a simple Spring Boot web application (called meal-planner) that reads from and writes to a PostgreSQL database.
Both PostgreSQL and the Spring Boot app are deployed in a Kubernetes cluster which currently runs on two Raspberry Pis (a model 3 and a 3B+).
I work in the namespace home
and have created a Deployment and a Service for each of them (Postgres and the Spring Boot application).
I have created a secret holding the credentials, which I use both to create the database and as the database credentials in the Spring Boot application. In production I wouldn't put the database inside the Kubernetes cluster, but on the Raspberry Pis I don't care, since the data is worthless.
Describing the secret yields:
Name: dev-db-secret
Namespace: home
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 20 bytes
username: 7 bytes
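As a side note on the 20-byte password: a common source of password authentication failed errors with secrets is invisible trailing whitespace. If the value was piped through echo when the secret was created, it carries a trailing newline; the raw bytes can be inspected with kubectl get secret dev-db-secret -n home -o jsonpath='{.data.password}' | base64 -d | xxd. A minimal illustration of the pitfall, with a made-up value:

```shell
# echo appends a newline, printf does not; the encoded values differ
printf 'secret123' | base64   # c2VjcmV0MTIz
echo 'secret123' | base64     # c2VjcmV0MTIzCg== (encodes "secret123\n")
```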
The application.yaml is the following:
spring:
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    open-in-view: false
    hibernate:
      ddl-auto: update
  datasource:
    url: "jdbc:postgresql://postgres:5432/home"
    username: ${DB_USER}
    password: ${DB_PASSWORD}
I used the following Dockerfile for building the Spring Boot application:
FROM arm32v7/openjdk:11-jdk
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
I build it on a Windows 10 PC using docker buildx build --platform linux/arm/v7 -t ... --push .
Kubernetes finds and deploys it just fine.
The following is the Service and Deployment for the PostgreSQL DB:
kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: postgres
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
The deployment works fine and I can access the database with an SQL client (like DBeaver) using the credentials from the dev-db-secret secret (after port-forwarding the service with kubectl port-forward service/postgres -n home 5432:5432).
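With that same port-forward in place, the credentials can also be checked from a shell, authenticating exactly as the app would (assuming psql is installed locally; DB_USER and DB_PASSWORD stand in for the secret's actual values):

```shell
# Forward the Service locally, then log in with the secret's credentials
kubectl port-forward service/postgres -n home 5432:5432 &
PGPASSWORD="$DB_PASSWORD" psql -h localhost -p 5432 -U "$DB_USER" -d home -c 'SELECT 1;'
```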
I use the following deployment descriptor for the Spring Boot application:
kind: Service
apiVersion: v1
metadata:
  name: meal-planner
  namespace: home
  labels:
    app: meal-planner
spec:
  type: NodePort
  selector:
    app: meal-planner
  ports:
    - port: 8080
      targetPort: 8080
      name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: meal-planner
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: meal-planner
  template:
    metadata:
      labels:
        app: meal-planner
    spec:
      containers:
        - name: meal-planner
          image: 08021986/meal-planner:v1
          imagePullPolicy: Always
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
          ports:
            - containerPort: 8080
---
The application deploys and starts just fine. However, when I kubectl port-forward service/meal-planner -n home 8080:8080 and call the "read all data from the DB" endpoint, I get the following error:
...
Caused by: org.hibernate.exception.GenericJDBCException: Unable to open JDBC Connection for DDL execution
...
Caused by: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "<<username from secret>>"
When I start the application locally and port-forward the DB service (replacing the datasource URL in the application.yaml file with jdbc:postgresql://localhost:5432/home), everything works perfectly.
I don't understand this: PostgreSQL clearly uses the secret to create the credentials, and they clearly work, since I can access the database with a client and with the application running locally. Yet authentication fails for the Spring Boot service, even though it apparently still fetches the username from the secret. What am I doing wrong?
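One way to narrow this down is to compare what the container actually receives with what the secret stores. These are standard kubectl calls, with deploy/meal-planner referring to the Deployment below:

```shell
# What DB_* environment does the Spring Boot process actually see?
kubectl exec deploy/meal-planner -n home -- env | grep '^DB_'
# How many bytes does the stored password really have (a trailing newline counts)?
kubectl get secret dev-db-secret -n home -o jsonpath='{.data.password}' | base64 -d | wc -c
```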
For completeness of the deployment descriptors, here are the namespace, persistent volume, and persistent volume claim descriptors:
apiVersion: v1
kind: Namespace
metadata:
  name: home
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  namespace: home
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  namespace: home
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
The code can be found at https://github.com/Urr4/deployments and https://github.com/Urr4/meal-planner
Update:
After removing the secret entirely and hard-coding the credentials in the application.yaml file, I still get the same error. I now assume that I can't reach the PostgreSQL service from the meal-planner pod at all, meaning the service can't connect to the database and the failure is about connectivity, not credentials.
I tried kubectl exec <<meal-planner-pod-id>> -n home -- ping postgres:5432
and got ping: postgres:5432: Name or service not known.
I also tried postgres.default:5432 and postgres.home:5432, without success.
Running kubectl exec <<meal-planner-pod-id>> -n home -- wget -O- postgres
yields Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
which sounds like an error, but I don't know how to fix it.
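Worth noting: wget without an explicit port speaks HTTP to port 80, which the postgres Service doesn't expose, so a failure on |10.43.62.32|:80 says nothing about port 5432. Likewise, ping takes a hostname only, so ping postgres:5432 fails on name resolution, not on the network. A TCP-level probe of the real port would look like this (nc ships with BusyBox; whether it exists in the app image is an assumption):

```shell
# Probe the PostgreSQL port directly instead of HTTP on port 80
kubectl exec <<meal-planner-pod-id>> -n home -- nc -zv postgres.home 5432
```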
I still don't know what the problem is.
Update:
I've started a BusyBox pod (because I can't keep the meal-planner pod from crashing), did a kubectl exec -it
into the BusyBox pod, and executed telnet postgres.home 5432,
which yields telnet connected.
wget -O- postgres.home still yields Network is unreachable.
I tried telnet on the meal-planner pod, but it just tells me telnet: not found.
I think postgres (or postgres.home) is the right DNS name, since with it I get Network is unreachable, whereas for non-existent services I get Name or service not known.
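That reasoning can be confirmed against cluster DNS directly: inside the cluster, the Service should resolve as postgres.home.svc.cluster.local, with postgres.home and (from within the home namespace) plain postgres as short forms, all pointing at the Service's ClusterIP (the 10.43.62.32 seen above). Here <<busybox-pod-id>> is a placeholder for the BusyBox pod started above:

```shell
# All three names should resolve to the same ClusterIP
kubectl exec <<busybox-pod-id>> -n home -- nslookup postgres.home.svc.cluster.local
kubectl exec <<busybox-pod-id>> -n home -- nslookup postgres.home
kubectl exec <<busybox-pod-id>> -n home -- nslookup postgres
```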
I currently think there is some kind of network issue which prevents pods from communicating, but I don't know how to find and fix this.
After debugging in completely the wrong direction, I'm going to close this question in favor of asking it again, more precisely and with a smaller scope. The new question can be found here.