I've been digging into persistent volumes and I've run into a problem.
I created a PersistentVolume backed by one of my directories to store things such as database data, initialization scripts, and config files for my Postgres deployment. Here is postgres-pvc-pv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume # Sets PV's name
  labels:
    # type: local # Sets PV's type to local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi # Sets PV volume size
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/home/kubernetesUser/postgresKubernetes/volume"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim # Sets name of PVC
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany # Sets read and write access
  resources:
    requests:
      storage: 5Gi # Sets volume size
  volumeName: postgres-pv-volume
As you can see, the volume is backed by the path /home/kubernetesUser/postgresKubernetes/volume.
And here is the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
            - secretRef:
                name: postgres-secret
          volumeMounts:
            # - name: postgres-data
            #   mountPath: /var/lib/postgresql/data
            #   subPath: data
            # - name: postgres-data
            #   mountPath: /etc/postgresql/postgresql.conf
            #   subPath: my-postgres.conf
            - name: postgres-data
              mountPath: /var/backups
              subPath: backups
            # - name: postgres-data
            #   mountPath: /docker-entrypoint-initdb.d
            #   subPath: initScripts
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
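For context on those mounts: each subPath resolves to a subdirectory of the PV's hostPath, so the directory layout they expect on the node can be sketched locally. The /tmp path below is a hypothetical stand-in for the real volume directory, just for illustration:

```shell
# Hypothetical stand-in for /home/kubernetesUser/postgresKubernetes/volume
VOL=/tmp/pg-volume-demo
mkdir -p "$VOL/data" "$VOL/backups" "$VOL/initScripts"

# subPath: backups -> $VOL/backups, mounted at /var/backups in the container;
# the commented-out mounts would use data/ and initScripts/ the same way.
ls "$VOL"
```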
As you can see, some entries under volumeMounts are commented out; that's because I was experimenting. When all of them are commented out the deployment works fine, but as soon as any one of them is enabled, I get the error.
Anyway, when running kubectl apply -f postgres-deployment.yaml, my pod gets stuck in CreateContainerConfigError.
Here is the Events section of kubectl describe pod:
Events:
  Type     Reason            Age               From               Message
  ----     ------            ---               ----               -------
  Warning  FailedScheduling  23s               default-scheduler  0/1 nodes are available: persistentvolumeclaim "postgres-pv-claim" not found. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
  Warning  FailedScheduling  22s               default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
  Normal   Scheduled         19s               default-scheduler  Successfully assigned default/postgres-7bf8f99856-98cxl to minikube
  Normal   Pulled            6s (x4 over 19s)  kubelet            Container image "postgres:alpine" already present on machine
  Warning  Failed            6s (x4 over 19s)  kubelet            Error: stat /home/kubernetesUser/postgresKubernetes/volume: no such file or directory
It says: stat /home/kubernetesUser/postgresKubernetes/volume: no such file or directory.
I feel like maybe I'm not understanding how PVs work?
Sorry for not clarifying earlier: the path that supposedly doesn't exist does indeed exist on my host machine.
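For reference, the failing check in that last event is an ordinary stat against the node's filesystem; its behavior on a missing directory can be reproduced locally with a hypothetical path:

```shell
# stat exits non-zero and reports "No such file or directory" when the
# target is absent — the same failure the kubelet reports for the hostPath.
if stat /tmp/definitely-missing-demo-dir >/dev/null 2>&1; then
  echo "path exists"
else
  echo "path missing"
fi
```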