I have a PersistentVolume backed by a local path on one of my nodes:
$ kubectl get pv example-pv -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"example-pv"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"10Gi"},"local":{"path":"/var/k8s-volumes/first"},"nodeAffinity":{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["rke2-server-node-1"]}]}]}},"persistentVolumeReclaimPolicy":"Delete","storageClassName":"local-storage","volumeMode":"Filesystem"}}
  creationTimestamp: "2023-04-26T02:14:22Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: example-pv
  resourceVersion: "4740152"
  uid: 0874cd56-76be-4743-b24f-6ee7dce5604a
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  local:
    path: /var/k8s-volumes/first
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - rke2-server-node-1
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Available
And for some reason my PersistentVolumeClaim is not binding to it:
$ kubectl get pvc -n manager -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      meta.helm.sh/release-name: manager
      meta.helm.sh/release-namespace: manager
    creationTimestamp: "2023-04-26T02:38:02Z"
    finalizers:
    - kubernetes.io/pvc-protection
    labels:
      app.kubernetes.io/instance: manager
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: manager
      app.kubernetes.io/version: 0.1.3
      helm.sh/chart: manager-0.1.0
    name: manager-pvc-pg
    namespace: manager
    resourceVersion: "4740252"
    uid: efa5460f-7aef-4a0d-a4a1-f902868cc38b
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    volumeMode: Filesystem
  status:
    phase: Pending
kind: List
metadata:
  resourceVersion: ""
I saw a similar question elsewhere where the issue was that no workload was actually using the PVC, but that is not the case for me, since I have a pod that references it:
$ kubectl describe pod -n manager manager-pg-674df946fd-nqplc
Name:             manager-pg-674df946fd-nqplc
Namespace:        manager
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app.kubernetes.io/component=postgres
                  app.kubernetes.io/instance=manager
                  app.kubernetes.io/name=manager
                  pod-template-hash=674df946fd
Annotations:      kubernetes.io/psp: global-unrestricted-psp
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/manager-pg-674df946fd
Containers:
  manager-pg:
    Image:      registry:5000/postgresql12:12.13
    Port:       5432/TCP
    Host Port:  0/TCP
    Environment:
      PGDATA:             /var/lib/postgresql/data/pgdata
      POSTGRES_USER:      user
      PGUSER:             user
      POSTGRES_PASSWORD:  <set to the key 'databasePassword' in secret 'shared-secret'>  Optional: false
      POSTGRES_DB:        dbname
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6wsl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  postgres-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  manager-pvc-pg
    ReadOnly:   false
  kube-api-access-w6wsl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  2m4s (x4 over 17m)  default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
Edit 1, adding more output based on the questions asked:
There is a StorageClass present, but I am intentionally not using it (storageClassName is set to "" on my PVC because I am trying to get statically provisioned volumes working; see the sketch after the output below).
$ kubectl get storageclass
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  11d
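To clarify what I mean by static provisioning, this is roughly the claim shape I am aiming for; it is a hand-written sketch, not what the Helm chart currently renders, and the commented-out volumeName line is only an illustration of pinning a claim to one specific PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manager-pvc-pg
  namespace: manager
spec:
  storageClassName: ""        # empty string means: only match PVs that themselves have no storage class set
  # volumeName: example-pv    # optional explicit binding to a single PV (illustration only)
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi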
I am not trying to use a CSI driver either:
$ kubectl get csidriver
No resources found