
I have a PersistentVolume created locally:

$ kubectl get pv example-pv -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"example-pv"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"10Gi"},"local":{"path":"/var/k8s-volumes/first"},"nodeAffinity":{"required":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["rke2-server-node-1"]}]}]}},"persistentVolumeReclaimPolicy":"Delete","storageClassName":"local-storage","volumeMode":"Filesystem"}}
  creationTimestamp: "2023-04-26T02:14:22Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: example-pv
  resourceVersion: "4740152"
  uid: 0874cd56-76be-4743-b24f-6ee7dce5604a
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  local:
    path: /var/k8s-volumes/first
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - rke2-server-node-1
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Available

And for some reason my PersistentVolumeClaim is not matching it...

$ kubectl get pvc -n namespace -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      meta.helm.sh/release-name: manager
      meta.helm.sh/release-namespace: manager
    creationTimestamp: "2023-04-26T02:38:02Z"
    finalizers:
    - kubernetes.io/pvc-protection
    labels:
      app.kubernetes.io/instance: manager
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: manager
      app.kubernetes.io/version: 0.1.3
      helm.sh/chart: manager-0.1.0
    name: manager-pvc-pg
    namespace: manager
    resourceVersion: "4740252"
    uid: efa5460f-7aef-4a0d-a4a1-f902868cc38b
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    volumeMode: Filesystem
  status:
    phase: Pending
kind: List
metadata:
  resourceVersion: ""

I saw a similar question elsewhere and his issue was that there was no resource attempting to use his PVC, but that is not the case for me since I have a pod:

$ kubectl describe pod -n manager manager-pg-674df946fd-nqplc
Name:             manager-pg-674df946fd-nqplc
Namespace:        manager
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app.kubernetes.io/component=postgres
                  app.kubernetes.io/instance=manager
                  app.kubernetes.io/name=manager
                  pod-template-hash=674df946fd
Annotations:      kubernetes.io/psp: global-unrestricted-psp
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/manager-pg-674df946fd
Containers:
  manager-pg:
    Image:      registry:5000/postgresql12:12.13
    Port:       5432/TCP
    Host Port:  0/TCP
    Environment:
      PGDATA:             /var/lib/postgresql/data/pgdata
      POSTGRES_USER:      user
      PGUSER:             user
      POSTGRES_PASSWORD:  <set to the key 'databasePassword' in secret 'shared-secret'>  Optional: false
      POSTGRES_DB:        dbname
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6wsl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  postgres-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  manager-pvc-pg
    ReadOnly:   false
  kube-api-access-w6wsl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  2m4s (x4 over 17m)  default-scheduler  0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.

Edit 1, adding more output based on questions asked:

There is a StorageClass present, but I am intentionally not using it (storageClassName is set to "" on my PVC because I am trying to get statically provisioned volumes working).

$ kubectl get storageclass
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  11d

I am not trying to use a CSI driver.

$ kubectl get csidriver
No resources found
  • Can you run the commands : **kubectl get storageclass** and **kubectl get csidriver**, let me know the output? – Veera Nagireddy Apr 26 '23 at 04:02
  • Try to update your K8s deployment's securityContext to include "fsGroupChangePolicy: OnRootMismatch". Refer to [Allow volume ownership to be only set after fs formatting. #69699](https://github.com/kubernetes/kubernetes/issues/69699) and [Skip Volume Ownership Change](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/695-skip-permission-change), which may help to resolve your issue. – Veera Nagireddy Apr 26 '23 at 05:24
  • Hi @VeeraNagireddy thanks for engaging. I added the output you requested, however please note I am specifically trying to get static volumes to work without using a storageclass. I will look into what you mentioned in your second comment presently though. – pooley1994 Apr 26 '23 at 12:50
  • Turns out volume ownership was not the issue in my case @VeeraNagireddy - I had a storageclass mix up going on. I posted my answer below. – pooley1994 Apr 26 '23 at 13:26
  • If no StorageClass is specified, then the default StorageClass will be used, please have look at my answer for details. – Veera Nagireddy Apr 26 '23 at 13:36

1 Answer


OK, so even though I was explicitly trying to test not using StorageClasses (as in, a statically provisioned volume), the issue still ended up being related to StorageClass.

As a test, I manually specified storageClassName on my PersistentVolumeClaim to try to get it to attach to the matching PersistentVolume, and it gave an error that the storage class did not match. I thought that leaving the storage class off my PVC would let it pick up any matching PV, but since my PV specified a storageClassName of "local-storage", only PVCs explicitly requesting "local-storage" could be bound to it. To fix it I removed the storage class name from my PV, and then things worked as expected. I think I could alternatively have specified local-storage on my PVC, but that is not the test case I was interested in.
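
For reference, here is a minimal sketch of the two ways I understand the objects above could be made to match (trimmed to the relevant specs; names and values are taken from the output earlier in the question):

# Option A (what I went with): drop the class from the PV so a class-less PVC can bind to it
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  local:
    path: /var/k8s-volumes/first
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - rke2-server-node-1
  persistentVolumeReclaimPolicy: Delete
  # storageClassName omitted entirely, so the PV has "no class"
  volumeMode: Filesystem
---
# Option B (the alternative I did not test): have the PVC request the PV's class explicitly
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manager-pvc-pg
  namespace: manager
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
  volumeMode: Filesystem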

Relevant excerpts from the documentation:

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class

A PV can have a class, which is specified by setting the storageClassName attribute to the name of a StorageClass. A PV of a particular class can only be bound to PVCs requesting that class. A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1

A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.

PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
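
So, as I read that last paragraph, if you want to keep classes out of the picture entirely and sidestep the DefaultStorageClass behaviour mentioned in the comments, the PVC has to say storageClassName: "" explicitly rather than leaving the field out, something like:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manager-pvc-pg
  namespace: manager
spec:
  # explicitly "no class": can only bind to PVs that also have no class
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi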
