
I have a Kubernetes setup with a PersistentVolumeClaim and a Pod. The PVC requests 30GB of storage, while the backing NFS server has a total drive size of 100GB. I'm using the nfs-subdir-external-provisioner Helm chart with Azure storage.
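
For reference, I installed the provisioner roughly like this (the server address and export path below are placeholders, not my real values):

# add the chart repo and install the provisioner, pointing it at the NFS export
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=<nfs-server-address> \
  --set nfs.path=<export-path>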

However, when I log into the pod and run the df -h command, it shows 100GB for the mounted path (/mnt/nfs), which is the total size of the entire NFS drive. I was expecting to see 30GB, which is the size I specified in the PVC.
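
For concreteness, this is how I'm checking (the pod name and mount path match the specs below):

# check the reported size of the NFS mount from inside the pod
kubectl exec -it foopod -- df -h /mnt/nfs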

Here's my PVC spec:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
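
The nfs-client StorageClass it binds to is the one created by the chart. As far as I can tell it looks roughly like this (I'm reconstructing it from memory, and the provisioner name depends on chart values, so treat this as an approximation):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: cluster.local/nfs-subdir-external-provisioner  # chart default, may differ in my install
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
volumeBindingMode: Immediate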

And here's my Pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: foopod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: nfs-volume
          mountPath: /mnt/nfs
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: my-pvc
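
In case it's relevant, these are the commands I've been using to inspect the objects (I haven't pasted the output here):

# inspect the claim, the dynamically provisioned PV, and the binding details
kubectl get pvc my-pvc
kubectl get pv
kubectl describe pvc my-pvc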

Is this behavior expected? Shouldn't the df -h command inside the pod show the size of the PVC (30GB) instead of the total NFS drive size (100GB)?

How can I ensure that the pod only sees the size allocated by the PVC? Is there any configuration I need to apply to the NFS server or the PVC?

My concern is the potential for one pod to use more space than its PVC allocates. Or have I completely misunderstood how PVCs work with NFS?

Does this answer your question? [Setting up PVC in NFS, doesn't mount the set PVC size, instead sets the whole NFS volume size](https://stackoverflow.com/questions/68663635/setting-up-pvc-in-nfs-doesnt-mount-the-set-pvc-size-instead-sets-the-whole-nf) – Hackerman Aug 23 '23 at 20:02
