We are using an NFS volume (GCP Filestore, 1 TB) to back ReadWriteMany (RWX) PVCs in GCP. The problem: when I allot a 5Gi PVC and mount it into an nginx pod under /etc/nginx/test-pvc, the pod sees the whole NFS volume size instead of just the 5Gi.
I logged into the nginx pod and did a df -kh:
```
df -kh
Filesystem      Size  Used  Avail Use% Mounted on
overlay          95G   16G   79G   17% /
tmpfs            64M    0    64M    0% /dev
tmpfs            63G    0    63G    0% /sys/fs/cgroup
shm              64M    0    64M    0% /dev/shm
/dev/sda1        95G   16G   79G   17% /etc/hosts
10.x.10.x:/vol 1007G  5.0M  956G    1% /etc/nginx/test-pvc
tmpfs            63G   12K   63G    1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            63G    0    63G    0% /proc/acpi
tmpfs            63G    0    63G    0% /proc/scsi
tmpfs            63G    0    63G    0% /sys/firmware
```
The size of /etc/nginx/test-pvc is 1007G, which is the whole NFS volume size (1 TB); it should have been 5G instead. Even the 5.0M of used space isn't anything I actually wrote to /etc/nginx/test-pvc. Why does it behave this way?
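For reference, this is roughly how I run the check (the deployment it targets is shown further down; the exact invocation doesn't matter):

```sh
# Run df against the PVC mount inside the nginx pod (deployment defined below).
kubectl exec deployment/nfs-pv-demo-depl -- df -kh /etc/nginx/test-pvc
```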
PV and PVC YAML used:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-test
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /vol
    server: 10.x.10.x
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
  volumeName: pv-nfs-test
```
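For completeness, I apply and inspect the objects like this (the file names are just what I use locally):

```sh
# Apply the PV and PVC, then check that the claim binds and what capacity is reported.
kubectl apply -f pv-nfs-test.yaml      # file names are illustrative
kubectl apply -f nfs-claim1.yaml
kubectl get pv pv-nfs-test
kubectl get pvc nfs-claim1
```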
Nginx deployment YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-pv-demo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-pv-demo
  template:
    metadata:
      name: nfs-pv-pod
      labels:
        app: nfs-pv-demo
    spec:
      containers:
        - image: nginx
          name: nfs-pv-multi
          imagePullPolicy: Always
          volumeMounts:
            - name: nfs-volume-1
              mountPath: "/etc/nginx/test-pvc"
      volumes:
        - name: nfs-volume-1
          persistentVolumeClaim:
            claimName: nfs-claim1
```
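I also looked at the raw mount entry from inside the pod (shown here just for reference), which as far as I can tell is simply the whole /vol export mounted at the mount path:

```sh
# Show the NFS mount entry for the PVC path from inside the running pod.
kubectl exec deployment/nfs-pv-demo-depl -- grep test-pvc /proc/mounts
```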
Is there anything I'm missing, or is this just how NFS-backed volumes behave? If so, what is the best way to handle it in production? We will have multiple other PVCs, and this could cause confusion and volume-exhaustion issues.
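To make the concern concrete, here is a hypothetical second PV/PVC pair of the kind we would add later (the names, size, and subdirectory path are made up); since it points at the same Filestore export, I would expect its pod to see the same 1007G:

```yaml
# Hypothetical second PV/PVC on the same export (names/path are illustrative only).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-test-2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /vol/app2        # subdirectory of the same 1 TB export; assumes it already exists on the share
    server: 10.x.10.x
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-nfs-test-2
```

On paper each claim looks capped at its requested size, but both would be drawing from the same 1 TB pool, which is exactly the confusion I want to avoid.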