
I know there are lots of discussions around this topic, but somehow I cannot get it working.
I am trying to install an Elasticsearch cluster with a StatefulSet and an NFS persistent volume on bare metal. My PV, PVC and SC configs are as below:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-storage-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    server: 172.23.240.85
    path: /servers/scratch50g/vishalg/kube

The StatefulSet has the following PVC section defined:

volumeClaimTemplates:
  - metadata:
      name: beehive-pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: manual
      resources:
        requests:
          storage: 1Gi

Now, when I try to deploy it, I get the following error on the StatefulSet:

 pod has unbound immediate PersistentVolumeClaims 

When I get the PVC events, they show:

 Warning  ProvisioningFailed  3s (x2 over 12s)  persistentvolume-controller  no volume plugin matched

I tried not giving any storage class (did not create it) and removed it from both the PV and the PVC altogether. This time, I get the error below:

no persistent volumes available for this claim and no storage class is set

I also tried setting storageClassName as "" in the PVC and not mentioning it in the PV, but that did not work either.

Please help here. What more can I check to get it working?
Could it be related to the NFS server and path (in case either is mentioned incorrectly)? Though I do see the PV created successfully.

EDIT1:
One issue was that the accessModes of the PVC were different from the accessModes of the PV. I corrected that, and now my PVC is shown as Bound.
But even now, I get the following error:
pod has unbound immediate PersistentVolumeClaims
I tried using a local volume as well, but got the same error. The PV and PVC are bound correctly, but the StatefulSet shows the above error.
When using a hostPath volume, everything works fine.
Am I doing anything fundamentally wrong here?

EDIT2
I got the local volume working. It takes some time for the pod to bind to the PVC; after waiting a couple of minutes, my pod got bound to the PVC.
I think the NFS binding issue may be more permission related. Still, k8s should give out some error for it.
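If root squashing turns out to be the permission problem, one possible fix (assuming you control the NFS server; the export path is the one from my PV, and the client network below is only a guess based on the server IP) would be to export the share with no_root_squash:

    # /etc/exports on the NFS server (example; adjust the client network to your setup)
    /servers/scratch50g/vishalg/kube 172.23.240.0/24(rw,sync,no_root_squash)

followed by reloading the exports with exportfs -ra on the server.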

NumeroUno
  • Have you tried matching the accessModes? The PVC is targeting a ReadWriteOnce volume. – AYA Jul 31 '19 at 07:28
  • Thanks @AYA. Changing the PVC accessModes led to successful creation of the PVC (status=Bound as compared to Pending previously). But my StatefulSet still shows "pod has unbound immediate PersistentVolumeClaims". I am getting the below event: pod/beehive-master-data-0 Unable to mount volumes for pod "beehive-master-data-0_pulse(5da0d3a6-b22d-4e33-9ced-073dc46043a6)": timeout expired waiting for volumes to attach or mount for pod "pulse"/"beehive-master-data-0". list of unmounted volumes=[beehive-pv-claim default-token-t2x7x]. list of unattached volumes=[beehive-pv-claim default-token-t2x7x] – NumeroUno Jul 31 '19 at 14:41
  • Also, if I give a wrong NFS path, it gives me the error "directory/path not found". If I correct the path, it gives the error mentioned in the above comment. In both cases, the PVC status is shown as Bound. Please help. – NumeroUno Jul 31 '19 at 16:46
  • I would check with NFS at this point. Does the node/pod have access to the NFS target? – AYA Jul 31 '19 at 18:48
  • The node has access to the NFS. How can I check for the pod? – NumeroUno Aug 01 '19 at 02:14
  • Can you mount it on the node manually? – AYA Aug 01 '19 at 21:00
  • Thanks AYA, I got the hint. It seems NFS will not mount, as it would require root permission. – NumeroUno Aug 02 '19 at 06:23
  • Glad that solved it. Let me turn the comment into a proper answer, and if you accept the answer, others can find it as well. – AYA Aug 02 '19 at 06:49

1 Answer


Could you try matching the accessModes as well?

The PVC is targeting a ReadWriteOnce volume right now.
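For example, a claim template whose accessModes match the PV in the question could look like this (a sketch reusing the names from your question; only the accessModes line changes):

    volumeClaimTemplates:
      - metadata:
          name: beehive-pv-claim
        spec:
          accessModes: [ "ReadWriteMany" ]   # must match the PV's accessModes
          storageClassName: manual
          resources:
            requests:
              storage: 1Gi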

And if you mount the NFS volume on the node manually, any access/security issue can be debugged.
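Something along these lines usually surfaces the underlying error (the mount point is just an example; requires the NFS client utilities on the node):

    # on the worker node, as root
    mkdir -p /mnt/nfs-test
    mount -t nfs 172.23.240.85:/servers/scratch50g/vishalg/kube /mnt/nfs-test
    touch /mnt/nfs-test/write-test   # check write permission on the export
    umount /mnt/nfs-test

If the mount or the write fails here, the kubelet will fail to mount the volume for the pod in the same way.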

AYA