
Description: Unable to bind a new PVC to an existing PV that already contains data from a previous run (the PV was dynamically provisioned using the GlusterFS storage class).

  • Installed a Helm release, which created a PVC and dynamically provisioned a PV from the GlusterFS storage class.
  • For various reasons we need to bring down the release (helm del) and re-install it (helm install), but we want to reuse the existing PV instead of creating a new one.

I tried a few things. First, I followed the instructions here: https://github.com/kubernetes/kubernetes/issues/48609. However, that did not work with the GlusterFS storage solution; after I performed the steps, the pod events showed:

  Type     Reason            Age                From                              Message
  ----     ------            ----               ----                              -------
  Warning  FailedScheduling  <unknown>          default-scheduler                 error while running "VolumeBinding" filter plugin for pod "opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>          default-scheduler                 error while running "VolumeBinding" filter plugin for pod "opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         <unknown>          default-scheduler                 Successfully assigned connectus/opensync-wifi-controller-opensync-mqtt-broker-fbbd69676-bmqqm to rahulk8node1-virtualbox
  Warning  FailedMount       31s (x7 over 62s)  kubelet, rahulk8node1-virtualbox  MountVolume.NewMounter initialization failed for volume "pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114" : endpoints "glusterfs-dynamic-dc52b290-ae86-4cb3-aad0-f2c806a23114" not found
  Warning  FailedMount       30s (x7 over 62s)  kubelet, rahulk8node1-virtualbox  MountVolume.NewMounter initialization failed for volume "pvc-735baedf-323b-47bc-9383-952e6bc5ce3e" : endpoints "glusterfs-dynamic-735baedf-323b-47bc-9383-952e6bc5ce3e" not found
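For reference, the steps from that issue boil down to roughly the following sketch (the PV name is taken from the events above; the new PVC's name, access mode and size are placeholders and have to be compatible with the PV):

# 1. Make sure the PV survives deletion of its claim
kubectl patch pv pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2. After helm del has removed the old PVC, drop the stale claimRef so the
#    PV goes back to Available
kubectl patch pv pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114 \
  -p '{"spec":{"claimRef":null}}'

# 3. Pre-bind a new PVC to that PV by name
kubectl apply -n connectus -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mqtt-data            # placeholder name
spec:
  storageClassName: glusterfs-storage
  volumeName: pvc-dc52b290-ae86-4cb3-aad0-f2c806a23114
  accessModes:
    - ReadWriteOnce          # must match the PV's access modes
  resources:
    requests:
      storage: 1Gi           # must not exceed the PV's capacity
EOF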

Apparently, besides the PV, we would also need to persist the glusterfs-dynamic endpoints and service. However, these are created in the pod's namespace, and since the namespace is removed as part of helm del, the endpoints and service are deleted along with it.
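For context, the endpoints and service that the provisioner generates look roughly like the sketch below, so recreating them by hand in the new namespace is conceivable (the gluster node IPs are placeholders, and the glusterfs-dynamic-* name has to match what the PV references):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-dynamic-dc52b290-ae86-4cb3-aad0-f2c806a23114
  namespace: connectus
subsets:
  - addresses:
      - ip: 192.168.0.11   # gluster node IPs (placeholders)
      - ip: 192.168.0.12
    ports:
      - port: 1
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-dynamic-dc52b290-ae86-4cb3-aad0-f2c806a23114
  namespace: connectus
spec:
  ports:
    - port: 1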

I looked at other pages about the missing GlusterFS endpoints, e.g. https://github.com/openshift/origin/issues/6331, but that does not apply to the current version of the storage class. When I added endpoint: "heketi-storage-endpoints" to the storage class parameters, I got the following error when creating the PVC:

Failed to provision volume with StorageClass "glusterfs-storage": invalid option "endpoint" for volume plugin kubernetes.io/glusterfs

This option was removed in 2016 - see https://github.com/gluster/gluster-kubernetes/issues/87.
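For completeness, this is roughly what that storage class attempt looked like (the resturl is a placeholder for my heketi endpoint):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # placeholder
  endpoint: "heketi-storage-endpoints"        # rejected as an invalid option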

Is there any way to use an existing PV from a new PVC?

1 Answer


I would like to suggest a different approach.

You can use this annotation on the PVC; it will prevent Helm from deleting the resource when the release is deleted.

helm.sh/resource-policy: "keep"

Here is an example:

{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "bitcoind.fullname" . }}
  annotations:
    "helm.sh/resource-policy": keep
  labels:
    app: {{ template "bitcoind.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  accessModes:
    - {{ .Values.persistence.accessMode | quote }}
  resources:
    requests:
      storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
  storageClassName: ""
{{- else }}
  storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }} 

You can also use parameters, as seen here, where an option was implemented that you can flag (either true or false) while installing your Helm chart.

persistence.annotations."helm.sh/resource-policy"
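Assuming the chart passes .Values.persistence.annotations through to the PVC metadata, you could set that key at install time like this (note the escaped dot inside the key):

helm install --namespace myapp \
  --set persistence.annotations."helm\.sh/resource-policy"=keep \
  stable/myapp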

You can also include a configurable parameter to set the name of the PVC you want to reuse, as seen here.

In this example you can set persistence.existingClaim=mysql-pvc during your chart install.
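Inside the chart, such a value is typically consumed along these lines (a sketch; the myapp.fullname helper and the data volume name are illustrative):

# deployment.yaml, pod spec excerpt: use the existing claim if one is given,
# otherwise fall back to the PVC the chart creates itself
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: {{ .Values.persistence.existingClaim | default (include "myapp.fullname" .) }}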

Mixing everything together, your helm install should look something like this:

helm install --namespace myapp --set persistence.existingClaim=mysql-pvc stable/myapp
Mark Watney
  • Thanks for the reply @mWatney. From the 3 examples you provided, I tried the annotation in my deployment and here is the result: the annotation `helm.sh/resource-policy: "keep"` did help to skip deleting the PVC (and its dependents, the GlusterFS-created endpoints and services), provided the namespace is NOT deleted. However, if the namespace gets removed, these are removed too, irrespective of the annotation value. – Rahul Sharma Apr 14 '20 at 17:45
  • I added the annotation `helm.sh/resource-policy: "keep"` to the namespace as well, and now we have the namespace with the existing PVC, endpoints and services ready to be used by new helm install deployments. I used `.Values.persistence.existingClaimData` in my override file instead of `--set existingClaim=mysql-pvc` as you specified. – Rahul Sharma Apr 14 '20 at 17:45
  • This is for sure one solution - I would say a work-around, since now my PVCs always persist even when I don't want them to. And for any deletion of the helm chart, I need to manually delete the PVC, Gluster endpoints, Gluster service and my namespace if I need to clean up my system. – Rahul Sharma Apr 14 '20 at 17:46
  • I was thinking somebody from the Gluster/K8s community could chime in with a solution similar to https://github.com/kubernetes/kubernetes/issues/48609 (implemented for GCE), wherein we don't need the PVC or any of the Gluster-specific resources to exist after `helm del`, so we can also delete the namespace. During the next helm install, we would just provide the PV that the PVC should bind to. When I tried this, the PVC was able to bind to the PV, but Kubernetes complained that the gluster-dynamic endpoints/services were not found. – Rahul Sharma Apr 14 '20 at 17:46
  • So if we had some way to create those (gluster-dynamic-endpoints and gluster-dynamic-services) when the PVC is being re-created, that would be ideal for this use case. What do you think? Sorry, the comments don't allow a big message, hence multiple messages. – Rahul Sharma Apr 14 '20 at 17:46
  • Your idea would possibly be the best-case scenario, but even if you raise an issue or suggest it, it would take a lot of time for them to implement. My suggestion is a work-around that lets you work with what we have now in the best way possible. I can't think of any other solution right now. – Mark Watney Apr 15 '20 at 08:51
  • I agree @mwatney. For now, I have used your suggested workaround for our case. To track this, I have also raised a ticket with Kubernetes: https://github.com/kubernetes/kubernetes/issues/90176. – Rahul Sharma Apr 15 '20 at 15:58
  • I stumbled upon another workaround. If we convert our deployment (the one using the PVC) into a StatefulSet and use a volumeClaimTemplate to create the PVC, it behaves identically to using the `helm.sh/resource-policy: "keep"` annotation on the PVC. Once the StatefulSet is deleted during `helm del`, the PVC and other Gluster resources remain. On reinstalling the chart, it binds to the existing claim. The other advantage is - of course - scaling. What do you think? – Rahul Sharma Apr 16 '20 at 19:50
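For reference, the volumeClaimTemplates shape described in the last comment looks roughly like this (a sketch; the names, image and size are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mqtt-broker                      # placeholder name
spec:
  serviceName: mqtt-broker
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
        - name: broker
          image: eclipse-mosquitto:1.6   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /mosquitto/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: glusterfs-storage
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi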