
Is it possible to rename a PVC? I can't seem to find any evidence that it is possible.


I'm trying to mitigate a "No space left on device" issue I just stumbled upon. Essentially, my plan requires me to resize the volume on which my service persists its data.

Unfortunately I'm still on Kubernetes 1.8.6 on GKE, which does not have the PersistentVolumeClaimResize admission plugin enabled.

Therefore I have to try and save the data manually. I made the following plan:

  1. create a new PVC for a bigger volume,
  2. create a temp container with both the "victim" PVC and the new, bigger PVC attached,
  3. copy the data (see the pod sketch below),
  4. drop the "victim" PVC,
  5. rename the new, bigger PVC to take the place of the "victim".
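
For steps 2 and 3, a throwaway pod that mounts both claims can do the copying. Here is a minimal sketch, assuming the old claim is named victim and the new one bigger; the pod name, image, and mount paths are placeholders I made up:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-copy                  # hypothetical helper pod
spec:
  restartPolicy: Never
  containers:
  - name: copy
    image: busybox
    # copy everything (including dotfiles) from the old volume to the new one
    command: ["sh", "-c", "cp -a /old/. /new/"]
    volumeMounts:
    - name: old-data
      mountPath: /old
    - name: new-data
      mountPath: /new
  volumes:
  - name: old-data
    persistentVolumeClaim:
      claimName: victim           # the existing, full PVC
  - name: new-data
    persistentVolumeClaim:
      claimName: bigger           # the new, larger PVC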

The PVC in question is attached to a StatefulSet, so the old and new names must match (the StatefulSet follows its volume naming convention).

But I don't understand how to rename persistent volume claims.

oldhomemovie
  • I just realized I messed up. I don't need to rename anything. After I copy the data, I'll then just drop & re-create victim. – oldhomemovie Jan 23 '18 at 15:12

3 Answers


The answer to your question is no: there is no way to rename (change the metadata.name of) an existing object in Kubernetes.

But there is a way to fulfill your requirement.

You want your new, bigger PersistentVolume to be claimed by the old PersistentVolumeClaim name.

Let's say the old PVC is named victim and the new PVC is named bigger. You want the PV created for bigger to be claimed by a PVC named victim, because your application is already using the victim PVC.

Follow these steps to do the hack.

Step 1: Delete your old PVC victim.
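
Assuming the data has already been copied off it and the claim lives in the default namespace, this is simply:

$ kubectl delete pvc victim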

Step 2: Make the PV of bigger Available.

$ kubectl get pvc bigger
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
bigger    Bound     pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            standard       30s

Edit PV pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 and set persistentVolumeReclaimPolicy to Retain, so that deleting the PVC will not delete the PV.
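
If you would rather script this than use kubectl edit, a merge patch along these lines should have the same effect:

$ kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'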

Now, delete PVC bigger.

$ kubectl delete pvc bigger

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM            STORAGECLASS   REASON    AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            Retain           Released   default/bigger   standard                 3m

See the status: the PV is now Released.

Now, make this PV available to be claimed by another PVC, our victim.

Edit the PV again to remove claimRef:

$ kubectl edit pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
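
A non-interactive alternative is a JSON patch that removes the claimRef:

$ kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'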

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            Retain           Available             standard                 6m

Now the status of the PV is Available.

Step 3: Claim the bigger PV with a victim PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: victim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
  resources:
    requests:
      storage: 10Gi

Note the volumeName pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6, which binds the claim to the released PV.
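
Apply the manifest (assuming it was saved as victim-pvc.yaml; the filename is arbitrary):

$ kubectl apply -f victim-pvc.yaml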

$ kubectl get pvc,pv
NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/victim   Bound     pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            standard       9s

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM            STORAGECLASS   REASON    AGE
pv/pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            Retain           Bound     default/victim   standard                 9m

Finally: Set persistentVolumeReclaimPolicy to Delete
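
For example, mirroring the earlier patch (kubectl edit works just as well):

$ kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'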

This is how your PVC victim ends up with the bigger PV.

Shahriar
  • Hi, sorry for a late response. I did not have a chance to try out your solution yet, but just by looking at it, it seems legit! I didn't know you could pull off a trick like that with `PV`. Thank you! – oldhomemovie Feb 10 '18 at 09:57
  • Hi @Mir I followed these steps and I can confirm they worked perfectly. – diegoubi Jul 02 '20 at 14:52
  • Do you know if it is possible to use `--cascade=false` when deleting the PVC to prevent it from deleting the PV? I know this works when deleting a StatefulSet to not delete the Pods but not sure if it applies to PVCs also. That would make the process easier. – diegoubi Jul 02 '20 at 20:52
  • These two patch command examples can be used if scripting instead of `kubectl edit`. Patch persistentVolumeReclaimPolicy: `kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'`. Patch to remove spec.claimRef: `kubectl patch pv --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'` – bczoma Jul 20 '21 at 16:05

With Kubernetes 1.11+ you can perform on-demand resizing by simply modifying the PVC's storage request (https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/).

GKE supports this (I've used it several times myself) and it's pretty straightforward, without any drama.
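
The resize itself is just a change to the claim's spec.resources.requests.storage, provided the StorageClass has allowVolumeExpansion: true. For example, with a placeholder claim name and size:

$ kubectl patch pvc my-claim -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'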

yomateo

I cannot validate this, however I am fairly certain that for GKE you can go to Disks in the Google Cloud Console, find the one that the PV uses, and resize it there. Once you've done that, you should be able to log into the node to which it is attached and run resize2fs on the device. This is dirty, but I'm fairly certain it has worked for me once in the past.

You don't have to unmount or copy to do this, which can save you if the disk is live or large.
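
For reference, the same thing from the command line would look roughly like this; the disk name, zone, and device path are placeholders that depend on your cluster, and the filesystem is assumed to be ext4:

$ gcloud compute disks resize my-pd-disk --size 20GB --zone us-central1-a
$ # then, on the node the disk is attached to:
$ sudo resize2fs /dev/sdb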

Derek Lemon
  • Hi Derek! I can resize the disk, no issue with that. What I wanted to do is resize the persistent volume claim, but that feature is not available in my version of Kubernetes. To work around the issue I decided to go with the data "copying" solution. It required that I rename one of the PVCs. But that seems to be impossible... – oldhomemovie Jan 23 '18 at 20:56
  • Oh sorry, I must have completely missed the `C` in all of that. I know that resizing Persistent Volumes has been an annoyance for me, as it's not automated in some of these clouds. – Derek Lemon Jan 25 '18 at 20:06