I have created a Kubernetes cluster using Terraform with a persistent disk (pd-ssd). I have also created a storage class and a persistent volume claim.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-claim
      labels:
        app: elasticsearch
    spec:
      storageClassName: ssd
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 30G
    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ssd
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-ssd
    reclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: elasticsearch
      labels:
        name: elasticsearch
    spec:
      type: NodePort
      ports:
        - name: elasticsearch-port1
          port: 9200
          protocol: TCP
          targetPort: 9200
        - name: elasticsearch-port2
          port: 9300
          protocol: TCP
          targetPort: 9300
      selector:
        app: elasticsearch
        tier: elasticsearch
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: elasticsearch-application
      labels:
        app: elasticsearch
    spec:
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: elasticsearch
            tier: elasticsearch
        spec:
          hostname: elasticsearch
          containers:
            - image: gcr.io/xxxxxxxxxxxx/elasticsearch:7.3.1
              name: elasticsearch
              ports:
                - containerPort: 9200
                  name: elasticport1
                - containerPort: 9300
                  name: elasticport2
              env:
                - name: discovery.type
                  value: single-node
              volumeMounts:
                - mountPath: /app/elasticsearch/gcp/
                  name: elasticsearch-pv-volume
          volumes:
            - name: elasticsearch-pv-volume
              persistentVolumeClaim:
                claimName: pvc-claim

The PVC and storage class bind perfectly, and I have set the reclaim policy to Retain, so the persistent disk should not be deleted when the Kubernetes cluster is deleted. But the disk and its data are deleted along with the cluster.

[screenshot: PVC bound successfully]

My scenario is that I need a persistent disk, and when the cluster is deleted my data should not be deleted either; the disk should remain as it is. Is there any feasible solution for this scenario?

  • First, cloud or on-prem? Second, why do you want to delete the cluster? – Crou Aug 30 '19 at 11:49
  • @Crou The cluster is on cloud [GCP], and we are deleting it to reduce cost. – klee Aug 30 '19 at 11:52
  • A workaround would be, instead of using GKE, to set up your own cluster on VMs that you can stop while they are not being used. As for storage, I see two options: back up the data before stopping the deployment, or use NFS shares. But I have not tested that and cannot say whether the data will be readable after you remove the PVC. – Crou Aug 30 '19 at 12:01
  • Or you can try using [GCS Bucket](https://stackoverflow.com/questions/48222871/i-am-trying-to-use-gcs-bucket-as-the-volume-in-gke-pod) – Crou Aug 30 '19 at 12:05
  • Can you check whether deleting the Kubernetes labels on the ssd disk first, then deleting the PVC, then the PV, and only then deleting the cluster retains the data on the disk for you? – Tummala Dhanvi Aug 31 '19 at 23:12

1 Answer


I created a Kubernetes cluster using kOps on AWS. When I deleted my cluster I faced the same issue as you: the EBS volume that I used for my database got deleted. Luckily, I had a snapshot to create a new volume from.

Solution: remove the tags of the volume from the AWS UI, and then delete your Kubernetes cluster. The volume will then not get removed. I hope something similar is possible in GCP too (for example, removing the cluster-owned labels from the disk, as suggested in the comments).
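Another option, if removing the labels does not work out on GCP, is to avoid dynamic provisioning altogether: create the pd-ssd disk outside the cluster (with Terraform or gcloud, as you already do for the cluster itself) and bind it with a statically provisioned PersistentVolume, so the disk is never owned by the cluster's provisioner. Below is a rough sketch only; the PV name `elasticsearch-pv` and the disk name `elasticsearch-data` are placeholders I made up, not values from your setup, and the disk is assumed to already exist in the cluster's zone.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: elasticsearch-pv
    spec:
      storageClassName: ssd
      capacity:
        storage: 30G
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      gcePersistentDisk:
        pdName: elasticsearch-data   # pre-created disk, managed outside the cluster (placeholder name)
        fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-claim
      labels:
        app: elasticsearch
    spec:
      storageClassName: ssd
      volumeName: elasticsearch-pv   # bind explicitly to the pre-created PV
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 30G

Since the disk is created and managed outside Kubernetes, deleting the cluster should leave it and its data untouched, and a new cluster can reattach it with the same PV definition.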

For more details, have a look at this video and this post.