
I have a Kubernetes cluster running on GKE with a MinIO instance on it, installed using the Bitnami MinIO chart. Currently, MinIO is operating in standalone mode, as a Deployment with one pod.

The problem I'm facing is that every time I want to upgrade the MinIO resources, I suffer downtime until the pod gets redeployed with the new configuration.

I thought about changing MinIO to distributed mode, meaning it would be deployed as a StatefulSet with updateStrategy: RollingUpdate and podManagementPolicy: OrderedReady. For now that solves the problem, but I'm losing all the data that was stored on the PV, since the StatefulSet cannot use the PV that the Deployment used. I'm trying to find a way to migrate all the current data from the Deployment to the StatefulSet.

Thanks for helping!

1 Answer


You can use a pre-existing volume with a StatefulSet just as you would with a Deployment; however, you won't get the magic of automatic PVC provisioning that volumeClaimTemplates give you.

If you have an existing PVC, you can attach it to a single replica of the StatefulSet. As soon as you scale the StatefulSet beyond one replica, though, it will cause problems, since a ReadWriteOnce PVC cannot be shared across pods on different nodes.

In that case, migrating the data would be a good option.

Just for reference, an example StatefulSet that mounts an existing PVC:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  selector:
    matchLabels:
      app: redis
  updateStrategy:
    type: RollingUpdate
  # A single replica, because the pre-existing PVC below is
  # ReadWriteOnce and cannot be shared across replicas.
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
          - containerPort: 6379
        volumeMounts:
          - name: redis-data
            mountPath: /data
      # A plain PVC reference instead of volumeClaimTemplates:
      # every replica would mount this same claim.
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-data-pvc

Option 1:

A PVC is ultimately backed by a disk (unless you are using NFS or similar), so you can run a single pod with the gcloud CLI installed, mount the PVC into it, upload all the data to a bucket, and restore it to the new volumes. This is not a scalable option: if you want to run multiple replicas of the StatefulSet, you may need to restore the data to multiple pods. A sketch of such a pod follows below.
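Just as a sketch of that approach (the pod name, bucket, and claimName are assumptions; the Bitnami chart's actual PVC name may differ in your release), a one-off pod with the Cloud SDK image can mount the old PVC and sync its contents to a bucket:

---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-uploader                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: uploader
      image: google/cloud-sdk:slim   # includes gcloud and gsutil
      # Sync everything on the mounted PVC to a bucket; assumes the
      # pod has credentials (e.g. Workload Identity) that can write
      # to gs://MY_BACKUP_BUCKET.
      command: ["sh", "-c", "gsutil -m rsync -r /data gs://MY_BACKUP_BUCKET"]
      volumeMounts:
        - name: minio-data
          mountPath: /data
  volumes:
    - name: minio-data
      persistentVolumeClaim:
        claimName: minio-pvc         # the Deployment's existing PVC (assumption)

Restoring is the same pod with the rsync source and destination swapped, run once against each new replica's PVC.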

Option 2:

If your PVs are backed by disks, you can clone the disk once per replica you are looking for. The idea is to follow Using pre-existing persistent disks as PersistentVolumes, creating the PVs/PVCs with the -0, -1, ... ordinal suffix the StatefulSet expects, so each replica adopts its own pre-created claim; see the sketch after the reference link below.

volumeClaimTemplates:
  - metadata:
      # Pre-created PVCs must be named
      # PVC_TEMPLATE_NAME-STATEFULSET_NAME-0, -1, ... so the
      # StatefulSet adopts them instead of provisioning new ones.
      name: PVC_TEMPLATE_NAME
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi

Ref : https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd#pv_to_statefulset
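As a rough sketch under assumptions (project, zone, and disk names are placeholders, and the claim name follows the redis example above): first clone the source disk once per ordinal, then register each clone as a PersistentVolume whose claimRef points at the ordinal-named PVC so the StatefulSet adopts it:

---
# Clone the existing disk once per replica, e.g. (hypothetical names):
#   gcloud compute disks create cloned-disk-0 --source-disk=SOURCE_DISK --zone=ZONE
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv-0
spec:
  storageClassName: "standard-rwo"
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    # Must match PVC_TEMPLATE_NAME-STATEFULSET_NAME-ORDINAL,
    # e.g. a template named redis-data on a StatefulSet named redis:
    name: redis-data-redis-0
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/PROJECT_ID/zones/ZONE/disks/cloned-disk-0
    fsType: ext4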

Option 3:

You might prefer to use a tool instead of Option 2's manual approach; in that case, you can check out Velero.

The reference below is not an exact match for this scenario, but you can use it as a guide to restore the volumes under different names and attach them back to the StatefulSet replicas; a minimal declarative sketch follows after the link.

Ref : https://gist.github.com/deefdragon/d58a4210622ff64088bd62a5d8a4e8cc
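As a hedged sketch (names and namespaces are assumptions, and defaultVolumesToFsBackup requires Velero's file-system backup to be deployed; older releases call this defaultVolumesToRestic), the backup and restore can be declared as Velero custom resources:

---
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: minio-backup               # hypothetical name
  namespace: velero
spec:
  includedNamespaces:
    - default                      # namespace where MinIO runs (assumption)
  defaultVolumesToFsBackup: true   # include PVC contents in the backup
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: minio-restore
  namespace: velero
spec:
  backupName: minio-backup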

Harsh Manvar