You can use volumes with a StatefulSet the same way you would in a Deployment; however, you won't get the magic of automatic per-replica PVC provisioning. If you have an existing PVC, you can attach it to a single replica of the StatefulSet, but as soon as you scale the StatefulSet up it will cause issues (a ReadWriteOnce volume can only be mounted by one node at a time). In that case, migrating the data is a good option.
Just for reference, an example:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  selector:
    matchLabels:
      app: redis
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /usr/share/redis
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-data-pvc
Option 1:
A PVC is, in the end, backed by a disk (unless you are using NFS or similar), so you can mount the PVC into a single pod that has the gcloud CLI installed, upload all the data to a bucket, and restore it from there. This is not a scalable option: if you want to run multiple replicas of the StatefulSet, you may need to restore the data into multiple pods separately.
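As a rough sketch of that idea, assuming your existing claim is named redis-data-pvc and you have a GCS bucket (gs://my-redis-backup here is a hypothetical name; replace it with your own), a one-off pod could copy the data out with gsutil:

```yaml
# Sketch only: mounts the existing PVC and uploads its contents to a bucket.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-backup
spec:
  restartPolicy: Never
  containers:
    - name: backup
      image: google/cloud-sdk:slim   # image that ships with gcloud/gsutil
      command: ["sh", "-c"]
      # gs://my-redis-backup is a hypothetical bucket; the pod needs
      # GCS write permissions (e.g. via Workload Identity).
      args: ["gsutil -m cp -r /mnt/data gs://my-redis-backup/"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: redis-data-pvc
```

Restoring is the same pattern in reverse: mount each replica's PVC and `gsutil cp` the data back in.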
Option 2:
If your PV is backed by a disk, you can clone the disk once per replica you are looking for. The idea is to use pre-existing persistent disks as PersistentVolumes, creating the PVs (and matching PVCs) with the ordinal suffixes -0, -1, and so on, so that each StatefulSet replica binds to its own pre-provisioned volume.
  volumeClaimTemplates:
    - metadata:
        name: PVC_TEMPLATE_NAME
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 100Gi
Ref : https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd#pv_to_statefulset
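For illustration, a PV/PVC pair for the first replica might look like the sketch below (pd-name-0 is a hypothetical pre-existing disk cloned for ordinal 0, and the PVC name must follow the PVC_TEMPLATE_NAME-STATEFULSET_NAME-ordinal pattern so the StatefulSet adopts it instead of provisioning a new volume):

```yaml
# Sketch only: binds a pre-existing GCE disk to the StatefulSet's first replica.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-pv-0
spec:
  storageClassName: standard
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: pd-name-0   # hypothetical pre-existing disk name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Must match <template name>-<statefulset name>-<ordinal>,
  # e.g. PVC_TEMPLATE_NAME-redis-0 for the StatefulSet above.
  name: PVC_TEMPLATE_NAME-redis-0
spec:
  storageClassName: standard
  volumeName: redis-data-pv-0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

Repeat the pair with the -1, -2, ... suffixes for each additional replica.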
Option 3:
If you would rather use a tool instead of the manual approach in option 2, you can check out Velero.
The reference below is not an exact match for this scenario, but you can use it as a guide to restore multiple volumes under different names and attach them back to the StatefulSet replicas.
Ref : https://gist.github.com/deefdragon/d58a4210622ff64088bd62a5d8a4e8cc
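As a rough sketch of the Velero flow (the backup, restore, and namespace names here are hypothetical; check the Velero docs for the flags your version supports):

```shell
# Back up the namespace that holds the StatefulSet and its PVCs
velero backup create redis-backup --include-namespaces redis

# Restore from that backup (volumes are re-created and re-attached)
velero restore create redis-restore --from-backup redis-backup
```

Velero handles snapshotting the underlying volumes, which replaces the manual disk-cloning step from option 2.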