I’m trying to set up preview environments for my pull requests. Each environment needs its own prepopulated database.

My seed database is about 15GB.

I have a process to bootstrap a MySQL image and copy the /var/lib/mysql contents to a PVC volume (I also have this in a tarball).

I need to find a way to make new PVCs that are populated with this data. As I see it, there are a few options:

  1. Clone an existing PVC for my new deployment and use that
  2. do some backup/restore process to make a new PVC from the old
  3. Make a new PVC and populate it with a tarball

I'm struggling to get any of these to work on GKE. Has anyone managed to achieve this? I can't mount in the SQL dump file, as it simply takes too long to recreate the database from it; I need to mount the database files directly.
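For reference, my mental model of option 3 is a one-off Job that untars the seed into a freshly provisioned PVC, roughly like this (the image, tarball URL, and claim name are all placeholders):

```yaml
# Hypothetical one-off seeding Job for option 3.
# Assumes the tarball is fetchable over HTTP and a fresh PVC already exists.
apiVersion: batch/v1
kind: Job
metadata:
  name: seed-mysql-pvc
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: seed
        image: busybox
        # Stream the tarball straight into the empty volume
        command: ["sh", "-c", "wget -O - http://example.com/mysql-seed.tar.gz | tar -xz -C /var/lib/mysql"]
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mysql-var-lib-new
```

The drawback is that every preview environment pays the cost of moving 15GB into the volume, which is why cloning at the disk layer is more attractive.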

I spent some time trying to get the CSI drivers working, but I couldn't find a reasonable how-to guide.

Matt Gill
  • I personally would choose option 1, it's the cleanest and easiest. If your persistent volume claim points to a persistent volume which is a `GCEPersistentDisk` then you can use `gcloud compute disks snapshot` & `gcloud compute disks create` to clone the GCE disk. Cloning basically depends on what you mount in your `PV`, i.e. NFS, iSCSI, CephFS, and the likes. What `PV` are you using? – yvesonline Feb 21 '20 at 08:26
  • @yvesonline, yeah it's a `GCEPersistentDisk`, nothing fancy about the PV, it gets created automatically from the PVC:
    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-var-lib
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 30Gi
    ```
    – Matt Gill Feb 21 '20 at 08:35
  • I'll have a go at using this: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd – Matt Gill Feb 21 '20 at 08:37
  • Then I'd definitely recommend: 1) Snapshot existing disk 2) Create new disk from snapshot 3) Optional: mount somewhere temporarily to do changes you might need to do and 4) Rewrite your PV to use the new disk. Easy ;-) – yvesonline Feb 21 '20 at 08:38

1 Answer


Using the advice from @yvesonline I was able to achieve option 1 above.

1. After my original volume has been populated, take a snapshot:

```shell
gcloud compute disks snapshot [PD-name] --zone=[zone] \
    --snapshot-names=mysql-seed-snapshot-21022020 \
    --description="Snapshot of the /var/lib/mysql folder"
```

2. Create a new disk from the snapshot:

```shell
gcloud compute disks create pvc-example-1 \
    --source-snapshot=mysql-seed-snapshot-21022020 \
    --zone=europe-west2-a
```
3. In the cluster, create a new PV and PVC:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: ""
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: pvc-example-1
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-demo
spec:
  # It's necessary to specify "" as the storageClassName
  # so that the default storage class won't be used, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: ""
  volumeName: pv-demo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```
4. Then launch a new deployment using the PVC created above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        # Use a secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: root
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: task-pv-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: pv-claim-demo
```
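Since each pull request needs its own copy, the PV/PVC manifests can be stamped out per PR before applying. A minimal sketch; the template file, the `__NAME__`/`__DISK__` placeholder tokens, and the naming scheme are all assumptions:

```shell
# Hypothetical per-PR templating: the heredoc stands in for the PV manifest
# above, reduced to the fields that vary per preview environment.
cat > pv-pvc-template.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: __NAME__
spec:
  gcePersistentDisk:
    pdName: __DISK__
    fsType: ext4
EOF

PR=123  # pull request number, normally passed in by CI
sed -e "s/__NAME__/pv-preview-pr-${PR}/g" \
    -e "s/__DISK__/pvc-example-${PR}/g" \
    pv-pvc-template.yaml > "pv-preview-pr-${PR}.yaml"
```

The generated file can then be applied with `kubectl apply -f`; the disk name substituted in must match one created with `gcloud compute disks create` as in step 2.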

Once volume cloning in Kubernetes is more established on GKE this will be easier, but this solution will do in the meantime!
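For anyone reading later: once the CSI driver and the snapshot API are enabled on the cluster, the same flow can stay entirely inside Kubernetes. A rough, untested sketch; the snapshot class, storage class, and names here are placeholders and depend on your cluster version:

```yaml
# Snapshot the seed PVC via the CSI snapshot API...
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-seed-snapshot
spec:
  volumeSnapshotClassName: my-snapshot-class
  source:
    persistentVolumeClaimName: mysql-var-lib
---
# ...then create each preview PVC directly from the snapshot.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-preview-pr-123
spec:
  storageClassName: standard-rwo
  dataSource:
    name: mysql-seed-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```

This removes the manual `gcloud` steps and the hand-written PV, since the provisioner creates the disk from the snapshot for you.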
