
I want to use Velero to back up an application to a minio bucket. Here's some context: I have 2 AKS clusters [dev, tools].

The tools cluster runs my minio instance and dev is the cluster for my workloads.

I followed some examples on the internet on how to install Velero using Helm and how to configure it to back up workloads to minio.

Right now, I can do a backup of an application with its PersistentVolume, but when I do a restore there's no data in the volume. I will go into detail below, and I appreciate any advice or help from the community to resolve this issue.

Here are the steps I followed:

  1. Installing Velero:
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm upgrade --install $RELEASE_NAME vmware-tanzu/velero \
  --namespace $NAMESPACE --create-namespace -f $VALUES_FILE
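
To confirm the install went through, a quick sanity check like this works:

kubectl get pods -n velero   # the velero server pod should be Running
velero version               # prints both the client and server versions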
  2. Here's an extract of the Helm values.yaml file I use, with the most important bits:
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.6.1
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

configuration:
  plugins:
    aws:
      image: velero/velero-plugin-for-aws:v1.2.0
    minio:
      image: velero/velero-plugin-for-minio:v1.2.0

  provider: aws
  namespace: velero
  features:

  backupStorageLocation:
    name: default
    bucket: dev-velero-backup
    config:
      region: minio
      s3ForcePathStyle: "true"
      publicUrl: http://dev.api
      s3Url: "minio.tenant"
      insecureSkipTLSVerify: true

  volumeSnapshotLocation:
    name: default
    provider: aws
    config:
      region: minio

# specify the credentials for minio.
credentials:
  useSecret: true
  existingSecret: ""
  secretContents:
    cloud: |
      [default]
      aws_access_key_id = minio
      aws_secret_access_key = minio
    s3: ""

backupsEnabled: true
snapshotsEnabled: true
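
One way to confirm the backup storage location is healthy before running any backups:

velero backup-location get   # the default location should be reported as Available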
  3. When I run the backup command I can see the objects created in the minio bucket, so there isn't an issue with the communication between Velero and minio.

  4. This is the command I use to do a backup of my nginx-example application:

velero backup create nginx-example --include-namespaces nginx-example --snapshot-volumes

The backups complete without any errors.

Here are the logs from the backup :

time="2023-03-21T13:11:28Z" level=info msg="Executing RemapCRDVersionAction" backup=velero/nginx-example cmd=/velero logSource="pkg/backup/remap_crd_version_action.go:61" pluginName=velero
time="2023-03-21T13:11:28Z" level=info msg="Exiting RemapCRDVersionAction, the cluster does not support v1beta1 CRD" backup=velero/nginx-example cmd=/velero logSource="pkg/backup/remap_crd_version_action.go:89" pluginName=velero
time="2023-03-21T13:11:28Z" level=info msg="Backed up a total of 24 items" backup=velero/nginx-example logSource="pkg/backup/backup.go:413" progress=
  5. The next step is simulating a DR event by deleting the nginx-example namespace and verifying that all k8s resources for the app are destroyed, including the PV.

kubectl delete ns nginx-example #Wait, and Check if pv is deleted.
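
To make that check explicit:

kubectl get ns nginx-example   # should eventually return NotFound
kubectl get pv                 # the volume bound to the nginx-logs claim should be gone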

  6. When I attempt to restore nginx-example from the Velero backup by running this command:

velero restore create --from-backup nginx-example --include-namespaces nginx-example --restore-volumes

I can see in the restore logs the following messages :

velero restore logs nginx-example-20230321141504
time="2023-03-21T13:15:06Z" level=info msg="Waiting for all post-restore-exec hooks to complete" logSource="pkg/restore/restore.go:596" restore=velero/nginx-example-20230321141504
time="2023-03-21T13:15:06Z" level=info msg="Done waiting for all post-restore exec hooks to complete" logSource="pkg/restore/restore.go:604" restore=velero/nginx-example-20230321141504
time="2023-03-21T13:15:06Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:545" restore=velero/nginx-example-20230321141504
  7. When I check whether the nginx access logs still contain the data of previous visits, the log file is empty:

kubectl exec -it nginx-deploy-bf489bc5-8jrtz -- cat /var/log/nginx/access.log

The nginx-example application mounts the PV at /var/log/nginx:

    spec:
      volumes:
        - name: nginx-logs
          persistentVolumeClaim:
            claimName: nginx-logs
      containers:
      - image: nginx:stable
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: "/var/log/nginx"
            name: nginx-logs
            readOnly: false
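
The deployment references a PVC named nginx-logs; roughly, the claim looks like the sketch below (the access mode and size here are assumptions, the actual nginx-example manifest may differ):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-logs
  namespace: nginx-example
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi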

The end goal is a successful backup and restore of the nginx-example application with its persistent volume that contains the access log data.

I'd be really happy if this issue could be resolved with your help, and of course I will provide any additional information that's needed.

Additional Information

  1. VolumeSnapshotLocation
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  annotations:
    helm.sh/hook: post-install,post-upgrade,post-rollback
    helm.sh/hook-delete-policy: before-hook-creation
  creationTimestamp: "2023-03-21T01:26:37Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: velero-ontwikkel
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: velero
    helm.sh/chart: velero-3.1.4
  name: default
  namespace: velero
  resourceVersion: "83378185"
  uid: cea663dd-c1d9-4035-8c84-79a240f4351c
spec:
  config:
    region: minio
  provider: aws
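
The same object can also be listed with either of:

velero snapshot-location get
kubectl get volumesnapshotlocation -n velero -o yaml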
  2. Installed velero plugins
NAME                                        KIND
velero.io/crd-remap-version                 BackupItemAction
velero.io/pod                               BackupItemAction
velero.io/pv                                BackupItemAction
velero.io/service-account                   BackupItemAction
velero.io/aws                               ObjectStore
velero.io/add-pv-from-pvc                   RestoreItemAction
velero.io/add-pvc-from-pod                  RestoreItemAction
velero.io/admission-webhook-configuration   RestoreItemAction
velero.io/apiservice                        RestoreItemAction
velero.io/change-pvc-node-selector          RestoreItemAction
velero.io/change-storage-class              RestoreItemAction
velero.io/cluster-role-bindings             RestoreItemAction
velero.io/crd-preserve-fields               RestoreItemAction
velero.io/init-restore-hook                 RestoreItemAction
velero.io/job                               RestoreItemAction
velero.io/pod                               RestoreItemAction
velero.io/pod-volume-restore                RestoreItemAction
velero.io/role-bindings                     RestoreItemAction
velero.io/service                           RestoreItemAction
velero.io/service-account                   RestoreItemAction
velero.io/aws                               VolumeSnapshotter
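
A list like this can be printed with:

velero plugin get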

Chesneycar

1 Answer


You may need to enable the node agent to do volume backups: add deployNodeAgent: true in the Helm values.yaml, and when doing backups use the option --default-volumes-to-fs-backup (like older versions used the option --default-volumes-to-restic).

velero backup create backup-test --include-namespaces nginx-example --default-volumes-to-fs-backup --snapshot-volumes --ttl 180h

After the backup is done you can describe it with --details; if you see something like the output below, you'll know the volume backup succeeded: velero backup describe backup-test --details

kopia Backups:
  Completed:
    nginx-example/nginx-deployment-5b47dbff44-cw9l4: nginx-logs
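
For reference, the values.yaml change mentioned above might look like this (a minimal sketch; exact key placement depends on the chart version):

# deploy the node-agent DaemonSet so Velero can run file-system (kopia/restic) backups
deployNodeAgent: true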
SikiShen
  • Thanks @SikiShen, I was missing that `--default-volumes-to-fs-backup` flag. I was able to successfully back up my resources in the namespace with the persistent volume. I could verify that the data was there after destroying the resources and doing a restore from my backup. – Chesneycar Mar 22 '23 at 11:40
  • Slightly off-topic, but how would I go about doing an incremental backup? I already made a backup of the namespace, and when I perform another backup I get the message that another backup already exists. @SikiShen – Chesneycar Mar 22 '23 at 13:42
  • @Chesneycar, it should work with scheduled backups, and kopia actually does incremental backups as long as at least one non-expired backup exists. – SikiShen Mar 22 '23 at 14:05
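    A scheduled backup along those lines might look like the sketch below (the schedule name and cron expression are just illustrative):
    velero schedule create nginx-daily --schedule="0 1 * * *" --include-namespaces nginx-example --default-volumes-to-fs-backup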
  • @Chesneycar I actually asked them the same question, lol: https://github.com/vmware-tanzu/velero/discussions/6000. They aren't very clear about incremental backups in their docs. – SikiShen Mar 22 '23 at 14:17
  • I saw in the Helm chart values file that you can set `defaultVolumesToFsBackup` to true, so you don't have to specify the `--default-volumes-to-fs-backup` flag. – Chesneycar Mar 22 '23 at 20:30
  • One last update: even when I set the value `defaultVolumesToFsBackup: true` in the Helm chart values.yaml file, it does not create a backup of the persistent volume. I still had to specify the `--default-volumes-to-fs-backup` flag in my velero backup command. – Chesneycar Mar 23 '23 at 10:43