
I am running Velero on a k3s cluster with two backup storage locations: a local MinIO server and an AWS S3 bucket.

I installed Velero with the Helm chart, once per storage location:

helm install vmware-tanzu/velero --namespace velero-minio -f helm-custom-values-minio.yaml --generate-name --create-namespace

and

helm install vmware-tanzu/velero --namespace velero-aws -f helm-custom-values-aws.yaml --generate-name --create-namespace
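For reference, the chart comes from the vmware-tanzu Helm repository, which has to be added first if it isn't already (repository URL as documented for the vmware-tanzu charts; adjust if yours differs):

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update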

Custom helm values:

helm-custom-values-minio.yaml

configuration:
  provider: aws
  backupStorageLocation:
    bucket: k3s-backup
    name: minio
    default: false
    config:
      region: minio
      s3ForcePathStyle: true
      s3Url: http://10.10.5.15:9009
  volumeSnapshotLocation:
    name: minio
    config:
      region: minio
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=minioadm
      aws_secret_access_key=<password>
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
snapshotsEnabled: true
deployRestic: true

and helm-custom-values-aws.yaml

configuration:
  provider: aws
  backupStorageLocation:
    name: aws-s3
    bucket: k3s-backup-aws
    default: false
    provider: aws
    config:
      region: us-east-1
      s3ForcePathStyle: false
  volumeSnapshotLocation:
    name: aws-s3
    provider: aws
    config:
      region: us-east-1
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=A..............MJ
      aws_secret_access_key=qZ79rA/yVUq2c................xnIA
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
snapshotsEnabled: true
deployRestic: true
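
Note that with deployRestic: true in both values files, each release runs its own restic DaemonSet on every node. A quick way to see the two sets of restic pods (diagnostic sketch):

kubectl get daemonset -n velero-minio
kubectl get daemonset -n velero-aws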

velero backup jobs:

velero create backup k3s-mongodb-restic-minio --include-namespaces mongodb --default-volumes-to-restic=true --storage-location minio -n velero-minio

velero create backup k3s-mongodb-restic-aws --include-namespaces mongodb --default-volumes-to-restic=true --storage-location aws-s3 -n velero-aws

....

They all failed:

Restic Backups:
  Failed:
    mongodb/mongodb-cluster-0: agent-scripts, data-volume, healthstatus, hooks, logs-volume, mongodb-cluster-keyfile, tmp
    mongodb/mongodb-cluster-1: agent-scripts, data-volume, healthstatus, hooks, logs-volume, mongodb-cluster-keyfile, tmp
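
More detail on the per-volume failures can be pulled from the PodVolumeBackup and ResticRepository resources in the Velero namespace (a diagnostic sketch, assuming a Velero release that still ships the restic integration, i.e. pre-1.10):

kubectl -n velero-minio get podvolumebackups.velero.io
kubectl -n velero-minio get resticrepositories.velero.io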

time="2022-10-17T17:42:32Z" level=error msg="Error backing up item" backup=velero-minio/k3s-mongodb-restic-minio error="pod volume backup failed: running Restic backup, stderr=Fatal: unable to open config file: Stat: The Access Key Id you provided does not exist in our records.\nIs there a repository at the following location?\ns3:http://10.10.5.15:9009/k3s-backup/restic/mongodb\n: exit status 1" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:199" error.function="github.com/vmware-tanzu/velero/pkg/restic.(*backupper).BackupPodVolumes" logSource="pkg/backup/backup.go:417" name=mongodb-cluster-0

...
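
Since the error is about the access key, the same credentials can be tested directly against the MinIO endpoint and the restic prefix from the error message (a sanity-check sketch, assuming the aws CLI is available; values are taken from helm-custom-values-minio.yaml):

AWS_ACCESS_KEY_ID=minioadm AWS_SECRET_ACCESS_KEY=<password> AWS_DEFAULT_REGION=minio \
  aws --endpoint-url http://10.10.5.15:9009 s3 ls s3://k3s-backup/restic/mongodb/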

velero get backup-locations -n velero-aws

NAME     PROVIDER   BUCKET/PREFIX    PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
aws-s3   aws        k3s-backup-aws   Available   2022-10-17 14:12:46 -0400 EDT   ReadWrite 

...

velero get backup-locations -n velero-minio

NAME    PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
minio   aws        k3s-backup      Available   2022-10-17 14:16:25 -0400 EDT   ReadWrite  

The Velero backup itself completes without errors, but the restic part fails for all of my jobs (mongodb is just one example). It looks like restic can't create snapshots for my NFS PVCs.

What am I doing wrong?


1 Answer


It looks like Velero doesn't handle multiple installations well; at least the restic part fails (in my case, two instances in the namespaces velero-aws and velero-minio). So I kept only one Velero instance, pointed at MinIO.
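
Removing the second release can be done roughly like this; since the releases were installed with --generate-name, the name has to be looked up first (sketch):

# Find the generated release name, then uninstall it.
helm list -n velero-aws
helm uninstall <release-name> -n velero-aws
# helm uninstall leaves the cluster-wide Velero CRDs in place,
# which the remaining MinIO install still needs.
kubectl delete namespace velero-aws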

I removed --default-volumes-to-restic=true from the backup job.
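
The backup command then looks the same as before, just without that flag (sketch):

velero create backup k3s-mongodb-restic-minio --include-namespaces mongodb --storage-location minio -n velero-minio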

I switched to opt-in pod volume backup with the restic integration. Each pod that has a PVC volume needs to be annotated, like this:

kubectl -n mongodb annotate pod/mongodb-cluster-0 backup.velero.io/backup-volumes=logs-volume,data-volume
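
The other replica needs the same annotation. Since annotations set directly on a pod are lost when the pod is recreated, the annotation can also go on the pod template instead (a sketch; the StatefulSet name mongodb-cluster is an assumption based on the pod names, the patch triggers a rolling restart, and an operator managing the StatefulSet may revert it):

kubectl -n mongodb annotate pod/mongodb-cluster-1 backup.velero.io/backup-volumes=logs-volume,data-volume

# Optional: put the annotation on the pod template so recreated pods keep it.
kubectl -n mongodb patch statefulset mongodb-cluster --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"backup.velero.io/backup-volumes":"logs-volume,data-volume"}}}}}'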

I have not tried velero-pvc-watcher; it may work well too.

Now the backups run with no errors.
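
The restic section of the backup details can confirm that the per-volume backups completed (verification sketch; namespace kept from the MinIO install above):

velero backup describe k3s-mongodb-restic-minio --details -n velero-minio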
