I want to use Velero to back up an application to a MinIO bucket. Some context: I have two AKS clusters, dev and tools.
The tools cluster runs my MinIO instance, and dev is the cluster for my workloads.
I followed some examples from the internet on how to install Velero using Helm and how to configure it to back up workloads to MinIO.
Right now I can back up an application with its PersistentVolume, but when I do a restore, there's no data in the volume. I'll go into detail below, and I'd appreciate any advice or help from the community to resolve this issue.
Here are the steps I followed:
- Installing Velero:
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm upgrade --install $RELEASE_NAME vmware-tanzu/velero \
--namespace $NAMESPACE --create-namespace -f $VALUES_FILE
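For completeness, the shell variables in that command are set roughly like this (the values below are placeholders, not my exact ones):
RELEASE_NAME=velero
NAMESPACE=velero
VALUES_FILE=./values.yaml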
- Here's an extract of the Helm values.yaml file I use, with the most important bits:
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.6.1
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
configuration:
  plugins:
    aws:
      image: velero/velero-plugin-for-aws:v1.2.0
    minio:
      image: velero/velero-plugin-for-minio:v1.2.0
  provider: aws
  backupStorageLocation:
    name: default
    bucket: dev-velero-backup
    config:
      region: minio
      s3ForcePathStyle: "true"
      publicUrl: http://dev.api
      s3Url: "minio.tenant"
      insecureSkipTLSVerify: true
  volumeSnapshotLocation:
    region: minio
    name: default
    provider: aws
    config:
      region: minio
# specify the credentials for minio.
credentials:
  useSecret: true
  existingSecret: ""
  secretContents:
    cloud: |
      [default]
      aws_access_key_id = minio
      aws_secret_access_key = minio
    s3: ""
features:
namespace: velero
backupsEnabled: true
snapshotsEnabled: true
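After the install, the storage location can be verified with the standard Velero CLI (shown for anyone reproducing this setup):
velero backup-location get -n velero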
When I run the backup command, I can see the objects created in the MinIO bucket, so there isn't an issue with the communication between Velero and MinIO.
This is the command I use to back up my nginx-example application:
velero backup create nginx-example --include-namespaces nginx-example --snapshot-volumes
The backup completes without any errors.
Here are the logs from the backup:
time="2023-03-21T13:11:28Z" level=info msg="Executing RemapCRDVersionAction" backup=velero/nginx-example cmd=/velero logSource="pkg/backup/remap_crd_version_action.go:61" pluginName=velero
time="2023-03-21T13:11:28Z" level=info msg="Exiting RemapCRDVersionAction, the cluster does not support v1beta1 CRD" backup=velero/nginx-example cmd=/velero logSource="pkg/backup/remap_crd_version_action.go:89" pluginName=velero
time="2023-03-21T13:11:28Z" level=info msg="Backed up a total of 24 items" backup=velero/nginx-example logSource="pkg/backup/backup.go:413" progress=
- The next step is simulating a DR event by deleting the nginx-example namespace and verifying that all Kubernetes resources for the app are destroyed, including the PV.
kubectl delete ns nginx-example
# Wait, then check that the PV is deleted.
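Concretely, I confirm the deletion with standard kubectl checks (the grep pattern assumes the PV is bound to the nginx-logs claim):
kubectl get ns nginx-example
kubectl get pv | grep nginx-logs   # should return nothing once the PV is gone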
- When I attempt to restore nginx-example from the Velero backup by running this command:
velero restore create --from-backup nginx-example --include-namespaces nginx-example --restore-volumes
I can see the following messages in the restore logs:
velero restore logs nginx-example-20230321141504
time="2023-03-21T13:15:06Z" level=info msg="Waiting for all post-restore-exec hooks to complete" logSource="pkg/restore/restore.go:596" restore=velero/nginx-example-20230321141504
time="2023-03-21T13:15:06Z" level=info msg="Done waiting for all post-restore exec hooks to complete" logSource="pkg/restore/restore.go:604" restore=velero/nginx-example-20230321141504
time="2023-03-21T13:15:06Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:545" restore=velero/nginx-example-20230321141504
- When I check whether the nginx access log still contains the data from previous visits, it is empty:
kubectl exec -it nginx-deploy-bf489bc5-8jrtz -- cat /var/log/nginx/access.log
The nginx-example application mounts the PV at /var/log/nginx:
spec:
  volumes:
    - name: nginx-logs
      persistentVolumeClaim:
        claimName: nginx-logs
  containers:
    - image: nginx:stable
      name: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: "/var/log/nginx"
          name: nginx-logs
          readOnly: false
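In case it's relevant to an answer: my understanding is that Velero's file-system backup of pod volumes (restic) is opt-in unless the backup is created with --default-volumes-to-fs-backup (--default-volumes-to-restic on older releases), and the deployment spec above has no such annotation. A sketch of what the opt-in would look like (the pod name is a placeholder):
kubectl -n nginx-example annotate pod <nginx-pod> backup.velero.io/backup-volumes=nginx-logs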
The end goal is a successful backup and restore of the nginx-example application, with its persistent volume containing the access log data.
I'll be really happy if this issue can be resolved with your help, and of course I will provide any additional information that's needed.
Additional Information
- VolumeSnapshotLocation
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  annotations:
    helm.sh/hook: post-install,post-upgrade,post-rollback
    helm.sh/hook-delete-policy: before-hook-creation
  creationTimestamp: "2023-03-21T01:26:37Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: velero-ontwikkel
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: velero
    helm.sh/chart: velero-3.1.4
  name: default
  namespace: velero
  resourceVersion: "83378185"
  uid: cea663dd-c1d9-4035-8c84-79a240f4351c
spec:
  config:
    region: minio
  provider: aws
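The manifest above was dumped with kubectl; the snapshot location can also be listed through the Velero CLI:
kubectl -n velero get volumesnapshotlocation default -o yaml
velero snapshot-location get -n velero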
- Installed Velero plugins
NAME KIND
velero.io/crd-remap-version BackupItemAction
velero.io/pod BackupItemAction
velero.io/pv BackupItemAction
velero.io/service-account BackupItemAction
velero.io/aws ObjectStore
velero.io/add-pv-from-pvc RestoreItemAction
velero.io/add-pvc-from-pod RestoreItemAction
velero.io/admission-webhook-configuration RestoreItemAction
velero.io/apiservice RestoreItemAction
velero.io/change-pvc-node-selector RestoreItemAction
velero.io/change-storage-class RestoreItemAction
velero.io/cluster-role-bindings RestoreItemAction
velero.io/crd-preserve-fields RestoreItemAction
velero.io/init-restore-hook RestoreItemAction
velero.io/job RestoreItemAction
velero.io/pod RestoreItemAction
velero.io/pod-volume-restore RestoreItemAction
velero.io/role-bindings RestoreItemAction
velero.io/service RestoreItemAction
velero.io/service-account RestoreItemAction
velero.io/aws VolumeSnapshotter
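For reference, the plugin list above is the output of the standard plugin listing command:
velero plugin get -n velero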