
I have set up Minio and Velero backups for my k8s cluster. Everything works fine: I can take backups and see them in Minio. I have a PGO operator cluster, hippo, running with a load balancer service. When I restore a backup via Velero, everything seems okay: it creates the namespaces and all the deployments, and the pods come up in a Running state. However, I am not able to connect to my database via pgAdmin. When I delete the stuck pod, it is recreated but stays Pending with an unbound PVC error. This is the output:

  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16m   default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
  Warning  FailedScheduling  16m   default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get PV
error: the server doesn't have a resource type "PV"
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                              STORAGECLASS       REASON   AGE
pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-4bhf-pgdata   openebs-hostpath            16m
pvc-2dd12937-a70e-40b4-b1ad-be1c9f7b39ec   5G         RWO            Delete           Bound    default/local-hostpath-pvc                         openebs-hostpath            6d9h
pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-xvhq-pgdata   openebs-hostpath            16m
pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-p4ct-pgdata      openebs-hostpath            7m32s
pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-s6fs-pgdata      openebs-hostpath            7m33s
pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-c4rt-pgdata   openebs-hostpath            16m
pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-29gh-pgdata      openebs-hostpath            7m32s
pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            Delete           Bound    postgres-operator/hippo-repo2                      openebs-hostpath            7m30s
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator
NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
hippo-instance2-29gh-pgdata      Bound     pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            openebs-hostpath                      7m51s
hippo-instance2-p4ct-pgdata      Bound     pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            openebs-hostpath                      7m51s
hippo-instance2-s6fs-pgdata      Bound     pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            openebs-hostpath                      7m51s
hippo-repo2                      Bound     pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            openebs-hostpath                      7m51s
hippo-s3-instance2-4bhf-pgdata   Bound     pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            openebs-hostpath                      16m
hippo-s3-instance2-c4rt-pgdata   Bound     pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            openebs-hostpath                      16m
hippo-s3-instance2-xvhq-pgdata   Bound     pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            openebs-hostpath                      16m
hippo-s3-repo1                   Pending                                                                        pgo                                   16m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator
NAME                           READY   STATUS      RESTARTS   AGE
hippo-backup-txk9-rrk4m        0/1     Completed   0          7m43s
hippo-instance2-29gh-0         4/4     Running     0          8m5s
hippo-instance2-p4ct-0         4/4     Running     0          8m5s
hippo-instance2-s6fs-0         4/4     Running     0          8m5s
hippo-repo-host-0              2/2     Running     0          8m5s
hippo-s3-instance2-c4rt-0      3/4     Running     0          16m
hippo-s3-repo-host-0           0/2     Pending     0          16m
pgo-7c867985c-kph6l            1/1     Running     0          16m
pgo-upgrade-69b5dfdc45-6qrs8   1/1     Running     0          16m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl delete pods hippo-s3-repo-host-0 -n postgres-operator
pod "hippo-s3-repo-host-0" deleted
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator
NAME                           READY   STATUS      RESTARTS   AGE
hippo-backup-txk9-rrk4m        0/1     Completed   0          7m57s
hippo-instance2-29gh-0         4/4     Running     0          8m19s
hippo-instance2-p4ct-0         4/4     Running     0          8m19s
hippo-instance2-s6fs-0         4/4     Running     0          8m19s
hippo-repo-host-0              2/2     Running     0          8m19s
hippo-s3-instance2-c4rt-0      3/4     Running     0          17m
hippo-s3-repo-host-0           0/2     Pending     0          2s
pgo-7c867985c-kph6l            1/1     Running     0          17m
pgo-upgrade-69b5dfdc45-6qrs8   1/1     Running     0          17m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator
NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
hippo-instance2-29gh-pgdata      Bound     pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            openebs-hostpath                      8m45s
hippo-instance2-p4ct-pgdata      Bound     pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            openebs-hostpath                      8m45s
hippo-instance2-s6fs-pgdata      Bound     pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            openebs-hostpath                      8m45s
hippo-repo2                      Bound     pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            openebs-hostpath                      8m45s
hippo-s3-instance2-4bhf-pgdata   Bound     pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            openebs-hostpath                      17m
hippo-s3-instance2-c4rt-pgdata   Bound     pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            openebs-hostpath                      17m
hippo-s3-instance2-xvhq-pgdata   Bound     pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            openebs-hostpath                      17m
hippo-s3-repo1                   Pending                                                                        pgo                                   17m
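
I noticed that hippo-s3-repo1 is the only claim stuck in Pending, and it references the pgo storage class instead of openebs-hostpath like the others. Describing it should show why it cannot bind:

# Inspect the pending claim; the events should name the missing
# storage class or provisioner
kubectl describe pvc hippo-s3-repo1 -n postgres-operator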

What Do I Want?

I want Velero to restore the full backup so that I can access my databases just as I could before the restore. It seems like Velero is not performing a full backup and restore. Any suggestions would be appreciated.

2 Answers


Velero is a backup and restore solution for Kubernetes clusters and their associated persistent volumes. Velero does not currently support full, application-consistent backup and restore of databases (refer to Velero's documented limitations), but it does support snapshotting and restoring persistent volumes. This means that, while you may not be able to restore a full database directly, you can restore the persistent volumes associated with the database and then use the appropriate database tools to restore the data from those snapshots. Additionally, Velero's plugin architecture lets you extend its capabilities with custom plugins that add backup and restore functionality.
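
As a minimal sketch (hippo-backup and hippo-restore below are placeholder names), a namespace backup that also captures volume contents via Velero's file-system backup, and the matching restore, look roughly like this. Note the flag is --default-volumes-to-restic on older releases and --default-volumes-to-fs-backup from v1.10 onwards, and it requires the node agent/restic daemonset to be installed:

# Back up the namespace, including volume contents via file-system backup
velero backup create hippo-backup \
  --include-namespaces postgres-operator \
  --default-volumes-to-fs-backup

# Restore it (into the same or another cluster)
velero restore create hippo-restore --from-backup hippo-backup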

Refer to this blog post from DigitalOcean by Hanif Jetha and Jamon Camisso for more information on backup and restore.

Hemanth Kumar

Your setup is missing the PV or PVC, based on the error you have shared.

In general, Velero can take snapshot-based backups of PVCs and PVs when you use the AWS or GCP plugins, and when you restore, it recreates the PVCs and PVs for you as well.
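
For illustration, here is a hedged install sketch against a Minio bucket (the bucket name, credentials file, and URL below are placeholders). Since openebs-hostpath volumes have no native snapshot support, volume snapshots are disabled and the file-system backup agent is enabled instead (--use-restic; Velero v1.10+ uses --use-node-agent):

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.5.0 \
  --bucket velero-backups \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --use-restic \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000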

I have migrated an Elasticsearch database with Velero, along with its PVCs, and it worked well in my case. However, are you using the same cloud provider and storage class in both clusters? Why is the PVC for hippo-s3-repo1 Pending? Did you find the reason for that? (See the sketch below for one possible fix.)
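
If the source cluster's storage class (pgo, judging by the Pending claim) does not exist in the restore cluster, Velero can remap it at restore time through a plugin-config ConfigMap. A sketch, assuming the mapping pgo -> openebs-hostpath and that Velero runs in the velero namespace:

# ConfigMap in the velero namespace holding the old->new mapping
kubectl create configmap change-storage-class-config \
  -n velero \
  --from-literal=pgo=openebs-hostpath

# Labels that tell Velero to use this as a ChangeStorageClass
# restore item action config
kubectl label configmap change-storage-class-config \
  -n velero \
  velero.io/plugin-config="" \
  velero.io/change-storage-class=RestoreItemAction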

Here is my article; note that I was using the plugin with a bucket as storage: https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8

Harsh Manvar