0

I have created a Kubernetes CronJob (in AKS) to run a database dump (the database is not located inside the Kubernetes cluster). This CronJob creates a Job each day which dumps the database into a single file and uploads this file to a remote backend (Azure Blob Storage). The dump file is currently 40GB and it uses the local disk of the node on which the pod is executed.
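
To illustrate, here is a simplified sketch of the CronJob (the name, image, schedule and dump command are placeholders; the dump is written to an `emptyDir`, i.e. the node's local disk, before the upload):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-dump                      # placeholder name
spec:
  schedule: "0 2 * * *"              # once a day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: dump
              image: my-dump-image:latest   # placeholder image containing the dump tool and azcopy
              command: ["/bin/sh", "-c"]
              args:
                - |
                  my-dump-command > /backup/dump.archive &&
                  azcopy copy /backup/dump.archive "<blob-container-sas-url>"
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              emptyDir: {}           # backed by the node's local disk
```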

Even though the dump file is ephemeral in the Kubernetes cluster, as it is deleted once the pod is deleted, I would like to know if there is a better approach than using the local node disk (to avoid the case where the local node file system becomes full).

Do you have any suggestions?

Thanks a lot!


1 Answer

0

Since you are on Azure, you could use Azure Disk or Azure File.
Using those would require you to enable the CSI drivers on the cluster.
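
For example, a generic ephemeral volume backed by the Azure Disk CSI driver gives each Job run its own disk that is created with the pod and deleted with it, so there is no PV to clean up manually. A sketch (assuming a recent Kubernetes version with the Azure Disk CSI driver enabled and its default `managed-csi` storage class; names and sizes are placeholders), shown as a standalone Pod, but the same `volumes` entry would go in the CronJob's pod template:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dump-with-ephemeral-disk     # placeholder
spec:
  restartPolicy: Never
  containers:
    - name: dump
      image: my-dump-image:latest    # placeholder
      volumeMounts:
        - name: backup
          mountPath: /backup
  volumes:
    - name: backup
      ephemeral:                     # generic ephemeral volume: PVC created and deleted with the pod
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: managed-csi   # Azure Disk CSI storage class on AKS
            resources:
              requests:
                storage: 64Gi              # headroom above the ~40GB dump
```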

However, it may be easier, and more cost effective, to just ssh (if possible) into the DB's host and use the DB host's storage to store the dump files.


If you use MSSQL, you can back up straight to Azure Blob Storage.

  • I also had this option in mind, but isn't it over-engineering to use Azure Disk or Azure File, as the PV will have to be deleted after the dump? By the way, I cannot SSH into the machine as I use MongoDB Atlas (SaaS offering). – Alexis Jun 03 '21 at 09:58
  • Yes, it is, that's why I proposed `ssh`, but since you can't do that, I have no other ideas. –  Jun 03 '21 at 10:12
  • Finally, I created a dedicated node pool with a larger host disk so I can run this kind of workload using the host file system. – Alexis Jun 04 '21 at 11:34