
I have a Flask application deployed on GKE, replicated across several pods. For performance reasons, I want the app to read local files from its repository. I need to update these files frequently with another Python script.

I could implement an endpoint to fetch the refreshed files from a server, but if I ping this endpoint only one pod will update its local files, and I need all app instances to read from the latest data files.

Do you have an idea on how to solve this issue?

Martin Becuwe

1 Answer


There are multiple solutions; the best one is to set up a shared NFS volume for those files: the Flask pods mount it read-only, while the Python script mounts it read-write and updates the files.
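As a minimal sketch of the update side, the script below writes each refreshed file to a temporary file on the shared mount and then atomically renames it into place, so the Flask pods never read a half-written file. The mount path `/mnt/shared-data` and the file name are placeholders for illustration, not names from your setup.

```python
import os
import tempfile

# Assumed mount point of the shared NFS volume (mounted read-write in the updater pod).
SHARED_DIR = "/mnt/shared-data"

def publish_file(name: str, data: bytes) -> None:
    """Write `data` to SHARED_DIR/name atomically.

    The data is first written to a temporary file in the same directory,
    then renamed over the target, so readers see either the old version
    or the new one, never a partial write.
    """
    fd, tmp_path = tempfile.mkstemp(dir=SHARED_DIR)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, os.path.join(SHARED_DIR, name))
    except Exception:
        os.unlink(tmp_path)
        raise

if __name__ == "__main__":
    # Hypothetical refresh: in practice the bytes would come from your
    # existing Python update script.
    publish_file("model.bin", b"refreshed model bytes")
```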

An NFS volume allows an existing NFS (Network File System) share to be mounted into a Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an NFS volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be shared between pods.
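To make the sharing concrete, here is a hedged sketch using the official `kubernetes` Python client to attach an existing NFS export to a pod read-only. The server address, export path, mount path, and image are placeholders; in practice you would more likely put the same volume definition in your Deployment manifest or behind a PersistentVolume/PersistentVolumeClaim, as in the referenced guide, and the updater pod would mount the same volume with `read_only=False`.

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster

# Assumed NFS server and export path; replace with your own.
nfs_volume = client.V1Volume(
    name="shared-data",
    nfs=client.V1NFSVolumeSource(
        server="nfs-server.default.svc.cluster.local",
        path="/exports/data",
        read_only=True,
    ),
)

flask_container = client.V1Container(
    name="flask-app",
    image="gcr.io/my-project/flask-app:latest",  # placeholder image
    volume_mounts=[
        client.V1VolumeMount(
            name="shared-data",
            mount_path="/mnt/shared-data",
            read_only=True,
        )
    ],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="flask-app"),
    spec=client.V1PodSpec(containers=[flask_container], volumes=[nfs_volume]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```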

See the reference link for how to create a Kubernetes NFS volume on GKE and its advantages.

Jyothi Kiranmayi
  • Thank you for your detailed answer, I will try this solution. I hope it is possible to (1) have several pods share the same NFS volume and (2) refresh the files stored in that shared NFS volume from other pods where cron jobs are running. – Martin Becuwe Jun 15 '21 at 15:12
  • Are you going to read the data files on every request which needs them or are you planning to load the files into memory? You mentioned local storage for performance reasons. – Gari Singh Jun 15 '21 at 22:11
  • Both solutions are okay; loading the files on each request is fine since most of these files are lightweight models (see the sketch below). – Martin Becuwe Jun 16 '21 at 09:48
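As a rough illustration of the "read on every request" option discussed in the comments, the Flask handler below re-reads a file from the shared mount each time it is called; the mount path, file name, and endpoint are hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed read-only mount point of the shared NFS volume in the Flask pods.
SHARED_DIR = "/mnt/shared-data"

@app.route("/predict")
def predict():
    # Re-read the model file on every request so each pod always sees the
    # latest version published by the updater script.
    with open(f"{SHARED_DIR}/model.bin", "rb") as f:
        model_bytes = f.read()
    return jsonify(size=len(model_bytes))
```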