
I'm playing around with jobs/cronjobs in OpenShift 3 Online. I have a pod (podA) that runs my application, with persistent storage attached. I want to launch a job/cronjob that writes e.g. a logfile to the persistent storage or to podA's file system. Whenever I launch a job, it creates a new pod (podB), but podB seems to have no permission to create anything on podA's file system; I always get a permission denied message. I launch the job like this:

oc run crontest --schedule="* * * * *" --image=docker-registry.default.svc:5000/myproject/django:latest --restart=OnFailure --labels parent="crontest" -- /opt/app-root/src/scripts/testScript.sh

where testScript.sh tries to write files. What would be the way to create files on podA's file system/persistent storage from a job/cronjob?
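For reference, `oc run` alone does not attach any storage to the job's pod; the CronJob has to mount the same persistent volume claim explicitly. A minimal sketch of such a manifest follows; the claim name (`django-data`), mount path, and `apiVersion` (which varies by cluster version) are assumptions, not something from the question:

```yaml
# Hypothetical CronJob that mounts the application's PVC.
# All names here are placeholders -- substitute your own.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: crontest
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: crontest
            image: docker-registry.default.svc:5000/myproject/django:latest
            command: ["/opt/app-root/src/scripts/testScript.sh"]
            volumeMounts:
            - name: data
              mountPath: /opt/app-root/src/data
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: django-data
```

Note that, as pointed out in the comments, ReadWriteOnce storage can only be mounted by one pod at a time, so this only works if the main application does not have the volume mounted simultaneously.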

  • That command alone is not going to mount any persistent storage. What are you doing to actually have the job mount the persistent volume? Also be aware that OpenShift Online only supports ReadWriteOnce storage, which means it cannot be mounted against more than one instance of an application, or multiple applications, at the same time. So the only way a job could run with the same persistent storage is to shut down the main application first. Maybe you should explain what the actual problem is you need a solution for, not how to get a specific solution working. – Graham Dumpleton Jan 07 '18 at 06:11
  • ah right, that makes sense. I do not mount the persistent volume... In OpenShift 2 I used a cronjob to regularly start a "watchdog" script that checked whether my application and the Celery processes were running. I used to have the problem that Celery dies from time to time, and I never figured out the reason. As it is very important in this scenario to have Celery constantly running, I used this cronjob, which sent an email to the admin whenever it noticed an error. That's what I'm trying to port to OpenShift 3. – user3620060 Jan 07 '18 at 21:54
  • If you use a mod_wsgi-express service script that starts Celery via ``os.execl()``, not as a subprocess, then if the Celery main process does crash and go away, it will be restarted automatically. So you don't need a cron job. As already noted in https://stackoverflow.com/questions/48102638/how-to-run-celery-with-django-on-openshift-3/48123382, change the service script to have two separate ones and use ``os.execl()``. – Graham Dumpleton Jan 07 '18 at 23:16
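The ``os.execl()`` approach from the last comment can be sketched as a minimal service script. Everything here is an assumption for illustration (the celery binary path and project name in the usage comment are hypothetical); the point is only the exec-instead-of-subprocess pattern:

```python
import os

def exec_worker(path, *args):
    """Start the worker by replacing the current process image.

    os.execl() never returns: the worker takes over this process, so if
    the worker crashes, the service script itself is gone and the
    supervisor (mod_wsgi-express here) restarts it -- instead of the
    crash being hidden inside a dead child subprocess.
    """
    os.execl(path, os.path.basename(path), *args)

# Hypothetical usage -- path and project name are assumptions:
#   exec_worker("/opt/app-root/bin/celery", "worker", "-A", "myproject")
```

With a plain subprocess, the parent script would keep running after the child dies and nothing would notice the failure; ``exec`` makes the worker's lifetime and the service's lifetime identical.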

0 Answers