
I have a CronJob that syncs an AWS S3 bucket to a PV. The CronJob's logs show that the data has been synced, but when I exec into the application pod, the changes are not showing up. Why isn't the synced data visible there?
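This is roughly how I check from the application pod (the deployment name my-app is a placeholder, and I'm assuming the volume is mounted at the same path there):

kubectl exec deploy/my-app -- ls -al /mnt/data

Here is the CronJob manifest: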

apiVersion: batch/v1
kind: CronJob
metadata:
  name: s3-sync-cronjob
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: awscli-s3-sync
            image: amazon/aws-cli
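            # run a shell that syncs the bucket into the PVC mount, then prints debug info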
            command: ["/bin/sh"]
            args: ["-c", "aws s3 sync s3://${S3_BUCKET_NAME} /mnt/data --region ${S3_REGION} --delete && ls -al && pwd && id"]
            workingDir: /mnt/data
            env:
            - name: S3_BUCKET_NAME
              valueFrom:
                secretKeyRef:
                  name: s3-secret
                  key: bucket-name
            - name: S3_REGION
              valueFrom:
                configMapKeyRef:
                  name: s3-config
                  key: region
            volumeMounts:
            - name: data
              mountPath: /mnt/data
          restartPolicy: OnFailure
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: my-pvc
          securityContext:
            runAsUser: 0 # Or the UID of the user that should own the files on the mounted volume.
            runAsGroup: 0 # Or the GID of the group that should own the files on the mounted volume.
            fsGroup: 0 # Or the GID of the group that should own the files on the mounted volume.
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
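
For completeness, the Secret and ConfigMap referenced above look roughly like this (bucket name and region are placeholder values):

apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
type: Opaque
stringData:
  bucket-name: my-bucket # placeholder value
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: s3-config
data:
  region: us-east-1 # placeholder value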

The same PVC is mounted in the pod of my deployment.
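
For reference, it is mounted roughly like this (a minimal sketch; the deployment name and image are placeholders, while claimName and mountPath match the CronJob above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx # placeholder image
        volumeMounts:
        - name: data
          mountPath: /mnt/data # same path the CronJob syncs into
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc # same claim as the CronJob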
