
I have a cron job that keeps running even though I have no deployments or jobs. I am running minikube:

$ kubectl get deployments
No resources found in default namespace.

$ kubectl delete pods --all && kubectl delete jobs --all && kubectl get deployments
pod "hello-27125612-lmcb5" deleted
pod "hello-27125613-w5ln9" deleted
pod "hello-27125614-fz84r" deleted
pod "hello-27125615-htf4z" deleted
pod "hello-27125616-k5czn" deleted
pod "hello-27125617-v79hx" deleted
pod "hello-27125618-bxg52" deleted
pod "hello-27125619-d6wps" deleted
pod "hello-27125620-66b65" deleted
pod "hello-27125621-cj8m9" deleted
pod "hello-27125622-vx5kp" deleted
pod "hello-27125623-xj7nj" deleted
job.batch "hello-27125612" deleted
job.batch "hello-27125613" deleted
job.batch "hello-27125614" deleted
...

$ kubectl get jobs
No resources found in default namespace.
$ kubectl get deployments
No resources found in default namespace.
$ kubectl get pods
No resources found in default namespace.

Yet a few seconds later:

$ kubectl get jobs
NAME             COMPLETIONS   DURATION   AGE
hello-27125624   0/1           79s        79s
hello-27125625   0/1           19s        19s

Here is the YAML for one of the jobs:

$ kubectl get job hello-27125624 -oyaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2021-07-29T05:44:00Z"
  labels:
    controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
    job-name: hello-27125624
  name: hello-27125624
  namespace: default
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: CronJob
    name: hello
    uid: 32be2372-d827-4971-a659-129823de18e2
  resourceVersion: "551585"
  uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
spec:
  backoffLimit: 6
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
        job-name: hello-27125624
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - date; echo Hello from the Kubernetes cluster
        image: kahunacohen/hello-kube:latest
        imagePullPolicy: IfNotPresent
        name: hello
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  active: 1
  startTime: "2021-07-29T05:44:00Z"

I also checked for replication controllers:

$ kubectl get ReplicationController
No resources found in default namespace.

Here is the pod running the job:

$ kubectl get pod hello-27125624-kc9zw -oyaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T05:44:00Z"
  generateName: hello-27125624-
  labels:
    controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
    job-name: hello-27125624
  name: hello-27125624-kc9zw
  namespace: default
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: hello-27125624
    uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
  resourceVersion: "551868"
  uid: f0c10049-b3f9-4352-9201-774dbd91d7c3
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - date; echo Hello from the Kubernetes cluster
    image: kahunacohen/hello-kube:latest
    imagePullPolicy: IfNotPresent
    name: hello
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-7cw4q
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: minikube
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: OnFailure
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-7cw4q
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-07-29T05:44:00Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-07-29T05:44:00Z"
    message: 'containers with unready status: [hello]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-07-29T05:44:00Z"
    message: 'containers with unready status: [hello]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-07-29T05:44:00Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: kahunacohen/hello-kube:latest
    imageID: ""
    lastState: {}
    name: hello
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        message: Back-off pulling image "kahunacohen/hello-kube:latest"
        reason: ImagePullBackOff
  hostIP: 192.168.49.2
  phase: Pending
  podIP: 172.17.0.2
  podIPs:
  - ip: 172.17.0.2
  qosClass: BestEffort
  startTime: "2021-07-29T05:44:00Z"

How do I track down what is spawning these jobs, and how do I stop it?

Aaron

  • It's possible that these pods/jobs are managed by [cronjob](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/). Try `kubectl get cronjobs` – rkosegi Jul 29 '21 at 06:09
  • Yes, my bad. It's CronJob, not Job. If you make this an answer I will accept it. – Aaron Jul 29 '21 at 06:14

2 Answers


These pods are managed by the CronJob controller.

Use `kubectl get cronjobs` to list them.
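
A minimal sketch of the cleanup, assuming the owning CronJob is named hello (that name appears in the ownerReferences of the Job YAML in the question):

# list CronJobs in the current namespace
$ kubectl get cronjobs
# delete the CronJob that keeps creating the hello-* jobs
$ kubectl delete cronjob hello

Deleting the CronJob also removes the Jobs and Pods it owns (they carry ownerReferences back to it), so no new hello-* jobs will be scheduled afterwards.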

rkosegi
  • Yep as mentioned in the comment, I was deleting generic jobs, not CronJobs. It was confusing because kubectl was telling me the jobs were deleted, but the cronjobs were still running. – Aaron Jul 29 '21 at 15:20

If a Kubernetes object is created by a controller, then its owner is listed in the per-object metadata. You already see this in your Pod output:

# kubectl get pod hello-27125624-kc9zw -oyaml
metadata:
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: hello-27125624
    uid: 26beb7de-1c60-4854-a70f-54b6d066c22c

This same metadata format is used by every Kubernetes object. If there is no ownerReferences: block, the object was usually created directly by a user (perhaps via a tool like Helm or Kustomize).
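
A quick way to read just the owner without dumping the whole YAML, sketched here against the pod name from the question, is to query the ownerReferences fields with jsonpath:

# print "<owner kind>/<owner name>" for the pod
$ kubectl get pod hello-27125624-kc9zw -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
# -> Job/hello-27125624 (matches the ownerReferences shown above)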

If you run `kubectl get job hello-27125624 -o yaml` the same way, you will likely see an ownerReferences: block with apiVersion: batch/v1, kind: CronJob, and a specific name:. That CronJob is probably user-managed, and it is the object to delete.
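
A hedged sketch of following the chain up and stopping it (the CronJob name hello is taken from the Job's ownerReferences shown in the question):

# print the Job's owner; per the YAML above this should show CronJob/hello
$ kubectl get job hello-27125624 -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
# delete the CronJob; the Jobs and Pods it owns are garbage-collected with it
$ kubectl delete cronjob hello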

David Maze