
I've added --runtime-config=batch/v2alpha1=true to the kube-apiserver config like so:

      ... other stuff
      command:
        - "/hyperkube"
        - "apiserver"
        - "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
        - "--address=0.0.0.0"
        - "--allow-privileged"
        - "--insecure-port=8080"
        - "--secure-port=443"
        - "--cloud-provider=azure"
        - "--cloud-config=/etc/kubernetes/azure.json"
        - "--service-cluster-ip-range=10.0.0.0/16"
        - "--etcd-servers=http://127.0.0.1:2379"
        - "--etcd-quorum-read=true"
        - "--advertise-address=10.240.255.15"
        - "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt"
        - "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key"
        - "--client-ca-file=/etc/kubernetes/certs/ca.crt"
        - "--service-account-key-file=/etc/kubernetes/certs/apiserver.key"
        - "--storage-backend=etcd2"
        - "--v=4"
        - "—-runtime-config=batch/v2alpha1=true"
        ... etc

but after restarting the master, kubectl api-versions still shows only batch/v1; there is no v2alpha1 to be seen:

$ kubectl api-versions
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1beta1
apps/v1beta1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
certificates.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1
settings.k8s.io/v1alpha1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

Here's my job definition:

kind: CronJob
apiVersion: batch/v2alpha1
metadata:
  name: mongo-backup
spec:
  schedule: "* */1 * * *"
  jobTemplate:
    spec:
... etc

And the error I get when I try to create the job:

$ kubectl create -f backup-job.yaml                 
error: error validating "backup-job.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"batch", Version:"v2alpha1", Kind:"CronJob"}; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl create -f backup-job.yaml --validate=false
error: unable to recognize "backup-job.yaml": no matches for batch/, Kind=CronJob

What else do I need to do?

P.S. This is on Azure ACS; I don't think it makes a difference, though.

Jordan Baker
  • Did you restart the VM or just the service? Did the file still include your addition after the restart? – itaysk Nov 19 '17 at 09:53
  • @itaysk yes, I restarted the entire VM. I checked the logs and I can see that it was started with the correct parameters. – Jordan Baker Nov 19 '17 at 17:34

3 Answers


You can use the newer API version here, apiVersion: batch/v1beta1; that should fix the issue. CronJob moved from batch/v2alpha1 to batch/v1beta1 in Kubernetes 1.8, where the beta version is served by default, so no --runtime-config flag is needed.
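
For example, a minimal sketch of the question's manifest switched to the beta API (the jobTemplate body below is assumed, since the original was elided):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mongo-backup
spec:
  schedule: "0 */1 * * *"   # top of every hour
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: mongo-backup
            image: mongo:3.4   # assumed image; the original job body was elided
            command: ["sh", "-c", "mongodump --host mongo --gzip --archive=/backup/dump.gz"]
          restartPolicy: OnFailure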

Bhargav Amin

The Kubernetes v1.21 release notes state that:

  • The batch/v2alpha1 CronJob type definitions and clients are deprecated and removed. (#96987, @soltysh) [SIG API Machinery, Apps, CLI and Testing]

In v1.21 and later you should use apiVersion: batch/v1; see the example below:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Notice that:

CronJob was promoted to general availability in Kubernetes v1.21. If you are using an older version of Kubernetes, please refer to the documentation for the version of Kubernetes that you are using, so that you see accurate information. Older Kubernetes versions do not support the batch/v1 CronJob API.
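
As a quick sanity check (a sketch; output varies by cluster), you can confirm which batch API versions your cluster actually serves before picking an apiVersion:

# Show client and server versions
kubectl version --short

# List the batch API versions the server advertises
kubectl api-versions | grep batch

# Show the group/version kubectl resolves CronJob to
kubectl explain cronjob | head -n 2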

Wytrzymały Wiktor

There is an open issue for this on the Kubernetes GitHub: https://github.com/kubernetes/kubernetes/issues/51939

I believe there is no other option but to wait; I'm actually stuck on the same issue right now.