I am creating a new Operator with Kubebuilder to deploy a Kubernetes controller that manages a new CRD (Custom Resource Definition).
This new CRD (let's say it is called MyNewResource) needs to list/create/delete CronJobs.
So in the controller Go code, where the Reconcile(...) method is defined, I added a new RBAC marker comment to allow the reconciliation to work on CronJobs:
//+kubebuilder:rbac:groups=batch,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
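For context, the marker sits directly above Reconcile(...) in the scaffolded controller file, roughly like this (trimmed down; MyNewResourceReconciler and the first marker are just what Kubebuilder generated for my project):

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// MyNewResourceReconciler reconciles a MyNewResource object
// (Scheme and the other scaffolded fields are omitted here).
type MyNewResourceReconciler struct {
	client.Client
}

//+kubebuilder:rbac:groups=mygroup.mydomain.com,resources=mynewresources,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete

// Reconcile is where I list/create/delete CronJobs for each MyNewResource instance.
func (r *MyNewResourceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... reconciliation logic that works on CronJobs ...
	return ctrl.Result{}, nil
}
```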
However, after building, pushing and deploying the controller (the repo is called myrepo; I ran make manifests, then make install, then make docker-build docker-push, then make deploy), I still see this in the logs:
E0111 09:35:18.785523 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.1/tools/cache/reflector.go:167: Failed to watch *v1beta1.CronJob: failed to list *v1beta1.CronJob: cronjobs.batch is forbidden: User "system:serviceaccount:myrepo-system:myrepo-controller-manager" cannot list resource "cronjobs" in API group "batch" at the cluster scope
I also see errors about the cache, but they might be unrelated (not sure):
2022-01-11T09:35:57.857Z ERROR controller.mynewresource Could not wait for Cache to sync {"reconciler group": "mygroup.mydomain.com", "reconciler kind": "MyNewResource", "error": "failed to wait for mynewresource caches to sync: timed out waiting for cache to be synced"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.0/pkg/internal/controller/controller.go:234
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.0/pkg/manager/internal.go:696
2022-01-11T09:35:57.858Z ERROR error received after stop sequence was engaged {"error": "leader election lost"}
2022-01-11T09:35:57.858Z ERROR setup problem running manager {"error": "failed to wait for mynewresource caches to sync: timed out waiting for cache to be synced"}
How can I allow my new Operator to deal with CronJob resources?
At the moment I am basically unable to create new CronJobs programmatically (from the Go code) when I provide YAML for a new instance of my CRD by invoking:
kubectl create -f mynewresource-project/config/samples/
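For reference, the CronJob creation in my controller looks roughly like this (a simplified sketch, in the same controllers package as the snippet above; the createCronJob helper name, the schedule and the container image are placeholders):

```go
package controllers

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
)

// createCronJob is (roughly) what Reconcile(...) calls for each MyNewResource.
func (r *MyNewResourceReconciler) createCronJob(ctx context.Context, req ctrl.Request) error {
	cronJob := &batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{
			Name:      req.Name + "-cronjob", // placeholder name
			Namespace: req.Namespace,
		},
		Spec: batchv1beta1.CronJobSpec{
			Schedule: "*/5 * * * *", // placeholder schedule
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{
								{Name: "job", Image: "busybox"}, // placeholder container
							},
						},
					},
				},
			},
		},
	}
	// This client Create call is where the "forbidden" error shown above is hit.
	return r.Create(ctx, cronJob)
}
```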