I am trying to create and run a pod using the Airflow KubernetesPodOperator. The command below has been tried and confirmed to work, and I am trying to replicate it locally with the operator:
kubectl run sparkairflow -n test-namespace \
  --image=some-docker-repo.com:hello-world \
  --serviceaccount=airflow \
  --restart=Never \
  -- spark-submit \
  --deploy-mode cluster \
  --master k8s://kubernetes.default.cluster.local:123 \
  --name sparkairflow \
  --conf spark.kubernetes.namespace=test-namespace \
  --conf spark.kubernetes.container.image=some-docker-repo.com:hello-world \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=airflow \
  ...
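For reference, this is roughly what I am doing on the Airflow side. It is a sketch, not my exact DAG: the helper function, task id, and the trailing `...` are placeholders, and the image, namespace, and master URL are copied from the working kubectl command above.

```python
# Sketch (assumed names): build the spark-submit argument list that mirrors
# the kubectl command above, to be passed to KubernetesPodOperator.
def spark_submit_args(name, namespace, image, service_account, master):
    """Return the spark-submit arguments equivalent to the kubectl command."""
    return [
        "spark-submit",
        "--deploy-mode", "cluster",
        "--master", master,
        "--name", name,
        "--conf", f"spark.kubernetes.namespace={namespace}",
        "--conf", f"spark.kubernetes.container.image={image}",
        "--conf",
        f"spark.kubernetes.authenticate.driver.serviceAccountName={service_account}",
    ]

args = spark_submit_args(
    name="sparkairflow",
    namespace="test-namespace",
    image="some-docker-repo.com:hello-world",
    service_account="airflow",
    master="k8s://kubernetes.default.cluster.local:123",
)

# Wired into the operator roughly like this (untested sketch, placeholder ids):
# KubernetesPodOperator(
#     task_id="sparkairflow",
#     namespace="test-namespace",
#     image="some-docker-repo.com:hello-world",
#     cmds=[args[0]],
#     arguments=args[1:],
# )
```

Note that the `--serviceaccount` flag from the kubectl command (which sets the service account of the pod that runs spark-submit) has no counterpart in this sketch; the `spark.kubernetes.authenticate.driver.serviceAccountName` conf only covers the Spark driver pod.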
I am running into a wall here because there does not seem to be a way to pass the --serviceaccount flag through the Airflow operator, and it is required for my implementation; without it I get the following error:
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: pods "sparkairflow-155252344-driver" is forbidden: User "system:serviceaccount:test-namespace:default" cannot watch resource "pods" in API group "" in the namespace "test-namespace": access denied
The solutions I have found so far mostly focus on binding the default service account to the required role in the namespace, but that is not possible in my case.
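For context, the workarounds I found look roughly like the following, i.e. granting the pod permissions the error complains about to the default service account (role and binding names here are made up for illustration); this is exactly what I cannot do:

```shell
# Sketch of the workaround I cannot use: give the default service account
# in test-namespace the pod permissions from the error message above.
kubectl create role pod-watcher \
  -n test-namespace \
  --verb=get,list,watch \
  --resource=pods

kubectl create rolebinding default-pod-watcher \
  -n test-namespace \
  --role=pod-watcher \
  --serviceaccount=test-namespace:default
```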
Is there any way to pass the service account to the Airflow kubernetes pod operator?
Thanks!