I am running Airflow on AWS via MWAA, with the worker workloads running on Kubernetes. The pods get scheduled just fine, but when I try to use pod_template_file with KubernetesPodOperator, I get inconsistent behavior.
My template file, stored in S3:
apiVersion: v1
kind: Pod
metadata:
  name: app1
  namespace: app1
spec:
  containers:
    - name: base
      image: "alpine:latest"
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo hi >> /data/app.log; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: claim1
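As a sanity check, the template itself parses cleanly and the image field is present on the base container (a minimal sketch; the path is where I reference the file from the DAG below):

import yaml

# Sanity check: the template parses and the "base" container has an image.
# Path assumed to be where the file lands on the worker (see the DAG below).
with open("/opt/airflow/pod_template_file_example-1.yaml") as f:
    pod = yaml.safe_load(f)

base = pod["spec"]["containers"][0]
print(base["name"], base["image"])  # expected: base alpine:latest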
My DAG file:
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

# job_name_1, kube_config_path, and dag are defined elsewhere in the DAG file
t_1 = KubernetesPodOperator(
    task_id=job_name_1,
    namespace="app",
    name=job_name_1,
    get_logs=True,
    is_delete_operator_pod=False,
    in_cluster=False,
    config_file=kube_config_path,
    startup_timeout_seconds=240,
    cluster_context='test',
    pod_template_file="/opt/airflow/pod_template_file_example-1.yaml",
    dag=dag,
)
When I run this, I get an error that the pod spec is invalid because it is missing the image field. This is surprising, since image is clearly present in the pod template.
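For what it's worth, I suspect I could silence that error by also passing image on the operator (a sketch of the workaround I'd rather avoid, since the template already specifies the image):

# Hypothetical workaround: duplicate the image on the operator so the
# validation passes, even though the template should already provide it.
t_1 = KubernetesPodOperator(
    task_id=job_name_1,
    namespace="app",
    name=job_name_1,
    image="alpine:latest",  # duplicated from the pod template
    pod_template_file="/opt/airflow/pod_template_file_example-1.yaml",
    config_file=kube_config_path,
    in_cluster=False,
    cluster_context='test',
    dag=dag,
)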
I also tried the spec below. It runs, but it ignores the pod template file entirely: it just spins up an alpine container and exits. So it looks like the pod_template_file parameter is being ignored altogether.
from kubernetes.client import models as k8s

full_pod_spec = k8s.V1Pod(
    metadata=metadata_2,  # metadata_2 is defined elsewhere
    spec=k8s.V1PodSpec(
        containers=[
            k8s.V1Container(
                name="base",
                image="alpine:latest",
            )
        ],
    ),
)
t_1 = KubernetesPodOperator(
    task_id=job_name_1,
    namespace="mlops",
    name=job_name_1,
    get_logs=True,
    is_delete_operator_pod=False,
    in_cluster=False,
    config_file=kube_config_path,
    startup_timeout_seconds=240,
    cluster_context='aws',
    full_pod_spec=full_pod_spec,
    pod_template_file="/opt/airflow/pod_template_file_example-1.yaml",
    dag=dag,
)
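My understanding from the Airflow docs is that operator arguments take precedence over full_pod_spec, which in turn takes precedence over pod_template_file, which would explain why the alpine container from full_pod_spec wins here. Based on that, this is the minimal form I would expect to work from the template alone:

# Minimal sketch of what I expect to work: let the template drive the pod
# spec, with only the connection settings supplied on the operator.
t_1 = KubernetesPodOperator(
    task_id=job_name_1,
    name=job_name_1,
    in_cluster=False,
    config_file=kube_config_path,
    cluster_context='test',
    pod_template_file="/opt/airflow/pod_template_file_example-1.yaml",
    dag=dag,
)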
What is the correct way to reference a pod_template_file in KubernetesPodOperator in Airflow?