We are using Cloud Composer in GCP (managed Airflow on a Kubernetes cluster) for scheduling our ETL pipelines.
Our DAGs (200-300 of them) are dynamic, meaning all of them are generated by a single generator DAG. In Airflow 1.x this was an antipattern due to scheduler limitations, but the Airflow 2.x scheduler handles this scenario much better. See point 3 here.
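To illustrate what we mean by dynamic, here is a minimal sketch of the pattern (the DAG ids, schedules, and config source are placeholders, not our actual generator code):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

# Illustrative stand-in for the real configuration source (DB table, GCS file, etc.).
dag_configs = [{"dag_id": f"etl_pipeline_{i}", "schedule": "@hourly"} for i in range(250)]

for cfg in dag_configs:
    with DAG(
        dag_id=cfg["dag_id"],
        start_date=datetime(2022, 1, 1),
        schedule_interval=cfg["schedule"],
        catchup=False,
    ) as dag:
        # Placeholder tasks; the real pipelines are of course more involved.
        DummyOperator(task_id="extract") >> DummyOperator(task_id="load")

    # Register each DAG object at module level so the scheduler discovers it.
    globals()[cfg["dag_id"]] = dag
```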
We have a pretty powerful environment (see the technical details below); however, we are experiencing high latency between task state changes, which is a bad sign for the scheduler. Additionally, lots of tasks are waiting in the queue, which is a bad sign for the workers. These performance problems appear when 50-60 DAGs are triggered and running, which is not that high a concurrency in my opinion.
We are using Cloud Composer, which has an autoscaling feature according to the documentation. As I mentioned, tasks wait in the queue for a long time, so we would expect worker resources to be insufficient and a scale-up event to take place. However, that is not the case: no scaling events happen during the load.
Composer-specific details:
- Composer version: composer-2.0.8
- Airflow version: airflow-2.2.3
- Scheduler resources: 4 vCPUs, 15 GB memory, 10 GB storage
- Number of schedulers: 3
- Worker resources: 4 vCPUs, 15 GB memory, 10 GB storage
- Number of workers: Auto-scaling between 3 and 12 workers
Airflow-specific details (config overrides):
- scheduler/min_file_process_interval: 300
- scheduler/parsing_processes: 24
- scheduler/dag_dir_list_interval: 300
- core/dagbag_import_timeout: 3000
- core/min_serialized_dag_update_interval: 30
- core/parallelism: 120
- core/enable_xcom_pickling: false
- core/dag_run_conf_overrides_params: true
- core/executor: CeleryExecutor
We do not explicitly set a value for worker_concurrency because it is calculated automatically, according to this documentation. Furthermore, we have one pool with 100,000 slots; however, we have noticed that most of the time the number of running slots is only 8-10 while the number of queued slots is 65-85.
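Since worker_concurrency is auto-calculated, we want to verify the value the workers actually end up with, because a low value would cap running tasks regardless of pool size. A throwaway diagnostic DAG along these lines (DAG/task ids are arbitrary) can log the effective settings:

```python
from datetime import datetime

from airflow import DAG
from airflow.configuration import conf
from airflow.operators.python import PythonOperator


def log_effective_settings():
    # Log the settings the worker process actually resolves at runtime.
    for section, key in [
        ("celery", "worker_concurrency"),
        ("core", "parallelism"),
        ("core", "max_active_tasks_per_dag"),
    ]:
        print(f"{section}.{key} = {conf.get(section, key)}")


with DAG(
    dag_id="composer_settings_probe",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,  # trigger manually
    catchup=False,
) as dag:
    PythonOperator(task_id="log_settings", python_callable=log_effective_settings)
```

If the effective worker_concurrency turns out to be low, that alone could explain why only 8-10 slots are running while 65-85 are queued.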
We are constantly monitoring our environment, but we have not been able to find anything so far. We do not see any bottleneck related to worker/scheduler/database/webserver resources (CPU, memory, I/O, network).
What could be the bottleneck? Any tips and tricks are more than welcome. Thank you!