Airflow 1.8.1

The scheduler, worker, and webserver are running in separate Docker containers on AWS.

The system was operational, but now, for some reason, all tasks are staying in the queued state...

No errors in scheduler logs.

In the worker logs I see this error (not sure if it's related, since the scheduler is what should move tasks out of the queued state):

[2018-01-23 20:46:00,428] {base_task_runner.py:95} INFO - Subtask: [2018-01-23 20:46:00,428] {models.py:1122} INFO - Dependencies not met for , dependency 'Task Instance State' FAILED: Task is in the 'success' state which is not a valid state for execution. The task must be cleared in order to be run.
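Per the last line of that error, a task instance stuck in the success state has to be cleared before it can run again. A minimal sketch of clearing just the affected task, assuming placeholder ids my_dag and my_task (on the Airflow 1.8 CLI, -t takes a task regex):

    # my_dag, my_task, and the dates are placeholders; --no_confirm skips the prompt.
    airflow clear my_dag -t my_task -s 2018-01-23 -e 2018-01-24 --no_confirm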

I tried reboots, and the airflow clear and then airflow resetdb commands, but it did not help.
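For reference, a rough sketch of those recovery steps on the Airflow 1.8 CLI (my_dag is a placeholder DAG id):

    airflow clear my_dag --no_confirm   # reset all task instance states for the DAG
    airflow resetdb --yes               # drop and recreate the metadata DB
    # ...then restart the scheduler, worker, and webserver containers.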

Any idea what else can be done to fix this problem?

Thanks

Gregory Danenberg
  • Something I would like help with as well. I'm unclear on the pattern for triggering and observing a DAG run after it has been defined/scripted. – kuanb Mar 29 '18 at 23:53
  • What is the message broker you are using in your Airflow server? Also, what executor are you using? I faced a similar issue and found that my message broker, RabbitMQ, was not running or had failed. When I restarted the broker, the tasks were executed as expected. A task goes into the QUEUED state when the scheduler has sent it to the message queue. If you kill the scheduler process and restart it, the scheduler will try to send the task to the message queue again. Since you have already done this, try restarting the broker (see the sketch below). – Sai Neelakantam Oct 29 '19 at 08:17
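Building on that comment, and assuming the CeleryExecutor is in use, a quick sanity check is whether the workers can reach the broker at all. The module path below is from Airflow 1.8, and the broker-level commands depend on which broker is configured:

    # Ping Celery workers through the configured broker (Airflow 1.8 module path).
    celery -A airflow.executors.celery_executor inspect ping

    # Broker-level health checks, depending on the broker in use:
    rabbitmqctl status   # RabbitMQ
    redis-cli ping       # Redis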

0 Answers