
I ran into an issue where a task in my DAG never gets picked up by the workers for some reason. When I look at the task details:

All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless:

- The scheduler is down or under heavy load

If this task instance does not start soon please contact your Airflow administrator for assistance.

I checked the scheduler: there were no errors in its log, and I also restarted it a few times.

I also checked the Airflow webserver log, and only noticed this:

22/11/2018 12:10:39 [2018-11-22 01:10:39,747] {{cli.py:644}} DEBUG - [5 / 5] killing 1 workers
22/11/2018 12:10:39 [2018-11-22 01:10:39 +0000] [43] [INFO] Handling signal: ttou
22/11/2018 12:10:39 [2018-11-22 01:10:39 +0000] [348] [INFO] Worker exiting (pid: 348)

Not sure what happened; it worked fine before.

Airflow version is 1.9.0 and I never changed it; I was only playing around with some of the config options: min_file_process_interval and dag_dir_list_interval (but I set them back to the defaults when I encountered this issue).


Kevin Li

1 Answer


I noticed that this started happening after I played around with some of the Airflow config and rebuilt our Docker Airflow image. I reverted the image back to the original version, which used to work, and the problem was solved.

I also noticed an error that occurred (but was not always captured) in my Celery workers when using the newly built image:

Unrecoverable error: AttributeError("'float' object has no attribute 'items'",)

It turns out this is related to the latest redis-py release (Celery uses the redis client when Redis is the broker); you can find more details in the Celery and redis-py issue trackers.
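A minimal sketch of the workaround that was commonly used for this at the time, assuming the image installs its Python dependencies with pip from a requirements file (file name and build setup are assumptions, not from the original post): pin the redis client below 3.0 so a rebuild does not silently pull in the incompatible release.

```
# requirements.txt (assumed pip-based Airflow image)
# redis-py 3.0.0 (released Nov 2018) changed APIs that Celery 4.x
# workers relied on, producing errors like
#   AttributeError("'float' object has no attribute 'items'")
# Pinning below 3.0 keeps the last compatible 2.x release.
redis<3.0
```

Rebuilding the image with this pin in place reproduces the behavior of the older, working image without reverting everything else.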

Kevin Li