
I get a "SIGTERM" error when I trigger the DAG below. I have triggered it manually more than 20 times, and every time the SIGTERM error comes up at a different point in time. Any suggestion on what to change to make it work?

The error:

{local_task_job.py:211} WARNING - State of this instance has been externally set to queued. Terminating instance.
{taskinstance.py:1411} ERROR - Received SIGTERM. Terminating subprocesses.
{taskinstance.py:1703} ERROR - Task failed with exception

The DAG:

from airflow import DAG

# default_args is defined earlier in the file (not shown in the post)
dag = DAG(
    dag_id="im_master",
    default_args=default_args,
    schedule_interval="0 1 * * *",
    tags=['DS'],
    catchup=False,
)
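The DAG's tasks are omitted from the post. For context, a hypothetical minimal task attached to this DAG (the kind of trivial placeholder described in the comments below) might look like:

from airflow.operators.python import PythonOperator

def say_hello():
    # Deliberately trivial body; per the follow-up comment, even a task
    # like this reproduces the SIGTERM error.
    print("hello")

hello = PythonOperator(
    task_id="say_hello",
    python_callable=say_hello,
    dag=dag,
)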
  • Probably you are out of memory to run the task. Make sure the machine the job is executing on has enough resources – Elad Kalif Jan 03 '22 at 17:17
  • This has nothing to do with airflow.cfg; it has to do with the machine that you deployed Airflow on. If you use Celery then you should increase the workers' memory. – Elad Kalif Jan 03 '22 at 17:24
  • It has nothing to do with airflow.cfg - you just need more memory in your system - allocate more to the Docker Engine or buy more memory (or use a bigger machine). – Jarek Potiuk Jan 03 '22 at 17:24

1 Answer


This issue happens when the machine running the task is out of memory. It is not related to Airflow configuration but to the resources of your machine.

How to increase the memory depends on how you deployed your Airflow application. For example, if you are using the CeleryExecutor, then you should increase the memory of your workers.
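If you want to verify that memory pressure is actually the cause before resizing anything, one option is to log memory usage from inside a task so it reports on whichever worker runs it. This is a hypothetical diagnostic sketch, not part of the original answer, and it assumes the psutil package is installed on the workers:

import psutil

def log_memory_usage():
    # Snapshot system-wide memory on the worker that picks up this task.
    mem = psutil.virtual_memory()
    print(
        f"total={mem.total / 1e9:.1f} GB, "
        f"available={mem.available / 1e9:.1f} GB, "
        f"used={mem.percent}%"
    )

Running this via a PythonOperator (as in the earlier sketch) just before the failing task shows how much memory the worker has left at that point.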

Elad Kalif
  • I tried many things, yet I'm still puzzled. I replaced the long task I had with a "print('hello')" and I get the same SIGTERM error. Also, I have 48 GB of RAM. Could it be something other than running out of memory? – MM Roller Jan 07 '22 at 13:20