I ran into a problem in Airflow when executing a VACUUM command against a Greenplum database in situations where the Airflow user does not own the table.
If the vacuum is executed inside a PythonOperator via cursor.execute('vacuum table'), everything appears to go OK, with no error messages at all.
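For reference, a minimal sketch of that PythonOperator approach (the connection id "greenplum_default" and the DAG object are placeholders; I get a psycopg2 connection through PostgresHook):

from airflow.hooks.postgres_hook import PostgresHook
from airflow.operators.python_operator import PythonOperator

def run_vacuum():
    hook = PostgresHook(postgres_conn_id="greenplum_default")  # placeholder conn id
    conn = hook.get_conn()
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    cursor = conn.cursor()
    cursor.execute("vacuum analyze sales.rid_status_log;")
    # The task finishes cleanly even when Greenplum only emits the
    # "only table or database owner can vacuum it" WARNING.

vacuum_python = PythonOperator(
    task_id="vacuum_via_python",
    python_callable=run_vacuum,
    dag=dag,  # assumes an existing DAG object
)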
If the vacuum is performed by a PostgresOperator, there is a warning, but the task is still marked as SUCCESS:
[2020-06-22 08:00:59,730] {logging_mixin.py:112} INFO - [2020-06-22 08:00:59,730] {dbapi_hook.py:174} INFO - vacuum analyze sales.rid_status_log;
[2020-06-22 08:00:59,946] {postgres_operator.py:67} INFO - WARNING: skipping "rid_status_log" --- only table or database owner can vacuum it
[2020-06-22 08:00:59,952] {taskinstance.py:1048} INFO - Marking task as SUCCESS.dag_id=v1.gp_LOAD_INC_pg_nats, task_id=vacuum_analyze_rid_status_log, execution_date=20200622T041530, start_date=20200622T050059, end_date=20200622T050059
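The task that produced this log looks roughly like the following (conn id again a placeholder):

from airflow.operators.postgres_operator import PostgresOperator

vacuum_analyze_rid_status_log = PostgresOperator(
    task_id="vacuum_analyze_rid_status_log",
    postgres_conn_id="greenplum_default",  # placeholder conn id
    sql="vacuum analyze sales.rid_status_log;",
    autocommit=True,  # VACUUM must run outside a transaction
    dag=dag,
)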
Is it possible, via a config setting or DAG/task parameters, to make the task get FAILED status in such cases?
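I realize I could wrap the statement in a PythonOperator and check psycopg2's conn.notices myself, roughly like the untested sketch below (psycopg2 appends server messages, including WARNINGs, to connection.notices), but I would prefer a configuration-level solution if one exists:

from airflow.hooks.postgres_hook import PostgresHook

def vacuum_or_fail():
    hook = PostgresHook(postgres_conn_id="greenplum_default")  # placeholder conn id
    conn = hook.get_conn()
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    conn.cursor().execute("vacuum analyze sales.rid_status_log;")
    # Raise so that Airflow marks the task as FAILED when the server
    # only issued a WARNING instead of an error.
    warnings = [n for n in conn.notices if n.startswith("WARNING")]
    if warnings:
        raise RuntimeError("VACUUM emitted warnings: %s" % warnings)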