
I am using the python-rq Retry() functionality together with the on_failure callback. The problem is that the on_failure function runs after every failure of the job, so it does not allow handling the last retry differently from the previous retries.

In my case I would like to flag my job as failed only if it fails all of its retries. Is that possible? I am pretty stuck here, so I have not tried anything else.

So far I have tried to use FailedJobRegistry() to count the job's failures, but it does not seem to support that kind of bookkeeping.
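
Roughly, the kind of check I was attempting looked like the sketch below (the registry lookup and queue wiring are illustrative, not my exact code):

from rq import Queue
from rq.registry import FailedJobRegistry

def report_failure(job, connection, type, value, traceback):
    # Illustrative attempt: look the job up in the failed-job registry.
    # A job id appears in this registry at most once, and only after it has
    # finally failed, so it cannot be used to count earlier retry attempts.
    registry = FailedJobRegistry(queue=Queue(job.origin, connection=connection))
    failed_before = job.id in registry.get_job_ids()
    # ...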

Charalamm

1 Answer


I managed to solve this by checking, inside the on_failure function, whether the job has 0 or None retries left before running the actual failure-handling logic.

So my job was enqueued as:

from rq import Retry

job = queue.enqueue(
    _function,
    job_timeout=timeout,
    retry=Retry(max=5),
    on_failure=report_failure,
)

and the report_failure() as:

def report_failure(job, connection, type, value, traceback):
    """
    Flag a job as failed only if it has 0 or None retries left
    """
    # While retries remain, rq will retry the job, so do nothing yet.
    if job.retries_left:
        return

    # ...
    # Implement the failed-on-last-retry logic here
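
For reference, here is a minimal end-to-end sketch tying the two snippets together (the Redis connection, the dummy _function body, and the timeout value are assumptions, not my actual setup):

from redis import Redis
from rq import Queue, Retry

def _function():
    # Placeholder work that always fails, to exercise the retry path.
    raise RuntimeError("simulated failure")

def report_failure(job, connection, type, value, traceback):
    # Called after every failed attempt; bail out while retries remain.
    if job.retries_left:
        return
    print(f"Job {job.id} failed after exhausting all retries")

queue = Queue(connection=Redis())
job = queue.enqueue(
    _function,
    job_timeout=180,
    retry=Retry(max=5),
    on_failure=report_failure,
)

Note that the enqueued function and the callback need to live in a module the worker can import, not in an interactive session.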
Charalamm