I asked about this on the python-rq GitHub project, and the functionality is included as of RQ version 1.5.0.
RQ now lets you easily retry failed jobs. To configure retries, use RQ's Retry object, which accepts max and interval arguments.
Dependent jobs are kept in the deferred job registry and are only executed once the job they depend on has succeeded.
For example:
from redis import Redis
from rq import Queue, Retry
from somewhere import randomly_failing_task, dependent_task
job_queue = Queue(connection=Redis())
randomly_failing_job = job_queue.enqueue(randomly_failing_task, retry=Retry(max=3))
dependent_job = job_queue.enqueue(dependent_task, depends_on=randomly_failing_job)
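If you also want to wait between attempts, Retry accepts an interval in seconds, either as a single value or as a list with one entry per retry. A minimal sketch building on the task above (the concrete values are just illustrative):

# retry up to 3 times, waiting 10s, 30s and 60s before the respective attempts
job_queue.enqueue(
    randomly_failing_task,
    retry=Retry(max=3, interval=[10, 30, 60]),
)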
And the sample tasks:
from random import choice

def randomly_failing_task():
    print('I am a task, I will fail 50% of the time :/')
    success = choice([True, False])
    if success:
        print('I succeeded :)')
    else:
        print('I failed :(')
        raise Exception('randomly_failing_task failed!')

def dependent_task():
    print('I depend upon the randomly_failing_task.')
    print('I am only executed once the randomly_failing_task succeeded.')
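The jobs are only processed once a worker is running against the same Redis instance, which you can start with RQ's command line interface:

$ rq worker

If I remember the docs correctly, delayed retries (the interval argument) additionally require the worker to be started with the --with-scheduler flag so the postponed attempts actually get picked up.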