RQ is a simple, lightweight Python library for creating background jobs and processing them.
RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry, so it can be integrated into your web stack easily.
I'm running python-rq tasks that can take anywhere from minutes to hours depending on the input data.
I can set a fixed timeout when scheduling the job:
low = Queue('low', default_timeout=600) # 10 mins
low.enqueue_call(really_really_slow,…
Is it possible for a job to yield the worker and put itself back to the end of the queue?
The jobs in a Redis queue are processed sequentially, and a long-running job might hog the CPU. Is there a pattern for it to decide it has consumed…
Since it was unclear earlier, here is the scenario:
class Scraper:
    def __init__(self, url):
        self.start_page = url

    def parse_html(self):
        pass

    def get_all_links(self):
        pass

    def run(self):
        # parse html, get…
Given:
from redis import Redis
from rq import Queue
yesterday = Queue('yesterday', connection=Redis())
today = Queue('today', connection=Redis())
I would like to programmatically delete the Queue named 'yesterday'.
I'm using django-rq, the Django bindings for python-rq, to try to generate a PDF asynchronously. The class TemplateProcesser initializes with two arguments and automatically generates the PDF in its __init__ method. This works fine synchronously,…
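A common way around this kind of problem is to move the work into a module-level job function so only plain, serializable arguments cross the queue. A sketch with a stand-in for the TemplateProcesser class from the question:

```python
class TemplateProcesser:
    # Stand-in for the class from the question: two arguments,
    # PDF generated in __init__.
    def __init__(self, template_name, context):
        self.pdf = f"PDF({template_name}, {context})".encode()

def generate_pdf(template_name, context):
    # Module-level job function: the worker imports and calls this,
    # so only the two plain arguments need to be serialized.
    return TemplateProcesser(template_name, context).pdf

# With django-rq this would be enqueued roughly as:
#   django_rq.enqueue(generate_pdf, 'invoice.html', {'id': 42})
```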
I decided I need to use an asynchronous queue system, and am setting up Redis/RQ/django-rq. I am wondering how I can start workers in my project.
django-rq provides a management command, which is great; it looks like:
python manage.py rqworker high…
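In production that command is usually kept alive by a process supervisor rather than run by hand. A hypothetical systemd unit (all paths and names are placeholders for your project):

```ini
# Hypothetical unit file, e.g. /etc/systemd/system/rqworker@.service
[Unit]
Description=django-rq worker %i
After=network.target redis.service

[Service]
WorkingDirectory=/srv/myproject
ExecStart=/srv/myproject/venv/bin/python manage.py rqworker high default low
Restart=always

[Install]
WantedBy=multi-user.target
```

Multiple instances (`systemctl start rqworker@1 rqworker@2`) give you several workers draining the same queues.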
I have a problem with redis-py connecting to Redis on Kubernetes. A few times a day the Redis pod restarts or moves to a new node, and my Python processes catch a ConnectionError:
I know that it should raise that exception - the server is down for…
I am using the python-rq Retry() functionality with the on_failure callback. The problem is that the on_failure function runs after every failure of the job, so it does not allow handling the last retry differently from the previous retries.
In my…
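One possible approach is to inspect the job's remaining retry budget inside the callback; RQ jobs carry a `retries_left` attribute that is decremented as retries are consumed. A sketch (the return values are just for illustration):

```python
def report_failure(job, connection, type, value, traceback):
    # Called by RQ after each failed run. When retries_left is
    # exhausted, this failure is the final one.
    if job.retries_left:
        return "will-retry"   # transient: RQ will re-enqueue the job
    return "final-failure"    # last attempt: alert, clean up, etc.

# Hypothetical enqueue wiring it up:
#   queue.enqueue(task, retry=Retry(max=3), on_failure=report_failure)
```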
I am thinking about how to guarantee completion of all tasks stored in Redis queues in case of a server shutdown, e.g.
My initial thought was to create an instance of a job description and save it to the database. Something like:
class…
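A minimal sketch of that idea, with an in-memory dict standing in for the database table (a real implementation would persist `pending` to durable storage and re-enqueue whatever is unfinished on startup):

```python
import uuid

# Stand-in for a database table of durable job records.
pending = {}

def submit(func_name, args):
    # Record the job durably *before* enqueueing it.
    job_id = str(uuid.uuid4())
    pending[job_id] = {"func": func_name, "args": args, "done": False}
    # queue.enqueue_call(func, args=args, job_id=job_id)  # real enqueue
    return job_id

def mark_done(job_id):
    # Called from the job itself (or a success callback) on completion.
    pending[job_id]["done"] = True

def unfinished():
    # On startup after a crash, re-enqueue everything still not done.
    return [j for j, rec in pending.items() if not rec["done"]]
```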
I have a question regarding django-rq. It is a pip-installed library that functions as a thin layer on top of python-rq, which runs against a Redis instance. Currently, I run all of the jobs on the default queue that uses database 0 on my local Redis…
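django-rq queues are declared in the `RQ_QUEUES` setting, and each entry can point at its own Redis database number. A hypothetical settings fragment (queue names and DB numbers are examples):

```python
# settings.py - hypothetical django-rq configuration giving each
# queue its own Redis database number.
RQ_QUEUES = {
    "default": {"HOST": "localhost", "PORT": 6379, "DB": 0},
    "reports": {"HOST": "localhost", "PORT": 6379, "DB": 1},
}
```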
I'm using python-rq to enqueue background tasks and then attempting to check their status in my web app.
First I grab all the workers attached to the queue:
workers = rq.Worker.all(queue=queue)
Before starting a task there is a single worker with…
I have a Python RQ job that downloads a resource from a webserver.
In case of a non-responding webserver, can the download-job reschedule itself and retry the download after a certain interval?
Several transformation-jobs depend on the download-job…
From the documentation of Redis Queue (https://python-rq.org/docs) I came to know that a job's result becomes available only after some time, and until then it is None.
Is there any way to find out that the worker execution is complete (not with…
I need to put a class method on an RQ queue, but it gives an error.
Here is worker.py:
import os
import redis
from rq import Worker, Queue, Connection
listen = ['high', 'default', 'low']
redis_url = os.getenv('REDISTOGO_URL',…
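The callable handed to `enqueue` must be importable (and picklable) by the worker, which is why bare class methods often fail. Two patterns that generally work, sketched with a hypothetical Scraper class and module-level wrapper:

```python
class Scraper:
    def __init__(self, url):
        self.start_page = url

    def run(self):
        return f"scraped {self.start_page}"

def run_scraper(url):
    # Module-level wrapper: the most robust thing to enqueue,
    # since the worker only needs to import this function.
    return Scraper(url).run()

# With a queue `q`, either of these is typically enqueued as:
#   q.enqueue(Scraper('http://example.com').run)   # bound method
#   q.enqueue(run_scraper, 'http://example.com')   # wrapper function
```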
I am fairly new to Python and RQ, and have come to a point I can't solve by myself.
I am using ffmpeg-python to encode livestreams; this is distributed across RQ workers and displayed on a web app using Flask, but since the livestreams can go on…