RQ is a simple, lightweight Python library for creating background jobs and processing them.
RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry, so it can be integrated into your web stack easily.
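A minimal sketch of that workflow (assuming a local Redis server, the rq package installed, and a hypothetical word_count task):

```python
def word_count(text):
    # Any importable, plain Python callable can be a job.
    return len(text.split())

if __name__ == "__main__":
    from redis import Redis
    from rq import Queue

    # Enqueueing requires a running Redis server on localhost:6379.
    q = Queue(connection=Redis())
    job = q.enqueue(word_count, "hello background world")
    print(job.id)  # a separate `rq worker` process picks this job up
```

Running `rq worker` in another shell then processes whatever lands on the queue.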
I have code that uses Python requests to kick off a task that runs in a worker started with rq. (Actually, the GET request results in one task which itself starts a second task, but this complexity shouldn't affect things, so I've left…
I am trying to run a simple long-running task using Redis Queue, but I get a timeout error every time, even though I increased the timeout value in job = q.enqueue(run_scraper, temp_file, job_timeout=16600); no matter what, it gives me time…
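For reference, a sketch of the two timeout knobs that are easy to conflate here (run_scraper and the file argument are placeholders taken from the question): job_timeout bounds how long the job may run once a worker picks it up, while the Redis client's own socket timeout can independently raise timeout errors on the connection.

```python
def run_scraper(path):
    # Placeholder for the long-running scrape from the question.
    return path

if __name__ == "__main__":
    from redis import Redis
    from rq import Queue

    # socket_timeout=None keeps the Redis client itself from timing out
    # independently of the job's run-time limit.
    conn = Redis(socket_timeout=None)
    q = Queue(connection=conn, default_timeout=16600)
    q.enqueue(run_scraper, "temp_file", job_timeout=16600)
```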
In my application I am using structlog as the logging system. My application also uses PythonRQ. How can I make PythonRQ use the logging system I am already using in my application, so that all my application logs follow the same pattern?
I am trying to test how to pass a python object to a rq worker process. I have the following classes in common.py
from typing import List

class Input:
    def __init__(self, arr_list: List):
        self.arr_list = arr_list

    def compute_sum(self):
        sum = 0
        …
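RQ serializes job arguments (by default with pickle), so an instance can be passed as long as its class lives in a module the worker can import. A sketch using the Input class above (with compute_sum completed for the sketch, and a hypothetical process_input task):

```python
from typing import List

class Input:
    def __init__(self, arr_list: List):
        self.arr_list = arr_list

    def compute_sum(self):
        return sum(self.arr_list)

def process_input(inp: Input) -> int:
    # Runs inside the worker; the Input instance is unpickled there.
    return inp.compute_sum()

if __name__ == "__main__":
    from redis import Redis
    from rq import Queue

    q = Queue(connection=Redis())
    # Works because common.py (this module) is importable by the worker.
    q.enqueue(process_input, Input([1, 2, 3]))
```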
Problem
I pass a logging (logger) object, which is supposed to add lines to test.log, to a function background_task() that is run by the rq utility (a task queue manager). logger has a FileHandler assigned to it to allow logging to test.log. Until…
I am trying to use the rq Retry functionality by following the rq documentation, but it does not work when using the interval argument.
python version: 3.8.0
rq version: 1.10.0
The somewhere.py:

def my_func():
    print('Start...')
    asdsa # Here…
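For comparison, the documented pattern looks roughly like this (a sketch; note that interval-based retries require the worker to run with its scheduler enabled, i.e. rq worker --with-scheduler, otherwise the intervals are not honored):

```python
def my_func():
    print('Start...')
    raise RuntimeError("simulated failure")  # forces RQ to retry

if __name__ == "__main__":
    from redis import Redis
    from rq import Queue, Retry

    q = Queue(connection=Redis())
    # Retry up to 3 times, waiting 10s, 30s, then 60s between attempts.
    q.enqueue(my_func, retry=Retry(max=3, interval=[10, 30, 60]))
```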
I'm trying to create a cog for my Discord bot that scrapes Indeed and returns info on job postings (position, company, location, etc). My bot is hosted on Heroku, which is where the issues start. I've tested my web scraper by itself and when…
Below is the function called for scheduling a job on server start.
But somehow the scheduled job is getting called again and again, causing too many calls to the respective function.
Either this is happening because of multiple function…
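A common cause is that every server (or gunicorn worker) start schedules the job again. A guard sketch, assuming rq-scheduler and a hypothetical schedule_once helper:

```python
from datetime import datetime, timezone

def schedule_once(scheduler, func, interval):
    # Skip scheduling if this function is already registered, so a
    # server restart doesn't stack up duplicate periodic jobs.
    target = f"{func.__module__}.{func.__name__}"
    for job in scheduler.get_jobs():
        if job.func_name == target:
            return job
    return scheduler.schedule(
        scheduled_time=datetime.now(timezone.utc),
        func=func,
        interval=interval,  # seconds between runs
        repeat=None,        # repeat indefinitely
    )

if __name__ == "__main__":
    from redis import Redis
    from rq_scheduler import Scheduler

    def refresh_data():
        print("periodic work")

    schedule_once(Scheduler(connection=Redis()), refresh_data, interval=300)
```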
First of all, questions about the Flask context, including context for RQ jobs, seem to be a common issue, but I searched a lot and still couldn't solve my problem.
My decorator functions (tried both of them in different variations):
def…
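For reference, a decorator sketch along those lines (assuming a module-level Flask app object; all names here are placeholders):

```python
from functools import wraps

from flask import Flask, current_app

app = Flask(__name__)

def with_app_context(fn):
    # Push an application context around the task so current_app,
    # extensions, etc. are usable inside the RQ worker.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        with app.app_context():
            return fn(*args, **kwargs)
    return wrapper

@with_app_context
def background_task(x):
    current_app.logger.info("running task")
    return x * 2
```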
I am unsure if I should use "celery" or "rq".
I am looking for a lightweight solution, and my gut feeling told me that importing celery would be much slower than importing rq.
But the opposite is true. At least on my device:
> time python -c 'import…
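The comparison can be reproduced by spawning a fresh interpreter per measurement, mirroring time python -c (a stdlib-only sketch):

```python
import subprocess
import sys
import time

def cold_import_seconds(module: str) -> float:
    # Spawn a new interpreter so the module isn't already cached in
    # sys.modules, mirroring `time python -c 'import <module>'`.
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
    return time.perf_counter() - start

for mod in ("rq", "celery"):
    try:
        print(mod, round(cold_import_seconds(mod), 3), "s")
    except subprocess.CalledProcessError:
        print(mod, "not installed")
```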
I'm having issues enqueuing jobs with Python-RQ: jobs seem to be enqueued correctly, but they don't run, crash, or do whatever they are supposed to do.
The process I'm following is:
Run the Redis server on localhost:
loren@RONDAN1:/mnt/c/Users/rondan$ sudo service…
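When enqueued jobs never run, a first check is whether any worker is actually listening on the same queue and Redis database. A diagnostic sketch (describe_worker is a hypothetical helper):

```python
def describe_worker(name, queue_names):
    # Pure formatting helper so the diagnostic output is easy to scan.
    return f"{name} listens on {', '.join(queue_names)}"

if __name__ == "__main__":
    from redis import Redis
    from rq import Queue, Worker

    conn = Redis()  # must match the URL/db the workers were started with
    q = Queue(connection=conn)
    print("jobs waiting:", q.count)

    # Workers only execute jobs from the queues they listen on; a job
    # enqueued on "default" is never picked up by a worker started as
    # `rq worker high low`.
    for worker in Worker.all(connection=conn):
        print(describe_worker(worker.name, [qq.name for qq in worker.queues]))
```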
I have a fairly basic (so far) queue set up in my app:
Job 1 (backup): back up the SQL table I'm about to replace
Job 2 (update): do the actual table drop/update
very simplified code:
from rq import Queue
from rq.decorators import…
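If Job 2 must not start before Job 1 finishes, RQ's depends_on argument covers exactly this backup-then-update ordering (a sketch with placeholder functions and table name):

```python
def backup_table(table):
    # Job 1: snapshot the table we are about to replace.
    return f"backed up {table}"

def update_table(table):
    # Job 2: drop/update; only safe after the backup succeeded.
    return f"updated {table}"

if __name__ == "__main__":
    from redis import Redis
    from rq import Queue

    q = Queue(connection=Redis())
    backup_job = q.enqueue(backup_table, "sales")
    # RQ defers this job until backup_job finishes successfully.
    q.enqueue(update_table, "sales", depends_on=backup_job)
```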
Working on a Flask app with the Flask-RQ2, redis, and rq libraries. The queue works, but with larger datasets I get the error:
Moving job to FailedJobRegistry Work-horse process was terminated unexpectedly (waitpid returned 9)
I searched for similar error,…
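"waitpid returned 9" usually means the work-horse child process was killed with SIGKILL, which on memory-constrained hosts is very often the Linux OOM killer reacting to the larger dataset. A sketch of one mitigation (processing in batches to cap peak memory) plus inspecting what landed in the failed-job registry:

```python
def batched(items, size):
    # Processing the dataset in batches keeps the work-horse's peak
    # memory low enough to avoid the OOM killer.
    for i in range(0, len(items), size):
        yield items[i:i + size]

if __name__ == "__main__":
    from redis import Redis
    from rq import Queue
    from rq.registry import FailedJobRegistry

    q = Queue(connection=Redis())
    registry = FailedJobRegistry(queue=q)
    for job_id in registry.get_job_ids():
        job = q.fetch_job(job_id)
        if job is not None:
            print(job_id, job.exc_info)
```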
I am hosting APIs through a Flask app, along with:
2 Gunicorn workers running on top
Flask tasks queued using 2 rq workers
First I start the 2 Gunicorn workers, and I get this message -
2020-10-29 07:54:21,992 : INFO : app : 140162149197632 :…
I used Python Flask + Redis and queued the jobs in the Redis queue using the code below:

with Connection(redis.from_url("redis://localhost:6379")):
    queue = Queue()
    task = queue.enqueue(self.redis_method, job_timeout=86400,…