I am hosting APIs through a Flask app, with -
- 2 Gunicorn workers serving the app
- Flask tasks queued to 2 RQ workers
First I start the 2 Gunicorn workers and get these messages -
2020-10-29 07:54:21,992 : INFO : app : 140162149197632 : 17641 : Started app.py!
2020-10-29 07:54:22,078 : INFO : app : 140162149197632 : 17643 : Started app.py!
Then I start the 2 RQ workers -
2020-10-29 07:55:21,880 : INFO : rq.worker : 140051755202368 : 17743 : *** Listening on default...
2020-10-29 07:55:21,885 : INFO : rq.worker : 140281306085184 : 17746 : *** Listening on default...
But when I POST a request to an API, my logs show -
2020-10-29 07:56:12,511 : INFO : app : 140162149197632 : 17643 : app.views : Added to queue. 0 tasks in the queue
2020-10-29 07:56:12,514 : INFO : rq.worker : 140051755202368 : 17743 : default: app.tasks.run_detections({'module': '_all_', 'folder_name': 'CL201029-d7794130-5375-44f6-8334-da9bc7...) (dc319268-c696-4a9c-95a3-1a7843f12ef8)
2020-10-29 07:56:12,719 : INFO : app : 140051755202368 : 17801 : Started app.py!
^ Here the app was initialized again.
Log format is - '%(asctime)s : %(levelname)s : %(name)s : %(thread)d : %(process)d : %(message)s'
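For context, that format comes from a logging setup roughly like this sketch of app/__init__.py - the handler and file names are placeholders rather than my exact code; the relevant point is that "Started app.py!" is logged at import time:

app/__init__.py (sketch)

import logging

from flask import Flask

# Placeholder log file name; the format string is the one described above
logger = logging.getLogger("app")
file_handler = logging.FileHandler('app.log')
file_handler.setFormatter(logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(thread)d : %(process)d : %(message)s'))
logger.addHandler(file_handler)
logger.setLevel(logging.INFO)

app = Flask(__name__)

from app import views  # noqa: E402 -- registers the routes on `app`

logger.info("Started app.py!")  # emitted once in every process that imports this package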
My question is: after the task was added to the queue, a new process (PID 17801) appeared under the same thread id (140051755202368) and Flask was initialized again.
Why is this happening? I want to preload the app in all the workers at startup, not load it again every time a request comes in.
What am I missing here?
My code structure (views.py and tasks.py are sketched below) -
app
1. __init__.py
2. views.py
3. tasks.py
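The relevant parts of views.py and tasks.py look roughly like this - a hypothetical sketch, not the exact code: the route, payload and Redis connection here are assumptions, only run_detections and the "default" queue come from the logs above:

app/tasks.py (sketch)

def run_detections(params):
    # long-running detection work; `params` is the dict visible in the worker log
    ...

app/views.py (sketch)

import logging

from flask import jsonify, request
from redis import Redis
from rq import Queue

from app import app
from app.tasks import run_detections

logger = logging.getLogger("app")
q = Queue('default', connection=Redis())  # same Redis the RQ workers listen on

@app.route('/detect', methods=['POST'])  # placeholder route
def detect():
    job = q.enqueue(run_detections, request.get_json())
    logger.info('app.views : Added to queue. %d tasks in the queue', len(q))
    return jsonify(job_id=job.get_id()), 202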
I am starting Gunicorn as -
gunicorn -b 0.0.0.0:5000 run:app --workers 2
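Here run:app points at a top-level run.py, which is roughly the following (a minimal sketch, since run.py is not shown above):

run.py (sketch)

from app import app  # the Flask instance created in app/__init__.py

if __name__ == '__main__':
    app.run()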
worker.py
import logging

from rq import Connection, Queue, Worker

# Same log format as the Flask app, written to a separate file
logger = logging.getLogger("rq.worker")
file_handler = logging.FileHandler('worker.log')
formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(thread)d : %(process)d : %(message)s')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

# Connection() with no arguments falls back to a default local Redis
with Connection():
    qs = Queue()
    w = Worker(qs)
    w.work()
supervisord.conf - for running the RQ workers
[supervisord]
[program:worker]
command=python worker.py
; %(process_num)s must appear in process_name when numprocs > 1
process_name=%(program_name)s_%(process_num)02d
numprocs=2
Then I run -
supervisord -n