I ran into a problem when using rq together with lightgbm.
My setup has the following structure:
- flask - for receiving requests and pushing them into redis
- redis - as a broker
- rq - for asynchronous request processing
The server receives the data and the rq worker runs it through the lightgbm model.
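To make the setup concrete, here is a minimal sketch of the wiring. The module path matches the one in the log below, but the model file, endpoint name, and helper names are placeholders rather than my exact code:

```python
# app/worker/services/machine_learning.py - sketch of the worker-side task
import joblib
import numpy as np

# assumption: an LGBMClassifier persisted with joblib, loaded once per worker process
model = joblib.load("model.joblib")

def process_data_with_ml(data):
    """Run the incoming feature rows through the lightgbm model."""
    features = np.array(data)              # e.g. [[0, 0, ..., 0]]
    proba = model.predict_proba(features)  # <- the worker freezes on this call
    return proba.tolist()
```

And the flask side that only enqueues the job:

```python
# sketch of the flask endpoint that hands the data off to rq
from flask import Flask, request, jsonify
from redis import Redis
from rq import Queue

from app.worker.services.machine_learning import process_data_with_ml

app = Flask(__name__)
queue = Queue(connection=Redis(host="redis"))

@app.route("/predict", methods=["POST"])
def predict():
    # push the payload into redis; the rq worker picks it up asynchronously
    job = queue.enqueue(process_data_with_ml, request.get_json()["data"])
    return jsonify({"job_id": job.get_id()})
```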
But there is a problem when calling predict_proba: the worker simply freezes and is killed by rq once the job timeout elapses.
If I run the same model directly inside flask, there are no problems.
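For comparison, the variant that works is essentially the same call made synchronously inside the flask process (again a sketch, with the same hypothetical module names):

```python
# synchronous variant: the same predict_proba call made directly in flask
from flask import Flask, request, jsonify

from app.worker.services.machine_learning import process_data_with_ml

app = Flask(__name__)

@app.route("/predict-sync", methods=["POST"])
def predict_sync():
    # no redis/rq involved - this path returns the probabilities without freezing
    return jsonify({"proba": process_data_with_ml(request.get_json()["data"])})
```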
Here is the application log:
DEBUG:app.worker.services.machine_learning:{"module": "ml", "data": [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], "event": "applying model with ...", "level": "debug", "timestamp": "2022-12-22T05:06:02.072328Z", "lineno": 72, "pathname": "/app/app/worker/services/machine_learning.py", "func_name": "process_data_with_ml"}
INFO:rq.worker:Killed horse pid 25
WARNING:rq.worker:Moving job to FailedJobRegistry (work-horse terminated unexpectedly; waitpid returned None)
What I have already tried:
- Running the model directly on flask
- Changing the model