I have multiple Celery workers running in separate Docker containers that subscribe to a single RabbitMQ broker. I want to track task execution in each worker, including the number of running tasks in each worker, the number of tasks in the queue, and so on.
I want to create a Python script that will run in an infinite loop on one of the workers (or on a separate server), pull the above information from all the running workers at a regular interval, and write it to a log file.
I have read about celery_app.control.inspect(), but I am not sure how I can use it to pull information from all the workers rather than just one of them, because I am assuming it can only be called from within a running worker.
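For context, here is roughly what I had in mind — a minimal sketch, assuming a hypothetical broker URL and log file name. The `summarize()` helper just reshapes the dicts that `inspect().active()` and `inspect().reserved()` return (worker name → list of task dicts) into per-worker counts:

```python
import logging
import time


def summarize(active, reserved):
    """Combine inspect() 'active' and 'reserved' replies into per-worker counts.

    Each argument is a dict mapping worker name -> list of task dicts,
    or None when no workers replied.
    """
    workers = set(active or {}) | set(reserved or {})
    return {
        w: {
            "running": len((active or {}).get(w) or []),
            "reserved": len((reserved or {}).get(w) or []),
        }
        for w in sorted(workers)
    }


def monitor(broker_url="amqp://guest:guest@rabbitmq:5672//", interval=30):
    """Poll all workers over the broker and append counts to a log file."""
    # Lazy import so the pure helper above is usable without celery installed.
    from celery import Celery

    logging.basicConfig(
        filename="workers.log",  # hypothetical log file name
        level=logging.INFO,
        format="%(asctime)s %(message)s",
    )
    app = Celery(broker=broker_url)
    while True:
        insp = app.control.inspect(timeout=5)
        for worker, counts in summarize(insp.active(), insp.reserved()).items():
            logging.info("%s running=%d reserved=%d",
                         worker, counts["running"], counts["reserved"])
        time.sleep(interval)
```

I am unsure whether this actually reaches every worker when run from a standalone script instead of from inside a worker process.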
Another option I saw is running Flower in a separate Docker container and getting data from it through its API, but I wanted something more customizable.