
I have been working on a project which uses Celery beat for scheduling tasks. Locally, I have been using RabbitMQ as the broker and everything was working fine.

When I pushed my project to the remote server, I changed the broker to Redis.

The celery beat process seems to work fine: I can see in the console that it is scheduling the task. But the worker is unable to pick up the task. Even when I call the task asynchronously from a shell using delay(), the task does not get picked up by the worker.
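The setup is roughly like this (the module and task names below are illustrative, not my actual code):

    # tasks.py -- minimal sketch of the app and a task
    from celery import Celery

    app = Celery('tasks', broker='redis://localhost:6379/0')

    @app.task
    def add(x, y):
        return x + y

And from a shell:

    >>> from tasks import add
    >>> add.delay(2, 3)  # the message gets published, but no worker consumes it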

I assumed there could be something weird with Redis, but that doesn't seem to be the case: I got the project working with Redis locally, and when I switched the broker on the server back to RabbitMQ, I still got the same issue.

My local machine runs Mac OS and the server runs Debian 6.

What could be the issue? How can I debug this situation and just get the worker to consume tasks and do the work? I am using Python 2.7.
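For reference, this is the kind of sanity check I have been running from a shell (proj stands in for my actual app module):

    >>> from proj.celery import app
    >>> app.control.ping(timeout=1.0)  # broadcast a ping to all workers
    []  # empty on the server; locally this returns the worker replies

or the command-line equivalent:

    $ celery -A proj inspect ping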

vaidik
  • I started my worker process and then tried to ping the worker from another process (an IPython shell) and didn't get any response. On my local machine, by contrast, this works and I get the list of running workers. I don't know what's happening. – vaidik Dec 10 '13 at 18:02

1 Answer


I figured out the problem, and it turned out to be a really silly one; it mostly shows that I was following a bad practice.

I was using Gevent on the server to measure performance differences, and the way I was spawning the Celery workers was not the right way to run Gevent code. It's not that I didn't know Celery needs a command-line flag to work with Gevent; it's that I wasn't using Gevent on my local machine, so starting the worker the usual way always worked for me there. It just never struck me that the Gevent code on the server was what was causing the issue.
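For anyone who runs into the same thing: a worker that has to execute Gevent-based code must be started with the gevent execution pool, roughly like this (proj is a placeholder for your app module):

    $ celery -A proj worker -P gevent -c 100

Without -P gevent (or the long form --pool=gevent), the worker uses the default prefork pool, which is what I was doing on the server.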

All resolved after almost 20 hours of debugging, googling and chatting on IRC.

vaidik