
I am using python-rq and Redis to pass domain names to workers and collect the links from each domain.
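Roughly, the setup looks like this (a minimal sketch only; crawl_domain, the crawler module, and the Redis connection details below are placeholders, not my actual code):

from redis import Redis
from rq import Queue

from crawler import crawl_domain  # hypothetical job function that scrapes one domain

redis_conn = Redis(host='localhost', port=6379)
q = Queue('default', connection=redis_conn)

# one job per domain; workers started with `rq worker default` pick them up
domains = ['example.com', 'example.org']  # example data
for domain in domains:
    q.enqueue(crawl_domain, domain)

Inside the job, the link extraction is wrapped like this: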

try:
    for link in [h.get('href') for h in self.soup.find_all('a')]:
        --code goes here--
except Exception as ex:
    # print the error and skip this domain
    print(ex)

Whenever I run the code and an exception is caught, instead of just printing the error and skipping that domain, the job gets pushed to the failed queue. But the console output from rq never shows it moving the job to the failed queue.

The links are getting updated in the database, yet the domain is still being pushed onto the failed queue, and the failed queue's count is higher than the default queue's (the total number of domains enqueued).

Why is this happening? Please help.
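To inspect what rq has recorded, I look at the failed queue roughly like this (a sketch assuming rq 0.x, where failed jobs land on a queue literally named "failed"; the connection details are placeholders):

from redis import Redis
from rq import Queue

redis_conn = Redis(host='localhost', port=6379)
failed = Queue('failed', connection=redis_conn)

print(failed.count)              # number of jobs on the failed queue
for job in failed.jobs:
    print(job.id, job.exc_info)  # exc_info holds the traceback rq stored for the job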

  • Okay, I found the reason this is happening: the workers are stopping on their own, and whenever a worker stops, the failed queue count increases (I don't understand why). So it seems there is some problem with the supervisord setup I am running. Is there any alternative to supervisor? I used nohup, but it appends all the output to a nohup.out file that keeps growing in size. Any idea what I should do? (See the supervisord sketch after these comments.) – Mannu Nayyar Jun 22 '16 at 16:02
  • The answer to this question is in my other post: [python-rq worker closes automatically](http://stackoverflow.com/questions/37982703/python-rq-worker-closes-automatically) – Mannu Nayyar Jun 29 '16 at 10:00
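A sketch of the supervisord program section I would try, with log rotation so the worker's output does not grow without bound the way nohup.out does (the program name, paths, and queue name are placeholders):

[program:rq-worker]
command=rq worker default
directory=/path/to/project
autostart=true
autorestart=true
; cap each log file at 10 MB and keep 5 rotated copies
stdout_logfile=/var/log/rq-worker.out.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=5
stderr_logfile=/var/log/rq-worker.err.log
stderr_logfile_maxbytes=10MB
stderr_logfile_backups=5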

0 Answers