
I am working with Laravel queue jobs backed by Redis, using Supervisor to manage multiple workers.
I run more than one numprocs; everything works perfectly for some days, and then the workers stop processing even though Supervisor itself is still in an active state.

This happens because, when supervisord fails to start a program/worker more times than the startretries value set in the config file, that worker goes into the FATAL state and stops processing jobs. Once all the workers end up in this state, nothing is processed at all, and we have to restart Supervisor manually to get processing going again.

But restarting manually is not a proper solution.
My question is: why does Supervisor fail to start the workers, and what is the solution for that?
Reference, Supervisor docs: http://supervisord.org/subprocess.html
My config file looks like this:

[program:name]
process_name=%(program_name)s_%(process_num)02d
command=php /path/artisan queue:work --queue=queue1,queue2,queue3,queue4,default --tries=1 --daemon
autostart=true
autorestart=true
startretries=15
numprocs=150
user=root
redirect_stderr=true
stdout_logfile=/path/worker.log
stderr_logfile=/path/workerError.log
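For comparison, a slightly more defensive version of such a config is sketched below. The values are illustrative, not prescriptive, and flag availability depends on your Laravel version: on recent versions `--daemon` is unnecessary (queue:work is long-running by default), and `--memory` makes the worker exit cleanly before hitting PHP's memory limit so that supervisor (with autorestart=true) starts a fresh process. The program name `laravelw` is taken from the log entries quoted in the first answer below.

```ini
[program:laravelw]
process_name=%(program_name)s_%(process_num)02d
command=php /path/artisan queue:work --queue=queue1,queue2,queue3,queue4,default --tries=1 --sleep=3 --memory=128
autostart=true
autorestart=true
; process must stay up this long to count as RUNNING
startsecs=5
startretries=15
numprocs=150
user=root
stdout_logfile=/path/worker.log
stderr_logfile=/path/workerError.log
; give an in-flight job time to finish before SIGKILL on stop/restart
stopwaitsecs=60
```

Note that the original config combines `redirect_stderr=true` with a separate `stderr_logfile`; with `redirect_stderr=true`, stderr is merged into stdout and the separate stderr log file stays empty, so the sketch above drops the redirect.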

Update
My Supervisor log file looks like this: (screenshot; the relevant entries are quoted in the first answer below)

My stdout log file looks like this: (screenshot)

Any help will be greatly appreciated.

Martijn Pieters
Bibhudatta Sahoo
  • Can you show your supervisor log files? – Pavel Feb 01 '18 at 08:04
  • Hi @Pavel I updated my question with log file data. Have a look. – Bibhudatta Sahoo Feb 01 '18 at 10:22
  • Well, a difficult question; the only idea is to check the moment of failure: maybe there are some problems with memory, or something else. So try to look at the other logs (nginx, PHP, and so on). Also, maybe it's not a good idea, but you can try to increase the `startretries` param. – Pavel Feb 01 '18 at 10:43
  • We cannot just increase the `startretries` value; we need to find out why it is failing to start the workers. – Bibhudatta Sahoo Feb 01 '18 at 13:14
  • You need to add the logs that are created by the PHP process. The current logs only show that worker 106 produced some serious problem and was shut down. The interesting logs should be in `worker.log` or `workerError.log`. It could be anything from 'too many connections' in MySQL to something completely different. You may also have a look at your `/var/log/daemon.log`, as it will contain fatal errors inside the PHP process run by the workers. – cb0 Feb 06 '18 at 16:36
  • `stdout_logfile=/path/worker.log` `stderr_logfile=/path/workerError.log` did you check these files? do these paths exist? – 0kay Feb 06 '18 at 19:23
  • @0kay I have updated the question with the worker logs, and I don't have a workerError log so far. – Bibhudatta Sahoo Feb 07 '18 at 07:04
  • What happens when you run the cmd on the servers command line manually? `php /path/artisan queue:work --queue=queue1,queue2,queue3,queue4,default --tries=1` – 0kay Feb 07 '18 at 15:28
  • It executes my jobs and gives me the desired output without fail – Bibhudatta Sahoo Feb 08 '18 at 05:18
  • If you run `sudo su` then `cd` to your project folder and run `php /path/artisan queue:work --queue=queue1,queue2,queue3,queue4,default --tries=1` what happens? – Jamesking56 Feb 09 '18 at 13:38
  • @Jamesking56 it processes the jobs present in the queue – Bibhudatta Sahoo Feb 10 '18 at 06:30

2 Answers


The relevant log entries are:

 exited: laravelw_106 (exit status 0; not expected)
 gave up: laravelw_106 entered FATAL state, too many start retries too quickly

The Laravel queue worker stops immediately after being started, for some reason. The queue worker is supposed to be long-running.

You need to find out why it exits; maybe you have an exit() or die() statement somewhere in your jobs.
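One way to see what supervisor sees is to capture the worker's exit status directly. A minimal sketch, in which `sh -c 'exit 0'` is a stand-in stub for the real artisan command (which is assumed, not runnable here):

```shell
# Stand-in for: php /path/artisan queue:work --queue=... --tries=1
# Swap in the real command to capture the exit status supervisor sees
# (the log above reports "exit status 0; not expected").
sh -c 'exit 0'
echo "worker exit status: $?"
```

If this prints a status of 0 almost immediately, the worker is exiting cleanly on its own rather than crashing, which matches the `not expected` log entry.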

cweiske

Your consumers / workers die very soon after they are started. A consumer should be a process that runs in an infinite loop, waiting for tasks/messages. You said you have a return() after the task is completed; maybe this is what stops the worker.

Try running the worker manually and then producing messages on the queue. The worker should not stop after completing only one task.
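The "worker should stay alive" check can be sketched in shell. Here `sleep` is a stand-in for the artisan worker command (not run here); the point is that a healthy long-running worker should still be alive when probed a moment after starting:

```shell
# Start a stand-in long-running process ("sleep 5" replaces
# "php /path/artisan queue:work ...") and confirm it is still alive
# after a moment -- a healthy queue worker must pass this check.
sleep 5 &
WORKER_PID=$!
sleep 1
if kill -0 "$WORKER_PID" 2>/dev/null; then
  echo "worker still running"
else
  echo "worker exited early"
fi
kill "$WORKER_PID" 2>/dev/null
```

Run the same probe against the real worker's PID; if it reports an early exit, the worker is returning after one task instead of looping.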

bogdancep
  • Hi @bogdancep, when I run the worker manually it runs perfectly, but I only return a result to the job handler from my user-defined function; I am not returning anything from the job handler method. So I am not stopping the worker after it completes one task in my code. – Bibhudatta Sahoo Feb 10 '18 at 06:37