
On my Linux server I have the following cron entry:

* * * * * php /var/www/core/v1/general-api/artisan schedule:run >> /dev/null 2>&1

The cron runs correctly. I have a scheduled command defined in my Kernel.php like so:

    protected function schedule(Schedule $schedule)
    {
        $schedule->command('pickup:save')
            ->dailyAt('01:00');
        $schedule->command('queue:restart')->hourly();
    }

The scheduled task at 1AM runs my custom command `php artisan pickup:save`. The only thing this command does is dispatch a job I have defined:

    public function handle()
    {
        $job = (new SaveDailyPropertyPickup());
        dispatch($job);
    }
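For `dispatch()` to push the job onto the queue (a row in the jobs table with the database driver) rather than run it synchronously, the job class must implement `ShouldQueue`. A minimal sketch of such a class — only the class name comes from the question, the body and the DB work are assumptions:

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    // Implementing ShouldQueue is what makes dispatch() queue the job
    // instead of executing it inline during the artisan command.
    class SaveDailyPropertyPickup implements ShouldQueue
    {
        use InteractsWithQueue, Queueable, SerializesModels;

        public function handle()
        {
            // Query the source DB, process the result set,
            // insert into the other DB (per the comments below,
            // this takes 15+ minutes).
        }
    }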

So this job is dispatched, and since I am using the database driver for my queues, a new row is inserted into the jobs table.

Everything works perfectly up to here.

Since I need a queue listener to process the queue and since this queue listener has to run basically forever, I start the queue listener like this:

nohup php artisan queue:listen --tries=3 &

This writes all the output from nohup to a file called nohup.out in my /home directory.

What happens is this: the first time, the queue is processed and the code defined in the handle function of my SaveDailyPropertyPickup job is executed.

AFTER it is executed once, my queue listener just exits. When I check the nohup.out log, I can see the following error:

In Process.php line 1335:

  The process "'/usr/bin/php7.1' 'artisan' queue:work '' --once --queue='default' 
  --delay=0 --memory=128 --sleep=3 --tries=3" exceeded the timeout of 60 seconds.

I checked this answer and it says to specify the timeout as 0 when starting the queue listener, but there are also answers recommending against this approach. I haven't tried it, so I don't know if it will work in my situation.

Any recommendations for my current situation?

The Laravel version is 5.4.

Thanks


1 Answer


Call it with the `--timeout` parameter; figure out how long your job takes and scale from there.

nohup php artisan queue:listen --tries=3 --timeout=600 &

In your queue config you need to update `retry_after`; it has to be larger than the timeout, to avoid the same job being processed twice at once. Assuming you use beanstalkd (with the database driver from the question, set `retry_after` on the `database` connection instead):

    'beanstalkd' => [
        ...
        'retry_after' => 630,
        ...
    ],
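A high `--timeout` on the listener applies to every job it picks up. Laravel also lets a single job declare its own limit via a public `$timeout` property, which keeps the long limit scoped to the one slow job; this is a sketch, and you should verify the property is honored on your 5.4 install:

    class SaveDailyPropertyPickup implements ShouldQueue
    {
        // Overrides the worker's --timeout for this job only,
        // so quick jobs on the same queue keep a short limit.
        public $timeout = 600;
    }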

In more professional settings, I often end up with one queue for short-running jobs and another for long-running operations.

  • Hi, thanks for the answer. The job in question takes quite a while — last I checked, at least 15-18 minutes. It has to query a DB, process the result set, then insert it into another DB. Another small issue with explicitly specifying the timeout is that, as the amount of data in the DB accumulates each day, the job will take longer and longer. So is it OK to, for example, specify a timeout like 1000? – user9492428 Mar 30 '19 at 08:57
  • 1
    I would create two queues one for quick jobs and one for long running jobs. High timeout is not good on quick job that crashes then it have to wait the timeout time until it can conclude it crashed. Another strategy could be to split your job into smaller jobs, lets say you want to parse 1000 elements make a job for 10 of these elements and then create a 100 jobs. – mrhn Mar 30 '19 at 09:36
  • Understood. So is it bad practice to have an arbitrarily high timeout value on a job that takes a long time? Also, I updated the server with your suggested changes; I will let you know if the problem is solved at 1AM today (in 7 hours). – user9492428 Mar 30 '19 at 12:17
  • "So is it bad practice to have an arbitrarily high timeout value on a job that takes a long time?" Yes. I have seen a production environment that had problems where a lot of jobs crashed; since the timeout was 4 hours, it took a while before the worker figured out each job had crashed and retried it. – mrhn Mar 30 '19 at 19:37
  • And remember that 600 seconds is an arbitrary number I made up; you need to figure out how long it takes for yourself :) – mrhn Mar 30 '19 at 19:38
  • 1
    to update you. The scheduler works now and it doesnt time out anymore. The first time the scheduler ran it took 21 minutes so I set the `--timeout` to 1350 and `retry_after` to 1400. It works now, but the time taken to complete this will be always increasing as the amount of data increases. I tried to break it up into smaller jobs as you suggested but I cant break the time consuming part up, unfortunately. The parts I CAN break up dont take that long. Anyway, you answered my original question so thank you very much for your help – user9492428 Mar 31 '19 at 20:38
  • If so, make a new question and ping me in this thread, and I can see if I can figure out a clever solution for you. I have a lot of experience running code through queues; there is usually a solution :) – mrhn Mar 31 '19 at 23:16
  • Thanks a lot! I really appreciate it, legitimately. I will give it a go on my own first and I'll make another question if I get stuck somewhere. Also, apologies for the delayed response — a little busy these days. Thanks again! – user9492428 Apr 03 '19 at 17:03
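The splitting strategy from the comments could be sketched like this inside the command's handle method — the `Property` model, the `SavePropertyPickupChunk` job, and the chunk size are all hypothetical names for illustration:

    // Instead of one monolithic job, dispatch one small job per chunk
    // of records; each chunk finishes well inside a short timeout, and
    // total runtime scales by adding workers rather than raising limits.
    Property::query()
        ->select('id')
        ->chunk(100, function ($properties) {
            dispatch(new SavePropertyPickupChunk($properties->pluck('id')->all()));
        });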