I have a simple web application written with the Laravel 4.2 framework. I have configured Laravel's queue component to push new jobs to a locally running beanstalkd server.
Essentially, there is a POST route that will add an item to the beanstalkd tube.
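For context, the route looks roughly like this (the URI and payload shape are assumptions; the question does not show the actual route code):

```php
<?php

// Hypothetical route -- illustrative only.
// Queue::push() serializes the payload and puts it onto the
// beanstalkd tube; ProcessDataJob::fire() later receives it.
Route::post('/process', function () {
    Queue::push('ProcessDataJob', ['post' => Input::all()]);

    return Response::json(['status' => 'queued']);
});
```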
I then have supervisord set up to run artisan queue:listen as three separate processes. The issue I am seeing is that the different queue:listen processes will end up spawning anywhere from one to three queue:work child processes for a single inserted job. The end result is that one job inserted into the queue is sometimes processed by multiple workers at the same time, which is exactly what I am trying to avoid.
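For reference, the supervisord program block looks roughly like this (the program name and paths are assumptions; `numprocs=3` is what produces the three listener processes):

```ini
; Hypothetical supervisord config -- illustrative only.
[program:laravel-queue-listen]
command=php /var/www/app/artisan queue:listen
numprocs=3
; process_name must be unique when numprocs > 1
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
```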
The job code is relatively simple:
<?php

use App\Repositories\URLRepository;

class ProcessDataJob {

    private $urls;

    public function __construct(URLRepository $urls)
    {
        $this->urls = $urls;
    }

    public function fire($job, $data)
    {
        // Guard against a missing or malformed payload before using it.
        if (!isset($data['post']) || !is_array($data['post'])) {
            Log::error('[Job #'.$job->getJobId().'] $data[\'post\'] was empty inside ProcessDataJob. Deleting job. Quitting. Bye!');
            $job->delete();
            return false;
        }

        $input = $data['post'];

        //
        // ... code that will take a few hours to run.
        //

        $job->delete();
        Log::info('[Job #'.$job->getJobId().'] ProcessDataJob was successful, deleting the job!');
        return true;
    }
}
The fun part is that most of the (duplicated) queue workers fail when deleting the job, leaving this in the error log:
exception 'Pheanstalk_Exception_ServerException' with message 'Job 3248 NOT_FOUND: does not exist or is not reserved by client'
The ttr (Time To Run) is set to 172,800 seconds (48 hours), which is much larger than the time it takes the job to complete.
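In Laravel 4.2 the ttr for the beanstalkd driver is set per connection in app/config/queue.php; a sketch of the relevant block for this setup (host and queue name are assumptions):

```php
<?php

// app/config/queue.php (excerpt) -- the 'ttr' value is handed to
// Pheanstalk as the job's time-to-run when the job is pushed.
'connections' => array(
    'beanstalkd' => array(
        'driver' => 'beanstalkd',
        'host'   => 'localhost',
        'queue'  => 'default',  // the tube jobs are pushed onto
        'ttr'    => 172800,     // 48 hours, as described above
    ),
),
```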