
I'm using Laravel 5.1.

The queues are used for data fetching/syncing between several systems.

I use the database driver; three "artisan queue:work --daemon" processes are running at all times.

The jobs are dispatched both by system users and scheduler (cron). Three queues are used to prioritize the jobs.
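The worker setup described above can be sketched as follows (the queue names and options are illustrative, not from my actual config; `--queue` tells a Laravel 5.1 worker to drain the listed queues in priority order):

```shell
# Three of these run as daemons; each polls the queues left-to-right,
# so jobs on "high" are always taken before "default" or "low".
php artisan queue:work --daemon --queue=high,default,low --sleep=3 --tries=3
```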

Everything seems to be working just fine - the jobs table gets filled with records, the system takes care of them and removes the ones that are done.

However, after some time, locking issues start to interfere:

SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

and

'RuntimeException' with message 'Can't swap PDO instance while within transaction.'

and

SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction

I haven't tried another queue driver yet. I'd really like to stay with the database driver though. The engine is InnoDB, and the jobs table has the default structure and indexes.

Is there a way to solve this issue? What are your thoughts?

It might be worth mentioning that I call DB::reconnect() inside my job classes, since the queue workers run as daemons.
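For context, that reconnect call looks roughly like this (a sketch; the class name and body are illustrative, not from the original post):

```php
<?php

use Illuminate\Support\Facades\DB;

class SyncRemoteData // extends Job, implements ShouldQueue in the real app
{
    public function handle()
    {
        // Daemon workers reuse one PDO connection across many jobs;
        // reconnecting avoids "MySQL server has gone away" on stale links.
        DB::reconnect();

        // ... fetch/sync work here ...
    }
}
```

Note that Laravel refuses to swap the PDO instance while a transaction is open, which is the likely source of the "Can't swap PDO instance while within transaction" RuntimeException quoted above: if a reconnect happens while the queue driver still holds an open transaction, it throws exactly that error.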

The jobs are dispatched using the DispatchesJobs trait, as one would expect. I don't interfere with the queue mechanism in any other way.

MaGnetas
  • Any solution for this? – Michel Ayres Sep 10 '15 at 14:40
  • Nope. Moved to beanstalkd for now. Although I liked the DB driver better because I could track the jobs and failed jobs by simply looking at DB tables... – MaGnetas Sep 10 '15 at 14:41
  • Study how Laravel deals with MySQL's "transactions". It sounds like you have started a transaction, then sat for a long time. `COMMIT` transactions as soon as practical. Check the use of `autocommit`. – Rick James Sep 12 '15 at 14:48
  • Proper indexing and usage of MySQL transactions can help you mate, back trace every module that is related to the locked table so you can pinpoint which function or part of the code 'makes' the lock. Cheers – Avidos Nov 03 '15 at 01:31
  • Well... the problem is I'm using Laravel's built-in queue subsystem and would like to stay away from everything behind "dispatch" and "handle". I'm not inserting the jobs into the DB myself, nor am I updating or removing them. – MaGnetas Nov 03 '15 at 20:29
  • I have the same problem. Any solutions? – jaaksarv Nov 09 '15 at 14:21
  • No, using beanstalkd for that project. Although I'd prefer DB queues – MaGnetas Nov 09 '15 at 14:51

3 Answers


This may not be the full answer, but here is some info.

When using SELECT ... FOR UPDATE statements, you may observe lock contention (deadlocks, etc.).

select … for update where x <= y

With a range scan like <=, the database locks all rows where the value is <= y, including any gaps. So if you have rows with values 1, 3, 5, it even locks the empty space between 1 and 3 in the index. This is called gap locking.
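As a sketch (the table and values are illustrative), here is how such a gap lock blocks a concurrent insert under InnoDB's default REPEATABLE READ isolation level:

```sql
-- Session 1: range scan with FOR UPDATE takes next-key locks
START TRANSACTION;
SELECT * FROM t WHERE y <= 5 FOR UPDATE;
-- InnoDB now locks the matching index records AND the gaps
-- between them (e.g. the empty space between y = 1 and y = 3).

-- Session 2: inserting into a locked gap blocks until session 1
-- commits or rolls back; under load this surfaces as
-- "Lock wait timeout exceeded" or a deadlock.
INSERT INTO t (y) VALUES (2);
```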

You can see the issue with this command:

SHOW ENGINE INNODB STATUS;

---TRANSACTION 72C, ACTIVE 755 sec
4 lock struct(s), heap size 1248, 3 row lock(s), undo log entries 1
MySQL thread id 3, OS thread handle 0x7f84a78ba700, query id 163 localhost msandbox
TABLE LOCK table test.t trx id 72C lock mode IX
RECORD LOCKS space id 19 page no 4 n bits 80 index age of table test.t trx id 72C lock_mode X
RECORD LOCKS space id 19 page no 3 n bits 80 index GEN_CLUST_INDEX of table test.t trx id 72C lock_mode X locks rec but not gap
RECORD LOCKS space id 19 page no 4 n bits 80 index age of table test.t trx id 72C lock_mode X locks gap before rec

Note the gap lock reported in the last line.

If you have a lot of gap locks in your transactions, hurting concurrency and performance, you can disable them in two different ways:

1- Change the isolation level to READ COMMITTED. In this isolation level, it is normal and expected that query results can change during a transaction, so there is no need to create gap locks to prevent that from happening.

2- Set innodb_locks_unsafe_for_binlog = 1. This disables gap locks except for foreign-key constraint checking and duplicate-key checking.
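If you go this route, both options can be set server-wide in my.cnf (a sketch; verify the trade-offs first, and note that innodb_locks_unsafe_for_binlog is deprecated in MySQL 5.7 and removed in 8.0):

```ini
# my.cnf -- reduce InnoDB gap locking
[mysqld]
# Option 1: weaker isolation, fewer gap locks
transaction-isolation = READ-COMMITTED

# Option 2: deprecated in 5.7, removed in 8.0
innodb_locks_unsafe_for_binlog = 1
```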

https://www.percona.com/blog/2012/03/27/innodbs-gap-locks/

guy_fawkes
  • all of this might be helpful, thank you. But I'd really hate to mess with the Laravel core. I believe this should somehow be solved "out of the box". Although I could make some "good practice" changes to my code or workflow to avoid the issue. – MaGnetas Jan 18 '16 at 21:14

I'm writing a queue management system in Laravel. I have multiple jobs, each with multiple users to whom I should send emails. I run many workers with Supervisor, and to avoid sending the same email to a user more than once, I wrote this code. Hope it helps somebody with this problem:

DB::transaction(function () use ($job) {

    // Shared lock: multiple workers can read the same rows concurrently
    if (!count($job->jobUsers()->sharedLock()->get())) {
        Log::info('There is no user in this job');
        $job->status = Job::STATUS_FINISHED;
        $job->save();
        return;
    }

    foreach ($job->jobUsers as $jobUser) {
        Log::info($jobUser->user_id);

        JobUser::where('job_id', $jobUser->job_id)
            ->where('user_id', $jobUser->user_id)
            ->lockForUpdate() // exclusive lock
            ->update(['status' => JobUser::STATUS_SENT]);
    }

});
Mikayel Margaryan
  • Thanks. This is close to the solution I am looking for. Can you please be more detailed? What are job users and their relationship to jobs? Thanks – uncle-tee Apr 18 '20 at 07:54
  • I posted this a year ago; as I remember, by "job users" I meant all users mentioned in the job as waiting for an email. Later I realized that this kind of task should be done with services beyond Laravel alone. Try RabbitMQ: push each per-user email-sending job as one task in the queue, then listen to that queue with a Laravel command that can be run as multiple workers. With this structure you can avoid any lock problem. – Mikayel Margaryan Apr 20 '20 at 12:54
  • Thanks for this. This will go a long way. However, will I still have to use the lockForUpdate() method when updating the field in the DB? – uncle-tee Apr 20 '20 at 16:44

In your Supervisor config file, set

numprocs = 1

The reason is that the "1205 Lock wait timeout exceeded; try restarting transaction" error occurs only when multiple queue workers run the same task against the same table, or the same row of a table, which causes a deadlock.
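A minimal Supervisor program block with this setting might look like the following (the program name and paths are illustrative):

```ini
[program:laravel-worker]
command=php /var/www/app/artisan queue:work --daemon --sleep=3 --tries=3
numprocs=1
autostart=true
autorestart=true
```

Note the trade-off: with a single worker process, jobs are processed strictly one at a time, so this avoids lock contention by giving up queue concurrency.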

kmnowak