
Using Laravel on my website, I am trying to schedule a queued job that regularly deletes tens of millions of rows.

Having

Data::where('fetch_stamp','<',$limit_3m_date)->delete();

works on my local server, but throws the following exception on the production server:

PDOException: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction in /home/forge/website.com/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:142

Reading SO and Laracasts opinions, it seems everyone points back to using Eloquent's simple delete() method to delete bulk rows, but I wonder whether this is really the efficient approach?
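
For context, the deletion runs from a scheduled, queued job roughly like the sketch below; the job class name and the schedule are illustrative, not my exact code.

```php
<?php

namespace App\Jobs;

use App\Data;
use Carbon\Carbon;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

class PruneOldData implements ShouldQueue
{
    use Queueable;

    public function handle()
    {
        // "3m" in $limit_3m_date presumably means three months.
        $limit_3m_date = Carbon::now()->subMonths(3);

        // The statement that times out on production.
        Data::where('fetch_stamp', '<', $limit_3m_date)->delete();
    }
}

// In app/Console/Kernel.php, the job is dispatched on a schedule:
// $schedule->job(new \App\Jobs\PruneOldData)->monthly();
```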

Any clues? Thanks.

LearningPath
  • `Lock wait timeout exceeded` usually means the table is locked, usually by a long-running process. When that happens, you'll need to log into the database and run `show processlist;` to see if there are any long-running processes. You might also want to add an index on the `fetch_stamp` column so that it can find the records faster. – aynber May 08 '20 at 17:37
  • Yes, there is an index added. The long-running process is in fact this specific deletion process, which fails... I just don't find any clue in the documentation arguing that this delete() method is the right way to efficiently delete millions of rows in a database with Laravel – LearningPath May 08 '20 at 20:03
  • Using the Eloquent model may not be the best method, since it will throw out a bunch of Model deleting/deleted events. You might be better off using the query builder, `DB::table`, instead (sketched after these comments). I haven't tested it, so I could not say for certain. – aynber May 08 '20 at 20:07
  • Thank you, same failure: Illuminate\Database\QueryException : SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction (SQL: delete from `data_sth` where `fetch_stamp` < 2018-09-08 20:55:34). I guess I still have to dig into the documentation... thanks – LearningPath May 08 '20 at 20:59
  • How often does this query run? – mrhn May 08 '20 at 21:57
  • The job should run every month to delete old data, but because of a bug it didn't delete anything for one year, so now it has to delete almost 60,000,000 rows. I increased the MariaDB innodb_lock_wait_timeout from 600 to 3600 as an SO question suggested. The job didn't fail this time, but now a query in MySQL Workbench works on every table except this one, which gives: 00:11:00 SELECT * FROM forge.data_smth LIMIT 0, 1000 Error Code: 2013. Lost connection to MySQL server during query 30.000 sec. So I cannot check whether the deletion worked... but judging by the disk storage check, it doesn't seem so – LearningPath May 08 '20 at 22:15
  • Yes, you are right, I actually had that problem in production – mrhn May 08 '20 at 23:38
  • I can't see your previous answer, but it might work with `chunkById` instead of just `chunk`, as per https://stackoverflow.com/a/60124428/4220347 (sketched after these comments)... I will try – LearningPath May 08 '20 at 23:45
  • Did you fix this? – Lucas Martins Dec 09 '21 at 17:08
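
Following aynber's query-builder suggestion above, here is a minimal sketch of a batched delete, so each statement locks only a limited number of rows and commits quickly. The table name `data_sth` comes from the error message in the comments; the batch size and the pause are arbitrary starting points.

```php
use Illuminate\Support\Facades\DB;

// Delete in small batches so each DELETE holds its row locks only briefly,
// instead of one huge transaction that trips innodb_lock_wait_timeout.
do {
    $deleted = DB::table('data_sth')
        ->where('fetch_stamp', '<', $limit_3m_date)
        ->limit(10000)   // MySQL/MariaDB support DELETE ... LIMIT
        ->delete();      // returns the number of rows deleted

    usleep(100000);      // optional pause so other transactions can get in
} while ($deleted > 0);
```

Because no single statement touches more than 10,000 rows, the job can also be stopped and resumed without losing progress.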
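
And the `chunkById` variant from the linked answer, still using the Eloquent model; this is a sketch assuming the table has an auto-incrementing `id` primary key.

```php
use App\Data;

// chunkById pages through the matching rows by primary key, so deleting
// the current chunk does not shift later "pages" the way the offset-based
// chunk() method would.
Data::where('fetch_stamp', '<', $limit_3m_date)
    ->chunkById(1000, function ($rows) {
        Data::whereIn('id', $rows->pluck('id'))->delete();
    });
```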

0 Answers