
We have a job MyPrettyJob that is queued through Redis from a controller. When we run this job from the command line, it succeeds. When we run the job with little data the queue worker stays online, but when we run the job with a lot of data the worker crashes with exit code 12, which suggests an out-of-memory error.

The large job processes about 300,000 items, most of which depend on each other. Because of that, we cannot really split up this job without a severe performance impact; in some extreme cases it could take hours instead of the few minutes it currently takes.

For the large job, the queue outputs the following:

$ php artisan queue:work --queue=myqueue
Processing: App\Jobs\MyPrettyJob
Processed: App\Jobs\MyPrettyJob
$ echo $?
12

The queue worker crashes regardless of whether anything is queued behind that job. That seems to suggest the crash happens during cleanup of the large job, but there is no indication of what that cleanup is. The worker also crashes regardless of whether any database interactions are done, which rules out anything related to the database.

What is the queue doing in between jobs? Can I debug in any way why it is running out of memory after completing the job? Does the queue write something to a log, or is it doing something in Redis in between jobs? It seems like a really weird time for that process to crash.
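For reference, one way I could imagine measuring this is to log memory usage around each job via Laravel's queue events. This is only a minimal sketch (the `Queue::before`/`Queue::after` hooks and event classes are standard Laravel; the log messages and rounding are just illustrative), and I am not sure it would capture whatever happens after `JobProcessed` fires:

```php
<?php

namespace App\Providers;

use Illuminate\Queue\Events\JobProcessed;
use Illuminate\Queue\Events\JobProcessing;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Queue;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        // Log memory right before a job starts.
        Queue::before(function (JobProcessing $event) {
            Log::info('Before job', [
                'job'      => $event->job->resolveName(),
                'usage_mb' => round(memory_get_usage(true) / 1048576, 1),
            ]);
        });

        // Log memory right after the job's handle() method returns.
        // Anything that grows beyond this point happens in the worker's
        // own cleanup/loop, not inside the job itself.
        Queue::after(function (JobProcessed $event) {
            Log::info('After job', [
                'job'      => $event->job->resolveName(),
                'usage_mb' => round(memory_get_usage(true) / 1048576, 1),
                'peak_mb'  => round(memory_get_peak_usage(true) / 1048576, 1),
            ]);
        });
    }
}
```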

Sumurai8
  • push the data to the queue in chunks to avoid the out-of-memory error – FULL STACK DEV Apr 12 '19 at 14:20
  • We can't really, without a massive (on the order of hours) penalty in overhead. – Sumurai8 Apr 12 '19 at 14:22
  • @Sumurai8 What action is the job performing? – Alex Mayo Apr 12 '19 at 15:01
  • It is processing about 300,000 items that need to be put in the database after walking through the dataset. As most items depend on other items, the most efficient way to do this is to keep all data in memory. Since 300,000 individual updates take ages, we postpone any database interactions until the very end of the job. The queue crashes regardless of whether we do the database interaction, but the job succeeds in either case. I believe that means it reaches the end of the job's handle method. – Sumurai8 Apr 12 '19 at 15:25
  • The queue also crashes regardless of whether anything else is queued after it. I believe that means it happens during cleanup of that job. – Sumurai8 Apr 12 '19 at 15:26
  • I have been through something like this. You should dispatch them across multiple processes to save time, strip your data to the bare minimum, strip your fancy objects down to plain data arrays, and make the 300,000 items depend less on each other. You should be looking at your code and your data-processing methods rather than at memory and the queuing mechanism. I can hit the memory limit and the timeout any day, but then I can write code that lives within those limits. – f_i Apr 13 '19 at 09:20
  • @Faiz I am not looking for advice on how to improve my queue job. The job currently runs in O(N), at the cost of increased memory usage. It already uses nothing but arrays, and the memory consumption of the job, while large, is not ridiculous; we can run several of these worst-case jobs side by side without crashing the server. I specifically asked **what is the queue doing in-between jobs**, because it consumes *MORE* memory after my job finishes, when the resources should be freed and it should consume *LESS* memory. I am not sure how to debug what is happening there. – Sumurai8 Apr 13 '19 at 11:30

1 Answer


Exit code 12 is returned when the queue worker determines that it has used more memory than it is allowed (see https://github.com/laravel/framework/blob/5.8/src/Illuminate/Queue/Worker.php#L199-L210 for the specific section of code). If you run php artisan queue:work --memory=<megabytes> with a value large enough to fully run your job (for example, 1024 for 1 GB), the job should be able to complete and the worker should keep running afterwards.
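A simplified paraphrase of what the linked code does (this is not the framework source, and the exact memory call may differ between versions): after each job finishes, the worker compares PHP's reported memory usage to the --memory option, which defaults to 128 MB, and stops with exit code 12 when the limit is exceeded.

```php
<?php

// Paraphrased sketch of the worker's post-job memory check.
// The framework's actual implementation is in the Worker class linked above.
function memoryExceeded(int $memoryLimitMb): bool
{
    // Compare current usage (in MB) against the --memory option.
    return (memory_get_usage(true) / 1024 / 1024) >= $memoryLimitMb;
}

// Inside the worker loop, roughly:
//   processJob($job);
//   if (memoryExceeded($options->memory)) {
//       exit(12); // the exit code seen in the question
//   }
```

Because the check runs only after the job has finished, a memory-heavy job can complete successfully and still cause the worker to stop. Note also that this limit is separate from PHP's own memory_limit ini setting; exceeding that would cause a fatal error during the job rather than a clean exit 12 after it.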

Glen Solsberry