
I have an application using Bull for a queue. Is there a parameter that I can pass it to set a TTL (time to live) for each entry automatically when it's created?

const Queue = require('bull')
const webApiQueue = new Queue('webApi', {redis: REDIS_URL })

// Producer
const webApiProducer = (data) => {
  webApiQueue.add(data, { lifo: true })
}

If setting a key with Redis directly, you can use `setex key_name 10000 key_data`.
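Concretely, the plain-Redis version of what I'm after would look like this (86400 seconds = 24 hours):

```shell
# Create a key that Redis deletes automatically after 24 hours.
redis-cli SETEX key_name 86400 key_data

# Check how many seconds remain before the key expires.
redis-cli TTL key_name
```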

But how can I implement such in Bull? It's just an API processing queue, and I want it to delete entries after 24hrs automatically.

I'm not seeing anything in the documentation: https://github.com/OptimalBits/bull#documentation

Ben in CA
    I have this same question. My ElastiCache is out of memory and is giving the error, `-OOM command not allowed when used memory > 'maxmemory'.` What solution did you find? – rinogo Apr 01 '22 at 00:43
  • I've just been clearing the queue monthly. There is also https://github.com/OptimalBits/bull/blob/master/REFERENCE.md#queueclean – Ben in CA Nov 14 '22 at 22:00
  • I'm also wondering if removeOnComplete and removeOnFail set to a number (e.g. 20,000) would achieve my purposes. I prefer a log of the last week or so, but after that I don't care. https://github.com/OptimalBits/bull/blob/master/REFERENCE.md#queueadd seems to indicate that "A number specified the amount of jobs to keep." - vs. a boolean value, default being false. – Ben in CA Nov 14 '22 at 22:03

1 Answer


From what I gather, it seems like explicitly setting a TTL (e.g. 24 hours) on the Redis keys is not the recommended way to solve this.

It seems like the canonical approach is to only clear keys when necessary (e.g. when we run out of memory).

This Bull Issue pointed me in the right direction.

If you'd like to have Bull manage its memory a little more, ahem, reasonably, try specifying removeOnComplete and removeOnFail as discussed in the documentation (note that both default to false).
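A sketch of what that could look like, building on the queue setup from the question (the `20000` cap and the `defaultJobOptions` wiring are my assumptions, not something the question mandates; per Bull's reference, a boolean removes the job immediately on finish, while a number keeps only the most recent N jobs):

```javascript
// removeOnComplete / removeOnFail accept:
//   true -> delete the job as soon as it finishes
//   N    -> keep only the most recent N completed/failed jobs
// Both default to false (keep everything), which is what fills up Redis.
const defaultJobOptions = {
  removeOnComplete: 20000, // hypothetical cap; tune to your throughput
  removeOnFail: 20000,
};

// Wiring it into the question's queue requires a live Redis, so it's
// shown commented out here:
//
// const Queue = require('bull');
// const webApiQueue = new Queue('webApi', {
//   redis: REDIS_URL,
//   defaultJobOptions, // applied to every job unless overridden in add()
// });
//
// const webApiProducer = (data) =>
//   webApiQueue.add(data, { lifo: true }); // inherits the removeOn* defaults
```

Setting this via `defaultJobOptions` keeps the producer unchanged; you could equally pass the options per-call, e.g. `webApiQueue.add(data, { lifo: true, removeOnComplete: 20000 })`.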

A totally different approach would be to solve the memory management issue with your Redis configuration by setting the maxmemory-policy to allkeys-lru as discussed in the Redis docs.
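For a self-managed Redis, that policy change is a one-liner via `redis-cli` (it can equally be set as `maxmemory-policy allkeys-lru` in `redis.conf`):

```shell
# Evict the least-recently-used keys (whether or not they have a TTL)
# once the maxmemory limit is reached.
redis-cli CONFIG SET maxmemory-policy allkeys-lru

# Verify the change took effect.
redis-cli CONFIG GET maxmemory-policy
```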

If you're using AWS ElastiCache instead, Amazon has some documentation on these same techniques. ElastiCache uses a maxmemory-policy of volatile-lfu by default, which only evicts keys that have a TTL set. Since Bull doesn't set TTLs, nothing is ever eligible for eviction, and you'll eventually hit the OOM error from the comments above. I'd recommend changing this to allkeys-lru.

For what it's worth, my guess is that the most performant solution is to modify maxmemory-policy in the Redis/ElastiCache configuration. That way, Redis itself is managing keys instead of Bull adding overhead for completed/failed job removal.

rinogo