
I'm using the Redisson distributed executor service. A task class A is designed to process a given queue identified by its queue ID. There are multiple such queues. I need to ensure that at any time in the cluster, no more than one instance of task class A is working on the same queue.

Allowed:

Task class A instance1 working on queue1    
Task class A instance2 working on queue2

Not allowed:

Task class A instance1 working on queue1
Task class A instance2 working on queue1

There is a large number of such queues, and they change dynamically, so pre-allocating task instances is not an option.

The counterpart in the world of the Quartz distributed scheduler is an exclusive job, achieved with @DisallowConcurrentExecution. In the Akka framework, something similar is provided by default through the actor model.

How can this be achieved with the Redisson executor? If not directly, can it be achieved with a distributed lock (which is pretty handy in Redisson)?


1 Answer


I was looking for something similar and ended up writing my own distributed task logic, which uses a Redis cache under the hood for coordination. The Redis entry held a distributed lock (on any key of our choosing) as well as a next-execution-time field. Not the prettiest thing, but it worked.

I don't believe anything exists out of the box.
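That said, a minimal sketch of the lock-per-queue idea is possible with Redisson's own RLock inside the task itself. This is not the author's implementation, just an illustration: the lock name prefix, the 10-minute lease, and the processQueue method are my own placeholders, and it assumes the executor injects the RedissonClient into the task via @RInject.

    import java.io.Serializable;
    import java.util.concurrent.TimeUnit;

    import org.redisson.api.RLock;
    import org.redisson.api.RedissonClient;
    import org.redisson.api.annotation.RInject;

    public class QueueTask implements Runnable, Serializable {

        // Redisson injects the client into executor tasks annotated with @RInject
        @RInject
        private transient RedissonClient redisson;

        private final String queueId;

        public QueueTask(String queueId) {
            this.queueId = queueId;
        }

        @Override
        public void run() {
            // One lock per queue ID: only the holder may process that queue
            RLock lock = redisson.getLock("queue-lock:" + queueId);
            try {
                // Give up immediately if another instance already holds the lock;
                // the lease auto-expires (10 minutes here, an arbitrary choice)
                // in case this node dies without unlocking
                if (!lock.tryLock(0, 10, TimeUnit.MINUTES)) {
                    return; // another task instance is already working on this queue
                }
                try {
                    processQueue(queueId);
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        // hypothetical placeholder for the actual queue-processing logic
        private void processQueue(String queueId) {
        }
    }

Note that simply skipping when the lock is taken means the queue may go unprocessed until the next task submission; if you need "process later instead of never", you would have to add rescheduling logic (which is roughly what the next-execution-time field above was for).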
