Implementing this functionality is not simple: you either need a separate queue per target (which makes the waiting code far more complicated) or a single queue from which you skip over tasks whose targets are at capacity (which incurs a performance overhead). You could try extending ExecutorService to achieve this, but the extension appears to be non-trivial.
Updated answer / solution:
After thinking about this a little more, the easiest solution to the blocking problem is to keep a single blocking queue (as normal) plus a map of queues (one queue per target) and a count of available threads per target. The per-target queues hold only tasks that were passed over for execution (because too many threads were already running for that target) after being fetched from the main blocking queue.
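As a rough sketch, the state described above might look like the following in Java (all class and field names here are illustrative, not part of any real API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A task wrapped with the target it runs against (hypothetical name).
class TargetTask implements Runnable {
    final String target;   // which target this task belongs to
    final Runnable body;
    TargetTask(String target, Runnable body) { this.target = target; this.body = body; }
    public void run() { body.run(); }
}

// The shared state the answer describes (hypothetical name).
class PerTargetState {
    // The normal blocking queue that all pool threads take() from.
    final BlockingQueue<TargetTask> mainQueue = new LinkedBlockingQueue<>();
    // Passed-over tasks, keyed by target (holds only tasks skipped at capacity).
    final Map<String, Queue<TargetTask>> passedOver = new HashMap<>();
    // Remaining thread capacity per target; guarded by the shared lock.
    final Map<String, Integer> available = new HashMap<>();
    // The single lock that protects passedOver and available.
    final Object lock = new Object();
}
```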
So the execution flow would look like this:

- A task is submitted (with a specific target) by calling code.
- The task is put onto the blocking queue (likely wrapped in your own task class that carries the target information).
- A thread from the thread pool waits on the blocking queue (via take()).
- The thread takes the submitted task.
- The thread synchronizes on the lock.
- The thread checks the available count for that target:
  - If the count > 0, it decreases the count by 1, releases the lock, and runs the task.
  - Otherwise, it puts the task into the map of target-to-task queues (the passed-over task map), releases the lock, and goes back to waiting on the blocking queue.
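The take-and-check steps above could be sketched like this (a minimal, self-contained illustration; the class and method names such as `dispatchOnce` are made up for this sketch):

```java
import java.util.*;
import java.util.concurrent.*;

// One worker iteration: take a task, then under the lock either claim a slot
// for the task's target or park the task on the passed-over map.
class Dispatcher {
    final BlockingQueue<String[]> queue = new LinkedBlockingQueue<>(); // {target, payload}
    final Map<String, Integer> available = new HashMap<>();            // slots per target
    final Map<String, Deque<String[]>> passedOver = new HashMap<>();   // parked tasks
    final Object lock = new Object();

    Dispatcher(Map<String, Integer> limits) { available.putAll(limits); }

    // Returns the task the caller may run now, or null if it was passed over
    // (in which case the worker just loops back to take() again).
    String[] dispatchOnce() throws InterruptedException {
        String[] task = queue.take();
        synchronized (lock) {
            int n = available.getOrDefault(task[0], 0);
            if (n > 0) {
                available.put(task[0], n - 1);  // claim a slot, then run
                return task;
            }
            // At capacity for this target: park the task for later.
            passedOver.computeIfAbsent(task[0], k -> new ArrayDeque<>()).add(task);
            return null;
        }
    }
}
```

Note that the blocking `take()` happens outside the lock; only the count check and the park are done while holding it, so the critical section stays short.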
When a thread finishes executing a task, it:

- Synchronizes on the lock.
- Checks the available count for the target it just executed:
  - If the count == 0, it checks the passed-over task map for a task for this target; if one exists, it releases the lock and runs that task.
  - If the count was not 0, or no task for that target was in the passed-over map, it increases the available count for that target, releases the lock, and goes back to waiting on the blocking queue.
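The completion path could be sketched as follows (again a hypothetical class, not a real API): the finishing thread either hands its slot directly to a parked task for the same target, or releases the slot by incrementing the count.

```java
import java.util.*;

// Completion handling for a worker that just finished a task for `target`.
class Completion {
    final Map<String, Integer> available = new HashMap<>();      // slots per target
    final Map<String, Deque<Runnable>> passedOver = new HashMap<>(); // parked tasks
    final Object lock = new Object();

    // Returns the next task to run for this target, or null if the slot was
    // released (the worker then goes back to the blocking queue).
    Runnable afterTask(String target) {
        synchronized (lock) {
            int n = available.getOrDefault(target, 0);
            if (n == 0) {  // all other slots busy: a task may have been parked
                Deque<Runnable> q = passedOver.get(target);
                if (q != null && !q.isEmpty()) {
                    return q.poll();  // keep our slot and run the parked task
                }
            }
            available.merge(target, 1, Integer::sum);  // release the slot
            return null;
        }
    }
}
```

Because the slot is handed over without ever incrementing the count, no other thread can sneak in between the finish and the parked task starting, so the per-target limit is never exceeded.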
This solution avoids any significant performance overhead and does not require a separate thread just to manage the queue.