
I am using the Executors framework in Java to create thread pools for a multi-threaded application, and I have a question related to performance.

I have an application which can work in realtime or non-realtime mode. In case it's realtime, I'm simply using the following:

THREAD_POOL = Executors.newCachedThreadPool();

But in case it's not realtime, I want the ability to control the size of my thread pool. To do this, I'm thinking about 2 options, but I don't really understand the difference, and which one would perform better.

Option 1 is to use the simple way:

THREAD_POOL = Executors.newFixedThreadPool(threadPoolSize);

Option 2 is to create my own ThreadPoolExecutor like this:

RejectedExecutionHandler rejectHandler = new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            executor.getQueue().put(r);
        } catch (Exception e) {}
    }
};

THREAD_POOL = new ThreadPoolExecutor(threadPoolSize, threadPoolSize,
        0, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(10000), rejectHandler);

I would like to understand the advantage of using the more complex option 2, and also whether I should use a different data structure than LinkedBlockingQueue. Any help would be appreciated.

Charles Menguy

1 Answer


Looking at the source code, you'll realize that:

Executors.newFixedThreadPool(threadPoolSize);

is equivalent to:

return new ThreadPoolExecutor(threadPoolSize, threadPoolSize, 0L, MILLISECONDS,
                              new LinkedBlockingQueue<Runnable>());

Since it doesn't provide an explicit RejectedExecutionHandler, the default AbortPolicy is used. It throws RejectedExecutionException once the queue is full. But the queue is unbounded, so it can never be full. Thus this executor accepts an infinite1 number of tasks.
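A quick sketch of that behavior (class name and task count are mine, not from the question): submitting far more tasks than there are threads never triggers rejection, because everything beyond the pool size just piles up in the unbounded queue.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws Exception {
        // Fixed pool of 2 threads, backed by an unbounded LinkedBlockingQueue.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // 1000 submissions, 2 workers: no RejectedExecutionException is ever
        // thrown; the surplus tasks simply wait in the queue.
        for (int i = 0; i < 1000; i++) {
            pool.execute(() -> { /* short task */ });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("all 1000 tasks accepted");
    }
}
```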

Your declaration is much more complex and quite different:

  • new LinkedBlockingQueue<Runnable>(10000) will cause the thread pool to reject tasks (i.e. invoke the RejectedExecutionHandler) once more than 10000 are awaiting.

  • About your RejectedExecutionHandler: if the pool discovers it cannot put any more runnables into the queue, it calls your handler. In this handler you try to put that Runnable into the queue again, which blocks until the queue frees up. Finally you swallow the exception.

    Looking at your comments below, it seems you are trying to block or somehow throttle clients if the task queue grows too large. I don't think blocking inside a RejectedExecutionHandler is a good idea; instead, consider the CallerRunsPolicy rejection policy. It's not entirely the same, but close enough.
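Here is a minimal sketch of that throttling effect (class name, queue size, and task counts are illustrative, not from the question): with a bounded queue and CallerRunsPolicy, overflow tasks run on the submitting thread itself, which naturally slows the producer down instead of aborting or discarding anything.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CallerRunsDemo {
    public static void main(String[] args) throws Exception {
        AtomicInteger completed = new AtomicInteger();

        // One worker thread and a tiny bounded queue. When both are full,
        // CallerRunsPolicy executes the task on the submitting thread,
        // throttling the caller rather than rejecting the task.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(2),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(20);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                completed.incrementAndGet();
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // Every task ran; none were rejected or dropped.
        System.out.println("completed: " + completed.get());
    }
}
```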

To wrap up: if you want to limit the number of pending tasks, your approach is almost good. If you want to limit the number of concurrent threads, the first one-liner is enough.

1 - assuming 2^31 is infinity

Tomasz Nurkiewicz
  • The `RejectedExecutionHandler` is actually blocking: the call to `executor.getQueue().put(r);` will block until the queue frees up, so in the end my handler allows me to keep a bounded queue without aborting any task. Unless I'm mistaken. +1 for the other details. – Charles Menguy Jan 11 '13 at 20:59
  • @CharlesMenguy: thank you for clarification, my bad, I'll update my question. But what do you want to achieve by blocking inside `RejectedExecutionHandler`? I believe it might have some really unexpected side effects like blocking caller thread. Maybe you need `CallerRunsPolicy`? – Tomasz Nurkiewicz Jan 11 '13 at 21:02
  • Actually, after looking at it, `CallerRunsPolicy` sounds really promising for what I want to do; I will give it a try, thanks! Maybe you can add that in the answer and I will accept your answer. – Charles Menguy Jan 11 '13 at 21:12