
I'm using a ThreadPoolExecutor backed by a LinkedBlockingQueue (default capacity Integer.MAX_VALUE) for multiple tasks, so why does submit(Callable task) throw RejectedExecutionException within 2000 tasks on WebSphere? Shouldn't the queue theoretically be able to hold about 2.1 billion tasks? Any information would be appreciated.

Two SOAP requests sent to my application trigger two different jobs. Each job is processed by a job class, which instantiates a custom ServiceExecutionFactory (a prototype-scoped bean) that acts as both producer and consumer factory. The factory has a workQueue field (size 1000) holding the tasks the job class produces; the consumer takes tasks out of the workQueue and submits them to the thread pool. BTW, I can't reproduce this with Tomcat.

Instantiating the ThreadPoolExecutor:

BlockingQueue<Runnable> executorQueue = new LinkedBlockingQueue<Runnable>();
ThreadPoolExecutor tpe = (new ThreadPoolExecutor(poolSize, poolSize, 10, TimeUnit.SECONDS, executorQueue, tf));
tpe.allowCoreThreadTimeOut(true);
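For reference, here is a self-contained version of that setup (the pool size and thread factory from my code are replaced with placeholders). With an unbounded LinkedBlockingQueue, a saturated pool should queue tasks rather than reject them:

```java
import java.util.concurrent.*;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws Exception {
        int poolSize = 2; // placeholder for the real pool size
        BlockingQueue<Runnable> executorQueue = new LinkedBlockingQueue<Runnable>();
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(
                poolSize, poolSize, 10, TimeUnit.SECONDS, executorQueue);
        tpe.allowCoreThreadTimeOut(true);

        // Submit far more tasks than there are threads; with an unbounded
        // queue, none of these should be rejected.
        for (int i = 0; i < 2000; i++) {
            tpe.submit(() -> 42);
        }
        tpe.shutdown();
        tpe.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("completed: " + tpe.getCompletedTaskCount());
    }
}
```

Run standalone, this queues and completes all 2000 tasks without a RejectedExecutionException, which is why the behavior on WebSphere surprises me.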

The executorService.submit(serviceInfo.getCallable()) call may throw RejectedExecutionException because all threads are occupied (including external threads) and the queue is full (I don't know why this happens, as mentioned above). The program then catches the exception and attempts to add the task back to the queue; at that moment the workQueue (size 1000) of ServiceExecutionFactoryBase may be completely full, so the add() method throws an IllegalStateException ("Queue full"). One more thing I find weird: the ConsumerService starts working again about 3 hours later, but that ConsumerService thread should have stopped because of the throw e statement, shouldn't it?

public class ConsumerService implements Callable<Object> {

    public Object call() throws Exception {
        // loop until interrupted
        try {
            while (true) {
                // take the next available item from the work queue
                ServiceInfo serviceInfo = queue.take();
                // feed it to the executor and save the future
                try {
                    Future<Object> future = executorService.submit(serviceInfo.getCallable());
                    serviceInfo.setFuture(future);
                    serviceInfo.setCallable(null);
                } catch (Exception e) {
                    if (serviceInfo.getRetrys() < getMaximumRequestRetries()) {
                        serviceInfo.setRetrys(serviceInfo.getRetrys() + 1);
                        // add() throws IllegalStateException if the queue is full
                        queue.add(serviceInfo);
                    } else {
                        ServiceCaller sc = serviceInfo.getCallable();
                        sc.factory.notifyServiceAborted(sc.serviceKey, e);
                    }
                }
            }
        } catch (InterruptedException e) {
            // the job is done
            return null;
        } catch (Exception e) {
            getLogger().error(e);
            throw e;
        }
    }
}
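As a side note, one way to avoid the rejection-and-retry path entirely (a sketch, not my factory's actual code) would be to install a RejectedExecutionHandler such as the built-in CallerRunsPolicy, which runs an overflowing task on the submitting thread instead of throwing:

```java
import java.util.concurrent.*;

public class CallerRunsDemo {
    public static void main(String[] args) throws Exception {
        // A deliberately tiny pool with a bounded queue, so rejection would
        // normally be easy to trigger.
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(
                1, 1, 10, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        // With CallerRunsPolicy, a task that can't be queued runs on the
        // caller's thread instead of raising RejectedExecutionException.
        for (int i = 0; i < 100; i++) {
            tpe.execute(() -> { /* simulate work */ });
        }
        tpe.shutdown();
        tpe.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("no rejection");
    }
}
```

The trade-off is backpressure: submissions slow down when the pool is saturated, rather than failing fast.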

I expect no RejectedExecutionException to occur, but it does.

Joey
  • I resolved this problem by altering the queue size from 1000 to 2000. The reason for this issue is that one working thread was still alive when the thread pool shut down and another thread pool started for the next loop. – Joey Nov 01 '19 at 02:51

1 Answer


The classes you are using, java.util.concurrent.ThreadPoolExecutor and java.util.concurrent.LinkedBlockingQueue are provided by the JDK and shouldn't have anything to do with which application server you are using. Are you using the same JDK in both cases? If so, another possible cause might be higher memory consumption in one of the environments such that the ThreadPoolExecutor is unable to allocate a new thread and rejects the submit request on that basis. Even though you have corePoolSize set to the same as maximumPoolSize, you are also setting allowCoreThreadTimeOut, making it possible for the number of threads to drop and requiring new threads to be created.
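To illustrate that last point, here is a standalone sketch (not the asker's configuration): with allowCoreThreadTimeOut enabled, an idle pool shrinks to zero threads, so a later submission has to allocate a brand-new thread, which is exactly the step that can fail under memory pressure.

```java
import java.util.concurrent.*;

public class CoreTimeoutDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(
                2, 2, 200, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        tpe.allowCoreThreadTimeOut(true); // even core threads may die when idle

        tpe.submit(() -> "work").get();

        // Wait past the keep-alive; idle core threads are reclaimed.
        Thread.sleep(1000);
        System.out.println("after idle: " + tpe.getPoolSize() + " threads");

        // A later submit must create a fresh thread.
        System.out.println("resubmit: " + tpe.submit(() -> "ok").get());
        tpe.shutdown();
    }
}
```

(When thread creation itself fails, the JVM typically raises an OutOfMemoryError, but some environments surface the failure differently, so it is worth checking the server logs around the rejection.)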

njr
  • Thanks for your quick reply, njr. Yes, I use the same JDK in both cases. I had considered the cause you mentioned, but when the IllegalStateException popped up (because, after the rejection, the code attempts to add the task back to the size-1000 queue) and stopped the consumer thread, the log showed the program still completed 1999 tasks: the producer thread loops 2000 times and blocks to get each future result one at a time, and one task was aborted due to the exception. – Joey Mar 29 '19 at 01:54