
I'm using the Fork/Join Framework to do a lot of calculations in parallel. I've implemented the Producer-Consumer Pattern with a simple logger to create console output while the program is calculating. The producer and consumer share a BlockingQueue, and I use the put() method so I don't miss an update.

I've noticed that the performance is horrible under some circumstances, and VisualVM showed me that the put() method is the cause.

When there's a problem putting new messages into the BlockingQueue, my RecursiveTasks have to wait, but the ForkJoinPool continues to fork new tasks, so 2000-5000 tasks end up trying to access the put() method, which results in very high contention.

Is there a proper way to handle such situations?

I thought something like

if (!blockingQueue.offer(message)) {
    blockingQueue.put(message);
}

could be more performant when using a theoretically unlimited BlockingQueue.
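To make the setup concrete, here is a minimal sketch of the producer-consumer logger described above (class and variable names are illustrative, not from my actual code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class LoggerSketch {
    public static void main(String[] args) throws InterruptedException {
        // Unbounded queue shared between producers and the logger thread.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take(); // blocks until a message arrives
                    if ("POISON".equals(msg)) break; // sentinel to stop the logger
                    System.out.println(msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producers (here just the main thread) use put() so no update is lost.
        for (int i = 0; i < 3; i++) {
            queue.put("message " + i);
        }
        queue.put("POISON");
        consumer.join();
    }
}
```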

So my question is: is there a proper and performant way to put objects into a BlockingQueue without losing an update?

Thank you in advance!

cmtjk
  • Surely offering-then-putting like that is just the same as putting? – Andy Turner Nov 26 '15 at 15:24
  • Add enough code so we can see the problem. Exactly how are you using the F/J? etc. – edharned Nov 26 '15 at 15:28
  • @AndyTurner Is it? So the snippet I mentioned doesn't make sense at all? – cmtjk Nov 26 '15 at 15:44
  • @parboiledRice well, think about it: if offer succeeds, it's like calling `put` when there is capacity in the queue. if offer fails, you call `put` and block until there is capacity in the queue. Either way, you get the blocking behaviour of `put`. – Andy Turner Nov 26 '15 at 15:47
  • @AndyTurner yes, that's true, but I thought `offer()` could be much faster when there is no concurrency because it's not synchronized and doesn't need to acquire any locks. Calling `put()` afterwards makes sure the update doesn't get lost if `offer()` returns false, which should not normally be the case. – cmtjk Nov 26 '15 at 15:54
  • Have you already tried ThreadPoolExecutor with a bounded queue? – Ravindra babu Nov 26 '15 at 15:57
  • @parboiledRice you've not actually specified what implementation of `BlockingQueue` you are using. There is nothing in the interface documentation which says that particular methods are synchronized (as in, acquiring monitors) or not, in any stronger way than "using internal locks or other forms of concurrency control". – Andy Turner Nov 26 '15 at 15:57
  • @AndyTurner I'm using the `LinkedBlockingQueue`, and after reading the doc I see the only time this method blocks is when the queue is full. I'm using an unlimited `LinkedBlockingQueue` without a fixed capacity, so the producers should not have any problem putting new messages into the queue. – cmtjk Nov 26 '15 at 20:36
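The point raised in the comments can be checked directly: a default-constructed `LinkedBlockingQueue` has capacity `Integer.MAX_VALUE`, so `offer()` practically never fails and the `put()` fallback is effectively dead code (a small sketch, not from the asker's program):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class OfferDemo {
    public static void main(String[] args) {
        // No capacity argument: the queue is bounded only by Integer.MAX_VALUE.
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // offer() returns false only when the queue is full,
        // which effectively never happens for an unbounded queue.
        boolean accepted = queue.offer("message");
        System.out.println(accepted); // true
    }
}
```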

1 Answer


If your pool is spawning 2000-5000 tasks, then that is your problem. Once that many tasks get going, you will start seeing thread contention in BlockingQueue.put, which will push up the statistics for put.

The whole point of using a BlockingQueue is that if the consumer is slower (even temporarily) than the producer, then the producer will block until the consumer has caught up. This should then cause the upstream processes to wait too. If this is causing your upstream process (presumably the FJP) to tank the system rather than just block, then that is the problem.

I would suggest you use a fixed-capacity FJP.
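For illustration, a fixed parallelism level can be set via the ForkJoinPool constructor; the sketch below assumes a simple summing RecursiveTask (`SumTask` is a hypothetical stand-in for your actual task):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical task: sums the range [from, to).
class SumTask extends RecursiveTask<Long> {
    private final long from, to;
    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= 1_000) { // small enough: compute directly
            long sum = 0;
            for (long i = from; i < to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        left.fork();                         // run left half asynchronously
        return new SumTask(mid, to).compute() + left.join();
    }
}

public class FixedPoolDemo {
    public static void main(String[] args) {
        // Cap the pool at 4 worker threads instead of the default
        // (the number of available processors).
        ForkJoinPool pool = new ForkJoinPool(4);
        long result = pool.invoke(new SumTask(0, 1_000_000));
        System.out.println(result); // 499999500000
        pool.shutdown();
    }
}
```

Note that the parallelism level caps the number of worker threads, not the number of forked tasks, but fewer workers also means fewer tasks competing for the queue at any moment.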

OldCurmudgeon
  • Most of the time `getActiveThreadCount()` returns 3-5 while running, but when one task seems to get stuck in the `put()` method, `getActiveThreadCount()` rises to ~3500 and falls again. [After your edit] That sounds right to me. So I'll try to use a fixed-capacity FJP as you suggested and test it again. Thank you! – cmtjk Nov 26 '15 at 15:37