
The brief description of java.util.concurrent.LinkedBlockingQueue says it is a FIFO queue, which means that if thread A adds a bunch of entries (a1, a2, ... an) to the queue first and then thread B adds some more (b1, b2, ... bm), consumer threads should exhaust all of A's entries before taking on those from B (thus FIFO). But what I have seen is that the entries from A and those from B are interleaved, even though B adds its entries much later than A. I was at a code review for a Tomcat + Jersey application that uses a singleton LinkedBlockingQueue plus a handful of asynchronous worker threads to process request entries from clients.

I questioned the fairness of the code, since late-arriving requests would have to wait in the queue until all earlier entries are exhausted (a client can submit thousands of entries per request), but to my surprise, the late-arriving clients got their responses back almost immediately. So does this mean LinkedBlockingQueue is not FIFO? Please help, I am very confused.
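For context, here is a minimal sketch of the kind of setup described above: a shared LinkedBlockingQueue and a handful of worker threads. The class and thread names are made up for illustration and are not from the reviewed application; String stands in for the real request entry type.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueSetupSketch {

    // Shared FIFO queue; String stands in for the application's request entry type.
    static final BlockingQueue<String> QUEUE = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // A handful of asynchronous workers, roughly as in the application under review.
        for (int i = 0; i < 4; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        // take() always removes the current head of the queue, i.e. FIFO order.
                        String entry = QUEUE.take();
                        System.out.println(Thread.currentThread().getName() + " processing " + entry);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "worker-" + i);
            worker.setDaemon(true);
            worker.start();
        }

        // Producer A enqueues a1..a5 first, then producer B enqueues b1..b3.
        for (int i = 1; i <= 5; i++) QUEUE.add("a" + i);
        for (int i = 1; i <= 3; i++) QUEUE.add("b" + i);

        Thread.sleep(500); // give the daemon workers time to drain the queue
    }
}
```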

Jim

1 Answer


The queue is FIFO: the order in which the objects are removed from the queue by the threads is FIFO. Once the threads have taken hold of the objects and start running methods on them, the FIFO ordering is lost.
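To illustrate the distinction, here is a small, hypothetical demo (not from the OP's application): the dequeue order across workers is FIFO, but because processing runs concurrently and takes a variable amount of time, the completion order can interleave.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadLocalRandom;

public class FifoDequeueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 10; i++) queue.add(i); // enqueued strictly in order 1..10

        Runnable worker = () -> {
            try {
                Integer item;
                while ((item = queue.poll()) != null) {
                    // Dequeue order across all workers follows the queue's FIFO order...
                    System.out.println(Thread.currentThread().getName() + " took " + item);
                    // ...but processing runs concurrently, so completion order can differ.
                    Thread.sleep(ThreadLocalRandom.current().nextInt(50));
                    System.out.println(Thread.currentThread().getName() + " finished " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread w1 = new Thread(worker, "worker-1");
        Thread w2 = new Thread(worker, "worker-2");
        w1.start(); w2.start();
        w1.join(); w2.join();
    }
}
```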

How can you tell that 'the entries from A and those from B are interleaved'?

Martin James
  • Also, how can you be sure B adds its entries "much later than" A? – Louis Wasserman Sep 20 '12 at 22:39
  • @LouisWasserman - in a test, it's fairly easy - have B wait() until A has queued up all its objects and then notify() B (see the sketch after these comments). Monitoring how the threads remove objects from the queue is, at least, 'very difficult'. You can't just lob in a load of printf calls - printf has a lock of its own and so modifies the operation that you are trying to monitor. Any thread blocking will cause another work thread to run, get an object and then likely block as well. If there are far more work threads than cores (likely in a network app), the objects will just all spill out into the threads. – Martin James Sep 20 '12 at 23:13
  • I agree that it's fairly easy to set it up, but I want to know what in the OP's code makes him sure -- that seems like a plausible explanation for the reported behavior as well. – Louis Wasserman Sep 20 '12 at 23:16
  • @LouisWasserman - yes, maybe... working out exactly what is happening in these systems is one of those Heisenberg/Observer things - you cannot find out without affecting the system. – Martin James Sep 20 '12 at 23:17
  • Thanks everyone. It turned out to be an illusion on the client side. The LinkedBlockingQueue is indeed FIFO. The seeming interleaving of the requests is due to the fact that the backend request handlers process requests in the queue blazingly fast, but sending the results back to the clients is orders of magnitude slower. By the time a second client sends its requests, the queue is already empty, so its requests get processed immediately as well, and then both clients receive results at a slower pace. Sorry about that. – Jim Sep 21 '12 at 18:05
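Below is a minimal sketch of the test setup described in the comments above: producer B does not start enqueueing until producer A has finished. It uses a CountDownLatch instead of raw wait()/notify() for simplicity; the class and variable names are made up for illustration.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class OrderedProducersTest {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        CountDownLatch aDone = new CountDownLatch(1);

        Thread a = new Thread(() -> {
            for (int i = 1; i <= 5; i++) queue.add("a" + i);
            aDone.countDown(); // signal that A has queued all of its entries
        });

        Thread b = new Thread(() -> {
            try {
                aDone.await(); // B does not start queueing until A is done
                for (int i = 1; i <= 5; i++) queue.add("b" + i);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        a.start();
        b.start();
        a.join();
        b.join();

        // With no consumers running yet, the queue holds a1..a5 followed by b1..b5,
        // confirming that insertion order is preserved.
        System.out.println(queue);
    }
}
```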