
Do Disruptor queues provide superior performance when there are N producers and 1 consumer? I wrote a program with multiple producers and a single consumer using the Disruptor. I find the results are neck and neck with a blocking bounded array queue (ArrayBlockingQueue); the latter actually performs better. Am I doing something wrong here?

public void asynchronous_execution()
{
    // Set up the executor and the Disruptor
    ExecutorService exec = Executors.newCachedThreadPool();
    Disruptor<valueEvent> disruptor =
        new Disruptor<valueEvent>(valueEvent.EVENT_FACTORY,
                                  globalVariables.RING_SIZE,
                                  exec);

    // Handler of events
    final EventHandler<valueEvent> handler = new EventHandler<valueEvent>()
    {
        public void onEvent(final valueEvent event, final long sequence,
                            final boolean endOfBatch) throws Exception
        { .... }
    };

    // Build the dependency graph
    disruptor.handleEventsWith(handler);

    // Start the Disruptor
    final RingBuffer<valueEvent> ringBuffer = disruptor.start();

    // Producer threads
    final long[] runtime = new long[globalVariables.NUMBER_OF_THREADS];

    final class ProducerThread extends Thread
    {
        ...

        public void run()
        {
            ..
            // Claim the next slot and write the event into it
            long sequence = ringBuffer.next();
            valueEvent event = ringBuffer.get(sequence);
            event.setValue(globalVariables.WRITE_CODE);

            // Make the event available to the EventProcessors
            ringBuffer.publish(sequence);
            ...
        }

        ...
    }

    ProducerThread[] threads = new ProducerThread[globalVariables.NUMBER_OF_THREADS];
    for (int i = 0; i < globalVariables.NUMBER_OF_THREADS; i++) {
        threads[i] = new ProducerThread(i);
        threads[i].start();
    }

    ....
}
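For reference, here is a minimal, self-contained sketch of the kind of ArrayBlockingQueue MP1C setup being compared against. The thread count, queue capacity, and event count are illustrative assumptions, not taken from the original code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AbqBaseline {
    static final int PRODUCERS = 4;               // illustrative thread count
    static final int EVENTS_PER_PRODUCER = 100_000;
    static final long POISON = Long.MIN_VALUE;    // shutdown marker, one per producer

    // Returns the number of (non-poison) events the single consumer saw.
    static long runBenchmark() throws InterruptedException {
        BlockingQueue<Long> queue = new ArrayBlockingQueue<>(1024);
        final long[] consumed = new long[1];

        // Single consumer: drain until it has seen one poison pill per producer.
        Thread consumer = new Thread(() -> {
            int poisons = 0;
            try {
                while (poisons < PRODUCERS) {
                    if (queue.take() == POISON) poisons++;
                    else consumed[0]++;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // N producers all contend on the same bounded queue, just as they
        // contend on the claim sequence of a single multi-producer ring.
        Thread[] producers = new Thread[PRODUCERS];
        for (int i = 0; i < PRODUCERS; i++) {
            producers[i] = new Thread(() -> {
                try {
                    for (int j = 0; j < EVENTS_PER_PRODUCER; j++) queue.put((long) j);
                    queue.put(POISON);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producers[i].start();
        }
        for (Thread t : producers) t.join();
        consumer.join();
        return consumed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        long n = runBenchmark();
        System.out.println(n + " events in " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```

Note that in both designs the producers share one write-side structure, so producer-side contention dominates once the handler is fast.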

  • I suspect your event handler is very quick, so the dominating time in your example would be the producers writing to the ring - which is not done by a single thread but shared across all producer threads, the same as an ABQ would be. A common strategy is to break your MP1C ring into a lot of 1P1C rings and poll each 1P1C in turn. Otherwise, can you provide a minimal, fully working snippet to replicate the issue? It's hard to know whether the problem is an interaction within the producers, your handler, or something else. – jasonk Sep 20 '13 at 03:03
  • Thanks, that makes sense. I modified the code so that the event handler spends more time servicing the request, but the behaviour is still the same. I think splitting the queue into 1P1C queues will make more sense. Thanks for your valuable comment. I need multiple levels of permission to share the code snippet. – user39617 Sep 20 '13 at 15:30
  • For your 1P1C queue you may want to check out some of Martin Thompson's latest work on that, as well as the discussion at http://psy-lob-saw.blogspot.com/2013/04/lock-free-ipc-queue.html?goback=.gmp_3943775.gde_3943775_member_235869723#! – Andrew Bissell Sep 20 '13 at 23:01
  • Very interesting, thanks. I was thinking about false sharing in cache lines. I guess this could be applied in C too. – user39617 Sep 22 '13 at 17:03
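The split suggested in the comments can be sketched as follows: one private queue per producer so producers never contend with each other, and a single consumer that polls each 1P1C queue in round-robin. For brevity this uses one ArrayBlockingQueue per producer rather than a true SPSC ring (the JDK has no dedicated SPSC queue; Martin Thompson's work linked above covers real SPSC designs). All sizes here are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class OneToOneSplit {
    static final int PRODUCERS = 4;             // illustrative
    static final int EVENTS_PER_PRODUCER = 50_000;

    static long run() throws InterruptedException {
        // One private queue per producer: no producer-producer contention.
        List<BlockingQueue<Long>> queues = new ArrayList<>();
        for (int i = 0; i < PRODUCERS; i++) queues.add(new ArrayBlockingQueue<>(1024));
        AtomicInteger liveProducers = new AtomicInteger(PRODUCERS);

        Thread[] producers = new Thread[PRODUCERS];
        for (int i = 0; i < PRODUCERS; i++) {
            BlockingQueue<Long> q = queues.get(i);
            producers[i] = new Thread(() -> {
                try {
                    for (int j = 0; j < EVENTS_PER_PRODUCER; j++) q.put((long) j);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    liveProducers.decrementAndGet();
                }
            });
            producers[i].start();
        }

        // Single consumer polls each 1P1C queue in turn (round-robin),
        // never blocking on any one producer's queue. This busy-spins;
        // a real implementation would back off when a pass finds no data.
        long consumed = 0;
        while (true) {
            boolean sawData = false;
            for (BlockingQueue<Long> q : queues) {
                Long v = q.poll();              // non-blocking
                if (v != null) { consumed++; sawData = true; }
            }
            if (!sawData && liveProducers.get() == 0 && allEmpty(queues)) break;
        }
        for (Thread t : producers) t.join();
        return consumed;
    }

    static boolean allEmpty(List<BlockingQueue<Long>> queues) {
        for (BlockingQueue<Long> q : queues) if (!q.isEmpty()) return false;
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumed " + run() + " events");
    }
}
```

Shutdown is safe here because each producer finishes all of its puts before decrementing liveProducers, so once the consumer observes zero live producers and all queues empty, no more events can arrive.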

0 Answers