
I am using Glassfish 3.1.1 with an EJB 3.1 architecture, combined with the Smack library, to process incoming XMPP packets.

For this I have a thread started from a Singleton which processes the incoming packets:

    Packet packet = collector.nextResult();
    if (packet != null)
        processPacket(packet); // here I look up my processing EJB and start working

What I need is a queue that processes one packet per sender at a time. At the moment I process every received packet in parallel, which makes it impossible to preserve the order of the packets.

Any ideas on how I could solve this as elegantly as possible?

greetings m

PS: My first approach is to keep track of which clients are currently being processed, then iterate over the collected packets and look for a sender that is not processing anything. But I am afraid this costs a lot of iterations if none of the packets in the buffer are allowed to be processed.
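Roughly, that first approach would look something like the sketch below (`pickNextPacket`, `buffer` and `busySenders` are just placeholder names of mine):

    import java.util.Iterator;
    import java.util.List;
    import java.util.Set;

    import org.jivesoftware.smack.packet.Packet;

    public class BufferScan {

        // Scan the buffered packets and take the first one whose sender is idle.
        // Worst case: the whole buffer is scanned and nothing can be started.
        public static Packet pickNextPacket(List<Packet> buffer, Set<String> busySenders) {
            for (Iterator<Packet> it = buffer.iterator(); it.hasNext(); ) {
                Packet candidate = it.next();
                if (busySenders.add(candidate.getFrom())) { // add() returns false if the sender is busy
                    it.remove();
                    return candidate; // caller must remove the sender from busySenders when done
                }
            }
            return null; // every buffered packet belongs to a sender that is already processing
        }
    }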

mkuff

1 Answer


If you know the senders ahead of time, you can register a PacketFilter that matches each sender. Each collector will then queue up the packets from its sender.
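A minimal sketch of that variant, assuming Smack 3.x (`FromMatchesFilter`, `createPacketCollector` and a blocking `nextResult()`); `processPacket` stands in for the EJB lookup from the question:

    import org.jivesoftware.smack.PacketCollector;
    import org.jivesoftware.smack.XMPPConnection;
    import org.jivesoftware.smack.filter.FromMatchesFilter;
    import org.jivesoftware.smack.packet.Packet;

    public class PerSenderCollectors {

        // One collector per known sender; each collector's internal queue keeps
        // that sender's packets in arrival order.
        public static PacketCollector collectorFor(XMPPConnection connection, String senderJid) {
            return connection.createPacketCollector(new FromMatchesFilter(senderJid));
        }

        // Drain one sender's collector in its own loop: packets from that sender
        // are handled strictly one after another, while other senders run in parallel.
        public static void drain(PacketCollector collector) {
            while (true) {
                Packet packet = collector.nextResult(); // blocks until a packet arrives
                if (packet != null) {
                    processPacket(packet);
                }
            }
        }

        // Placeholder for the question's EJB lookup and processing.
        private static void processPacket(Packet packet) {
            // look up the processing EJB and handle the packet
        }
    }

Each `drain` loop would run on its own thread (for example started from the existing Singleton), so ordering is preserved per sender.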

If you do not know them, you can accomplish the same thing, but you will have to route the messages yourself. Use a PacketListener instead of the collector and route each message into a per-sender queue as it is received. You can create the queue on demand if it doesn't exist yet.
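A minimal sketch of the listener-based routing, again assuming Smack 3.x; `startWorkerFor` is a hypothetical hook where a single consumer per sender would drain the queue and call the processing EJB:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.LinkedBlockingQueue;

    import org.jivesoftware.smack.PacketListener;
    import org.jivesoftware.smack.packet.Packet;

    // Routes every incoming packet into a queue keyed by the sender's JID,
    // creating the queue (and its worker) on first contact.
    public class PerSenderRouter implements PacketListener {

        private final ConcurrentMap<String, BlockingQueue<Packet>> queues =
                new ConcurrentHashMap<String, BlockingQueue<Packet>>();

        public void processPacket(Packet packet) {
            String sender = packet.getFrom();
            BlockingQueue<Packet> queue = queues.get(sender);
            if (queue == null) {
                BlockingQueue<Packet> fresh = new LinkedBlockingQueue<Packet>();
                BlockingQueue<Packet> existing = queues.putIfAbsent(sender, fresh);
                if (existing == null) {
                    queue = fresh;
                    startWorkerFor(sender, fresh); // hypothetical: one consumer per sender
                } else {
                    queue = existing;
                }
            }
            queue.offer(packet); // per-sender order is the listener's arrival order
        }

        // Hypothetical hook: start a single-threaded consumer that drains this
        // sender's queue and calls the processing EJB one packet at a time.
        private void startWorkerFor(String sender, BlockingQueue<Packet> queue) {
            // e.g. hand the queue to a dedicated thread started from the Singleton
        }
    }

The listener would be registered with something like `connection.addPacketListener(new PerSenderRouter(), null)`; as far as I recall a null filter lets every packet through, otherwise pass a filter that matches the traffic you care about.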

Robin
  • I tried the PacketListener before, but I preferred the PacketCollector because I have more control over when to stop processing packets. Under heavy load the packet listener keeps calling the callback until its buffer is empty, even when I have cancelled it. With the collector I can also monitor its health by reading the number of queued packets (via reflection). But anyway: when I receive a new packet I need efficient per-client queuing. Do you have any ideas for that? I posted my first approach in the PS. – mkuff Apr 20 '12 at 07:10