
The following issue is impairing my production system. I have multiple MDBs packaged as EAR/WAR applications deployed in JBoss. When there is a considerable amount of traffic on my website, these MDBs stop listening to messages being written to queues in HornetQ and I am forced to restart my system. The last time this happened I wrote a standalone message listener and was able to consume messages from the same HornetQ server, which suggests the issue is at the application server/application level. I am attaching the following:

  1. A typical MDB

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), 
        @ActivationConfigProperty(propertyName = "destination", propertyValue = Queues.CHAT_HANDLER),})
    @ResourceAdapter("hornetq-ra.rar")
    public class ChatHandlerQueueListener implements MessageListener {
    
     public static final Logger logger = LoggerFactory.getLogger(ChatHandlerQueueListener.class);
    
     @Inject
     IChatManager chatManager;
    
     public void onMessage(Message message) {
        ObjectMessage objectMessage = (ObjectMessage) message;
        ComponentMessage routingEngineResponse = null;
        try {
           routingEngineResponse = (ComponentMessage) objectMessage.getObject();
           boolean messageRedelivered = message.getJMSRedelivered();
           if (logger.isTraceEnabled())
              logger.trace("ChatHandlerQueueListener.callingChatManager Incoming response is {}", JsonUtils.toJson(routingEngineResponse));
           if (routingEngineResponse == null)
              return;
           if (messageRedelivered) {
              // Sending the message acknowledgement manually
              message.acknowledge();
           }
    
        } catch (JMSException e) {
           logger.error("ChatHandlerQueueListener.onMessage failed to read incoming message", e);
           // Without a readable payload there is nothing to process; returning
           // here also avoids a NullPointerException on routingEngineResponse below.
           return;
        }
        if (routingEngineResponse.getType().equals(MessageType.ChatAction) || routingEngineResponse.getType().equals(MessageType.ChatTransfer)) {
           try {
              logger.debug("ChatHandlerQueueListener.callingChatManager {}", JsonUtils.toJson(routingEngineResponse));
              chatManager.processRoutingEngineResponseMessage(routingEngineResponse);
           } catch (UnknownReActionTypeException e) {
              logger.error("ChatHandlerQueueListener.onMessage Type: UnknownReActionTypeException", e);
           }
        } else if (routingEngineResponse.getType().equals(MessageType.InboundSms)) {
           logger.debug("Calling request for agent {}", JsonUtils.toJson(routingEngineResponse));
           try {
              chatManager.processChatMessage(routingEngineResponse);
           } catch (ChatServiceUnavailableException | JMSException | ApplicationException e) {
              logger.error("ChatHandlerQueueListener.onMessage Type: Exception", e);
           }
        } else if (routingEngineResponse.getType().equals(MessageType.ChatMessage)) {
           try {
              chatManager.processChatMessage(routingEngineResponse);
           } catch (ChatServiceUnavailableException | JMSException | ApplicationException e) {
              logger.error("ChatHandlerQueueListener.onMessage Type: Exception", e);
           }
        } else if (routingEngineResponse.getType().equals(MessageType.TropoSmsDelivery)) {
           logger.debug("Calling smsDelvieryHandler {}", JsonUtils.toJson(routingEngineResponse));
           try {
              chatManager.processSmsDeliveryMessage(routingEngineResponse);
           } catch (Exception e) {
              logger.error("ChatHandlerQueueListener.onMessage Type: Exception", e);
           }
        } else {
           try {
              logger.trace("Unexpected message selector found: {}", message.getStringProperty("MESSAGE_TYPE"));
           } catch (JMSException e) {
              logger.error("ChatHandlerQueueListener.onMessage Type: Exception", e);
           }
        }
     }
    }

  2. JBoss configuration file (messaging subsystem)

    <hornetq-server>
       <persistence-enabled>true</persistence-enabled>
       <connectors>
          <connector name="netty">
             <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
             <param key="host" value="${jboss.bind.remote.hq.address}"/>
             <param key="port" value="${jboss.bind.remote.hq.port}"/>
          </connector>
       </connectors>
       <jms-connection-factories>
          <connection-factory name="RemoteConnectionFactory">
             <connectors>
                <connector-ref connector-name="netty"/>
             </connectors>
             <entries>
                <entry name="RemoteConnectionFactory"/>
             </entries>
          </connection-factory>
          <pooled-connection-factory name="hornetq-ra">
             <transaction mode="xa"/>
             <connectors>
                <connector-ref connector-name="netty"/>
             </connectors>
             <entries>
                <entry name="java:/JmsXA"/>
             </entries>
          </pooled-connection-factory>
       </jms-connection-factories>
    </hornetq-server>
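
    For completeness, reconnection behavior on the pooled factory can also be tuned; the sketch below is illustrative, not what I am currently running (element names are from the JBoss messaging subsystem; the values shown are examples):

    <pooled-connection-factory name="hornetq-ra">
       <transaction mode="xa"/>
       <connectors>
          <connector-ref connector-name="netty"/>
       </connectors>
       <entries>
          <entry name="java:/JmsXA"/>
       </entries>
       <!-- Illustrative values: retry a lost broker connection forever, once per second -->
       <retry-interval>1000</retry-interval>
       <reconnect-attempts>-1</reconnect-attempts>
    </pooled-connection-factory>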
    

1 Answer


When the MDBs stop receiving messages then you need to get a series of thread dumps to see what the MDB threads are doing (if anything). Oftentimes issues like this are caused by application-specific problems which are simple to identify with thread dumps.
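For example, a small script (assuming the JDK's `jstack` is on the path and you pass the JBoss process id) can capture a series of dumps a few seconds apart:

```shell
#!/bin/sh
# take_thread_dumps <pid> [count] [interval-seconds]
# Capture a series of thread dumps from a running JVM using the JDK's
# jstack tool, writing each dump to its own timestamped file.
take_thread_dumps() {
    pid="$1"; count="${2:-5}"; interval="${3:-10}"
    if [ -z "$pid" ]; then
        echo "Usage: take_thread_dumps <pid> [count] [interval-seconds]" >&2
        return 1
    fi
    i=1
    while [ "$i" -le "$count" ]; do
        # -l adds lock/synchronizer information for each thread
        jstack -l "$pid" > "threaddump-$(date +%H%M%S)-$i.txt"
        sleep "$interval"
        i=$((i + 1))
    done
}
```

Comparing successive dumps shows whether the MDB threads are stuck (e.g. blocked on a lock or waiting on an external call) or simply idle with no work delivered to them.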

Justin Bertram
  • I have taken a thread dump. One thing I noticed is that all the queues were showing a consumer count of 0, and my EC2 instance was not showing any ESTABLISHED connection on port 5445. I have another query: I create a connection from a connection factory that is mapped to the same resource my listeners connect through, and I use it to post messages to the HornetQ server, closing the connection after every message is sent. Could this also impact or contribute to the scenario I am running into? – Shirshendu Shekhar Das Dec 06 '18 at 07:46
  • Are you using a `pooled-connection-factory`? If so, then there's no problem with opening/closing the connection for each message sent because the physical connection isn't actually closed; it just goes back to the pool. – Justin Bertram Dec 06 '18 at 15:57
  • Thanks Justin, yes I am using pooled-connection-factory. – Shirshendu Shekhar Das Dec 11 '18 at 08:04