
I am using JBoss 5.1 to deploy a message-driven bean (MDB) that subscribes to messages from a third-party queue.

Around 16 messages were posted to that queue, but they remained pending in our subscriber queue. When I restarted the server, the messages were picked up immediately.

From what I have analysed, I think maxSize and maxSession could be involved, as both are 15. But if there was a real issue, I do not understand how it was solved just by restarting.

The log entries were at ERROR level, but I did not get a full stack trace.

Here is a snippet of the error log:

[2012-10-30 17:01:00,228] [MQQueueAgent (GQH1_PLANNING_MDM_001)]
[ERROR] STDERR: 2012.10.30 17:01:00 MQJMS1023E rollback failed

[2012-10-30 17:01:00,228] [exceptionDelivery0] [WARN ]
org.jboss.resource.adapter.jms.inflow.JmsActivation: Failure in jms activation 
org.jboss.resource.adapter.jms.inflow.JmsActivationSpec@85d0d(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter@b21aae
destination=remotewsmq/NOTIFICATION_PLANNING_MDM_001.SUBQ
destinationType=javax.jms.Queue tx=true durable=false reconnect=10 provider=RemoteWSMQJMSProvider
 user=null maxMessages=1 minSession=1 maxSession=5 keepAlive=60000 useDLQ=false)

GQH1_PLANNING_MDM_001 is the name of the queue used for subscribing.

The files I use to configure the MDB properties are as follows.

1. ejb3-interceptors-aop.xml

  <domain name="Message Driven Bean" extends="Intercepted Bean" inheritBindings="true">
      <bind pointcut="execution(public * *->*(..))">
         <interceptor-ref name="org.jboss.ejb3.security.AuthenticationInterceptorFactory"/>
         <interceptor-ref name="org.jboss.ejb3.security.RunAsSecurityInterceptorFactory"/>
      </bind>

      <!-- TODO: Authorization? -->

      <bind pointcut="execution(public * *->*(..))">
         <interceptor-ref name="org.jboss.ejb3.tx.CMTTxInterceptorFactory"/>
         <interceptor-ref name="org.jboss.ejb3.stateless.StatelessInstanceInterceptor"/>
         <interceptor-ref name="org.jboss.ejb3.tx.BMTTxInterceptorFactory"/>
         <interceptor-ref name="org.jboss.ejb3.AllowedOperationsInterceptor"/>
         <interceptor-ref name="org.jboss.ejb3.entity.TransactionScopedEntityManagerInterceptor"/>
         <!-- interceptor-ref name="org.jboss.ejb3.interceptor.EJB3InterceptorsFactory"/ -->
         <stack-ref name="EJBInterceptors"/>
      </bind>

      <annotation expr="class(*) AND !class(@org.jboss.ejb3.annotation.Pool)">
         @org.jboss.ejb3.annotation.Pool (value="StrictMaxPool", maxSize=15, timeout=10000)
      </annotation>
   </domain>
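
For reference, the StrictMaxPool settings from this domain can also be overridden per bean with the same annotation placed directly on the MDB class, instead of editing the global file. This is only a sketch; the bean name and the maxSize value of 20 are hypothetical:

    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    import org.jboss.ejb3.annotation.Pool;

    // Hypothetical MDB; @Pool here overrides the domain-wide default of
    // maxSize=15 from ejb3-interceptors-aop.xml for this one bean only.
    @MessageDriven(name = "PlanningNotificationMDB")
    @Pool(value = "StrictMaxPool", maxSize = 20, timeout = 10000)
    public class PlanningNotificationMDB implements MessageListener {

        public void onMessage(Message message) {
            // message handling goes here
        }
    }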

2. standardjboss.xml

<invoker-proxy-binding>
      <name>message-driven-bean</name>
      <invoker-mbean>default</invoker-mbean>
      <proxy-factory>org.jboss.ejb.plugins.jms.JMSContainerInvoker</proxy-factory>
      <proxy-factory-config>
        <JMSProviderAdapterJNDI>DefaultJMSProvider</JMSProviderAdapterJNDI>
        <ServerSessionPoolFactoryJNDI>StdJMSPool</ServerSessionPoolFactoryJNDI>
        <CreateJBossMQDestination>false</CreateJBossMQDestination>

        <!-- WARN: Don't set this to zero until a bug in the pooled executor is fixed -->

        <MinimumSize>1</MinimumSize>
        <MaximumSize>15</MaximumSize>
        <KeepAliveMillis>30000</KeepAliveMillis>
        <MaxMessages>1</MaxMessages>

        <MDBConfig>
          <ReconnectIntervalSec>10</ReconnectIntervalSec>
          <DLQConfig>
            <DestinationQueue>queue/DLQ</DestinationQueue>
            <MaxTimesRedelivered>10</MaxTimesRedelivered>
            <TimeToLive>0</TimeToLive>
          </DLQConfig>
        </MDBConfig>

      </proxy-factory-config>
    </invoker-proxy-binding>

   <activation-config-property>
        <activation-config-property-name>maxSession</activation-config-property-name>
        <activation-config-property-value>15</activation-config-property-value>
   </activation-config-property>
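
The maxSession activation property can also be set per MDB with @ActivationConfigProperty annotations on the bean itself; the values the JCA adapter actually uses are the ones echoed back in the JmsActivationSpec line of the WARN log above (which reports maxSession=5). A minimal sketch with a hypothetical bean class, reusing the destination name from that log:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Hypothetical bean class; destination taken from the WARN log above
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "remotewsmq/NOTIFICATION_PLANNING_MDM_001.SUBQ"),
        @ActivationConfigProperty(propertyName = "maxSession",
                                  propertyValue = "15")
    })
    public class NotificationSubscriberMDB implements MessageListener {

        public void onMessage(Message message) {
            // process the message here
        }
    }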

3. jms-ds.xml

<?xml version="1.0" encoding="UTF-8"?>

<connection-factories>

  <!-- ==================================================================== -->
  <!-- JMS Stuff                                                            -->
  <!-- ==================================================================== -->

   <!--
       The JMS provider loader. Currently pointing to a non-clustered ConnectionFactory. Need to
       be replaced with a clustered non-load-balanced ConnectionFactory when it becomes available.
       See http://jira.jboss.org/jira/browse/JBMESSAGING-843. 
   -->
   <mbean code="org.jboss.jms.jndi.JMSProviderLoader"
          name="jboss.messaging:service=JMSProviderLoader,name=JMSProvider">
      <attribute name="ProviderName">DefaultJMSProvider</attribute>
      <attribute name="ProviderAdapterClass">org.jboss.jms.jndi.JNDIProviderAdapter</attribute>
      <attribute name="FactoryRef">java:/XAConnectionFactory</attribute>
      <attribute name="QueueFactoryRef">java:/XAConnectionFactory</attribute>
      <attribute name="TopicFactoryRef">java:/XAConnectionFactory</attribute>
   </mbean>

   <!-- JMS XA Resource adapter, use this to get transacted JMS in beans -->
   <tx-connection-factory>
      <jndi-name>JmsXA</jndi-name>
      <xa-transaction/>
      <rar-name>jms-ra.rar</rar-name>
      <connection-definition>org.jboss.resource.adapter.jms.JmsConnectionFactory</connection-definition>
      <config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
      <config-property name="JmsProviderAdapterJNDI" type="java.lang.String">java:/DefaultJMSProvider</config-property>
      <max-pool-size>20</max-pool-size>
      <security-domain-and-application>JmsXARealm</security-domain-and-application>
      <depends>jboss.messaging:service=ServerPeer</depends>
   </tx-connection-factory>

</connection-factories>
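
For context on the JmsXA factory defined above: it is what a bean would look up to get JMS sessions enlisted in the container's XA transaction. A minimal usage sketch; the session bean and the reply queue JNDI name are hypothetical:

    import javax.annotation.Resource;
    import javax.ejb.Stateless;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    @Stateless
    public class NotificationSender {

        // JmsXA is the transacted connection factory from jms-ds.xml
        @Resource(mappedName = "java:/JmsXA")
        private ConnectionFactory connectionFactory;

        // Hypothetical reply queue
        @Resource(mappedName = "queue/PlanningReplyQueue")
        private Queue replyQueue;

        public void send(String text) throws Exception {
            Connection connection = connectionFactory.createConnection();
            try {
                // Under JmsXA the session enlists in the caller's JTA
                // transaction, so these arguments are effectively ignored
                Session session = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(replyQueue);
                producer.send(session.createTextMessage(text));
            } finally {
                connection.close();
            }
        }
    }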

Please help.


2 Answers


If the listener did not try to reconnect, then it might be the pending messages that caused it to fail.


According to the error, a transaction ROLLBACK call failed. After the failure, the queue manager probably held those messages in an outstanding unit of work (UOW). Restarting the server would have closed the connection, at which point the queue manager would have rolled back the transaction on behalf of the application. On restart, the application would have created a new UOW and retrieved the messages.
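
To make the unit-of-work behaviour concrete (this is a general EJB/JMS sketch, not code from the question): with tx=true in the activation spec, the receive is part of a container-managed JTA transaction, so a message only leaves the queue when onMessage returns and the transaction commits; an unchecked exception rolls it back and the message is redelivered.

    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Hypothetical CMT message-driven bean
    @MessageDriven(name = "PlanningSubscriberMDB")
    public class PlanningSubscriberMDB implements MessageListener {

        public void onMessage(Message message) {
            // Runs inside the JTA transaction that also covers the JMS receive.
            // Returning normally commits and removes the message from the queue;
            // throwing an unchecked exception rolls the transaction back, so the
            // message stays on the queue for redelivery. If the rollback call
            // itself fails (MQJMS1023E), the queue manager can be left holding
            // the messages in an open unit of work until the connection closes.
            process(message);
        }

        private void process(Message message) {
            // business logic goes here
        }
    }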

Look in WebSphere MQ's queue manager error logs and global error logs to determine whether the error was caused by a resource shortage. It may be necessary to increase the size of the queue manager transaction logs or to tune transaction parameters such as MAXUOW.

You may also need to update the MQ client or queue manager version. According to this Technote, the WebSphere MQ JMS classes were updated as of 6.0.2.3 to fix a bug that resulted in MQJMS1023E errors. If you need to update the client, it is available as a free download as SupportPac MQC75. A new client can run against any back-level queue manager; after upgrading, the app benefits from the bug fixes and performance enhancements of the new client code and gets API functionality appropriate for the version of queue manager it connects to. Which version of the WebSphere MQ JMS client is currently installed, and which version of the WebSphere MQ queue manager?
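
If it helps to confirm the levels: the queue manager version is normally checked with dspmqver on the MQ server, and for the client jars on the JBoss side one rough option is to read the jar's package metadata from the classpath. A small sketch; whether Implementation-Version is populated depends on the client jar, so this may print nothing useful:

    // Rough check of the MQ JMS client level from the classpath. Assumes
    // com.ibm.mq.jms.MQQueueConnectionFactory is on the classpath; the jar
    // manifest may not record an Implementation-Version on every client level.
    public class MqClientVersion {
        public static void main(String[] args) throws Exception {
            Class<?> cf = Class.forName("com.ibm.mq.jms.MQQueueConnectionFactory");
            Package pkg = cf.getPackage();
            System.out.println("MQ JMS client: "
                    + (pkg != null ? pkg.getImplementationVersion() : "version not recorded"));
        }
    }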

  • Is this MAXUOW applicable to JBoss? If yes, do you know of a way to set it? In our case, once we receive a message, we just commit it, so if some problem happens we simply do not process it. So there is no question of a transaction failing. – dev Nov 07 '12 at 13:17
  • In my understanding, this could be the reason. – dev Nov 07 '12 at 13:22
  • When a batch of JMS messages is delivered to an MDB in the quantity of the prefetch, each is assigned an instance from this pool and is delivered to that instance via the onMessage method. If the message prefetch exceeds the maxSize of this pool, messages wait for an MDB instance. If the time from message delivery to calling onMessage exceeds the pool timeout for any message, an EJBException is thrown. For a large prefetch and a long average onMessage time, messages towards the end of the queue will begin to fail. But again, I do not understand what the server restart could possibly have changed. – dev Nov 07 '12 at 13:28
  • You may be thinking of `MAXUMSGS`; I know I was when I incorrectly wrote `MAXUOW` on the list server and SO a few times. `MAXUMSGS` is a QMgr attribute. Did you determine the versions of the QMgr and client? What did the error logs on the QMgr side say? – T.Rob Nov 07 '12 at 19:02