
I'm not an ActiveMQ expert, but I've searched the Internet extensively for similar problems and I'm still quite confused. Here is my problem.

I'm running a web application in Tomcat 8.x, with Java 8 and Spring Framework 4.3.18. The application both sends and receives messages with ActiveMQ, using the org.apache.activemq:activemq-spring:5.11.0 dependency.

I'm setting up an ActiveMQ connection factory in this way:

<amq:connectionFactory id="amqJmsFactory" brokerURL="${jms.broker.url}" />
<bean id="jmsConnectionFactory"
    class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
    <property name="connectionFactory" ref="amqJmsFactory" />
    <property name="maxConnections" value="2" />
    <property name="idleTimeout" value="60000" />
    <property name="timeBetweenExpirationCheckMillis" value="600000" />
    <property name="maximumActiveSessionPerConnection" value="10" />
</bean>

The last property (maximumActiveSessionPerConnection) was set in an attempt to solve the problem described below (the default is 500, which seems quite high IMHO), but I'm not sure it really helped, because I'm still getting OutOfMemory errors.

This connection factory is referenced by a listener container factory:

<jms:listener-container factory-id="activationJmsListenerContainerFactory"
    container-type="default" connection-factory="jmsConnectionFactory"
    concurrency="1" transaction-manager="centralTransactionManager">
</jms:listener-container>

and by one Spring Integration 4.3.17 inbound adapter:

<int-jms:message-driven-channel-adapter id="invoiceEventJmsInboundChannelAdapter" 
    channel="incomingInvoiceEventJmsChannel"
    connection-factory="jmsConnectionFactory"
    destination-name="incomingEvent"
    max-concurrent-consumers="2"
    transaction-manager="customerTransactionManager"
    error-channel="unexpectedErrorChannel" />

and by two outbound adapters:

<int-jms:outbound-channel-adapter id="invoiceEventJmsOutboundChannelAdapter"
    channel="outgoingInvoiceEventJmsChannel" destination-name="outgoingEvent"
    connection-factory="jmsConnectionFactory" explicit-qos-enabled="true" delivery-persistent="true" 
    session-transacted="true" />

<int-jms:outbound-channel-adapter
    id="passwordResetTokenSubmitterJmsOutboundChannelAdapter"
    channel="passwordResetTokenSubmitterJmsChannel"
    destination-name="passwordReset"
    connection-factory="jmsConnectionFactory" explicit-qos-enabled="true"
    delivery-persistent="false" session-transacted="false" />

Things work well, but what I observe is that ActiveMQ, as a message producer (for the invoiceEventJmsOutboundChannelAdapter adapter), accumulates a lot of objects in memory, causing OutOfMemory errors in my application. My messages may be a few KB each, because their payloads are XML files, but I still don't expect them to hold so much memory for so long.

Here are my findings on a heap dump produced on the most recent OutOfMemory error (using Eclipse MAT to investigate). Two leak suspects are found and both lead to ConnectionStateTracker.

Here is one of the two accumulators:

Class Name                                                                                                  | Shallow Heap | Retained Heap
-------------------------------------------------------------------------------------------------------------------------------------------
java.util.concurrent.ConcurrentHashMap$HashEntry[4] @ 0xe295da78                                            |           32 |    58.160.312
'- table java.util.concurrent.ConcurrentHashMap$Segment @ 0xe295da30                                        |           40 |    58.160.384
   '- [15] java.util.concurrent.ConcurrentHashMap$Segment[16] @ 0xe295d9e0                                  |           80 |    68.573.384
      '- segments java.util.concurrent.ConcurrentHashMap @ 0xe295d9b0                                       |           48 |    68.573.432
         '- sessions org.apache.activemq.state.ConnectionState @ 0xe295d7e0                                 |           40 |    68.575.312
            '- value java.util.concurrent.ConcurrentHashMap$HashEntry @ 0xe295d728                          |           32 |    68.575.344
               '- [1] java.util.concurrent.ConcurrentHashMap$HashEntry[2] @ 0xe295d710                      |           24 |    68.575.368
                  '- table java.util.concurrent.ConcurrentHashMap$Segment @ 0xe295d6c8                      |           40 |    68.575.440
                     '- [12] java.util.concurrent.ConcurrentHashMap$Segment[16] @ 0xe295d678                |           80 |    68.575.616
                        '- segments java.util.concurrent.ConcurrentHashMap @ 0xe295d648                     |           48 |    68.575.664
                           '- connectionStates org.apache.activemq.state.ConnectionStateTracker @ 0xe295d620|           40 |    68.575.808
-------------------------------------------------------------------------------------------------------------------------------------------

As you can see, an instance of ConnectionStateTracker is retaining around 70 MB of heap space. There are two instances of ConnectionStateTracker (one for each outbound adapter, I guess), retaining a total of about 120 MB of heap. They accumulate it in two instances of ConnectionState, which have a map of "sessions" containing a cumulative total of 10 SessionState instances. These, in turn, have a ConcurrentHashMap of producers holding a cumulative total of 1,258 ProducerState instances. These retain the 120 MB of heap through their transactionState field, of type TransactionState, whose commands ArrayList appears to be retaining the whole messages I'm sending out.

My question is: why is ActiveMQ keeping messages that have already been sent out in memory? There are also security concerns with keeping all those messages in memory.

Mauro Molinari

3 Answers


Here is how I finally solved this.

I think the main problem here was bug AMQ-6603. So, the first thing we did was upgrade to ActiveMQ 5.15.8. I think this alone would have been enough to fix the leak.

However, we also changed our configuration a bit after discovering that using a pooled connection factory with a listener container factory is discouraged. I think the ActiveMQ documentation is confusing and that proper JMS configuration is more complicated than it should be. Anyway, if you read the DefaultMessageListenerContainer documentation, you'll find:

Don't use Spring's org.springframework.jms.connection.CachingConnectionFactory in combination with dynamic scaling. Ideally, don't use it with a message listener container at all, since it is generally preferable to let the listener container itself handle appropriate caching within its lifecycle. Also, stopping and restarting a listener container will only work with an independent, locally cached Connection - not with an externally cached one.

The same must then apply to ActiveMQ's PooledConnectionFactory. However, the ActiveMQ documentation says, instead:

Spring's MessageListenerContainer should be used for message consumption. This provides all the power of MDBs - efficient JMS consumption and pooling of the message listeners - but without requiring a full EJB container.

You can use the activemq-pool org.apache.activemq.pool.PooledConnectionFactory for efficient pooling of the connections and sessions for your collection of consumers, or you can use the Spring JMS org.springframework.jms.connection.CachingConnectionFactory to achieve the same effect.

So, the ActiveMQ documentation suggests the opposite. I opened bug AMQ-7140 for this. For this reason, I'm now injecting a PooledConnectionFactory (or a Spring CachingConnectionFactory) into JMS producer clients only, while the plain non-pooled ActiveMQ connection factory (built with <amq:connectionFactory>) goes into the listener containers, built with either Spring <jms:listener-container> or Spring Integration <int-jms:message-driven-channel-adapter>; on those, I set the concurrency and cache-level attributes instead.
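For reference, here is a minimal sketch of this split (bean ids are illustrative, trimmed down from my configuration above): the pooled factory is wired only into the producing adapters, while the listener container gets the plain factory directly:

```xml
<!-- Plain, non-pooled factory: referenced by listener containers -->
<amq:connectionFactory id="amqJmsFactory" brokerURL="${jms.broker.url}" />

<!-- Pooled factory: injected into producing (outbound) components only -->
<bean id="pooledJmsConnectionFactory"
    class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
    <property name="connectionFactory" ref="amqJmsFactory" />
    <property name="maxConnections" value="2" />
</bean>

<!-- Consumer side: plain factory, container handles its own caching -->
<jms:listener-container factory-id="activationJmsListenerContainerFactory"
    connection-factory="amqJmsFactory"
    concurrency="1" transaction-manager="centralTransactionManager" />

<!-- Producer side: pooled factory -->
<int-jms:outbound-channel-adapter id="invoiceEventJmsOutboundChannelAdapter"
    channel="outgoingInvoiceEventJmsChannel" destination-name="outgoingEvent"
    connection-factory="pooledJmsConnectionFactory" />
```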

An additional difficulty was that we pass a locally constructed transaction manager to the listener container factories in order to synchronise the JMS message commit with the database commit. This causes the listener container to completely disable its connection caching mechanism by default, unless you also set an explicit cache level (see the org.springframework.jms.listener.DefaultMessageListenerContainer.setCacheLevel(int) Javadoc).
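With the XML namespace, this means setting the cache attribute explicitly on the container; a sketch (cache="consumer" is my assumption about a suitable level here, the attribute maps to DefaultMessageListenerContainer's cache levels):

```xml
<!-- With an external transaction manager, caching is disabled by default
     unless a cache level is set explicitly -->
<jms:listener-container factory-id="activationJmsListenerContainerFactory"
    connection-factory="amqJmsFactory"
    concurrency="1"
    cache="consumer"
    transaction-manager="centralTransactionManager" />
```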

I'll close this answer by saying that it was hard to get to the solution, especially because I got no feedback at all from any ActiveMQ developer, either on the mailing list or on the issue tracker, even though IMHO this may be seen as a security problem. This inactivity, together with the lack of alternatives, makes me wonder whether I should still consider JMS for my next project.

Mauro Molinari
  • Looks like Im facing same or very similar problem ... Not sure if I can see what really needs to be done to resolve it ... https://stackoverflow.com/questions/61406593/tracing-memory-leak-in-spring-azure-qpid-jms-code – JavaDude Apr 26 '20 at 17:28

We're encountering the same problem at a client's site.

Looking at the ActiveMQ code, there's a RemoveTransactionAction class in ConnectionStateTracker that deletes the entries in response to an OpenWire RemoveInfo (type 12) command, which ActiveMQ seems to generate once the broker has received the message.

SimonD
  • I finally solved the problem, I think in our case we've hit AMQ-6603 bug: https://issues.apache.org/jira/browse/AMQ-6603. I'm going to add a reply with further details. – Mauro Molinari Mar 13 '19 at 15:08

TL;DR

Disable anonymousProducers on your PooledConnectionFactory.


We had a very similar problem: a service running out of memory when using JMS transactions and a PooledConnectionFactory configured with a maximum of 8 connections.

However, we weren't using DefaultMessageListenerContainer or Spring, and we were only sending on one producer.

This producer was responsible for sending a large number of messages from a batch job, and we found that when the connection failed over, it would leave those messages on the ConnectionStateTracker attached to the old connection. After a number of failovers, these messages accumulated on old connections to the point where we ran out of heap.

It seems the only way to clear these messages from memory is to close the producer after committing the JMS transaction. This removes the ProducerState instance from SessionState on the ConnectionStateTracker.

The call to RemoveTransactionAction that SimonD mentioned in his answer (which happens automatically after committing the JMS transaction) just removes the TransactionState from the ConnectionState, but still leaves the producer and its messages on the SessionState object.

Unfortunately, calling close() on the producer doesn't work out of the box with PooledConnectionFactory, because by default it uses anonymous producers, and calling close() on an anonymous producer has no effect. You must first call setUseAnonymousProducers(false) on the PooledConnectionFactory for closing the producer to have any effect.
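If you configure the pool in Spring XML rather than in code, the same flag can be set as a bean property (the property name matches PooledConnectionFactory.setUseAnonymousProducers; the bean ids are just for illustration):

```xml
<bean id="jmsConnectionFactory"
    class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
    <property name="connectionFactory" ref="amqJmsFactory" />
    <!-- Required so that close() on a pooled producer actually closes it -->
    <property name="useAnonymousProducers" value="false" />
</bean>
```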

It's also worth pointing out that you must call close() on the producer itself: calling close() on the session will not close the producer, despite what the JavaDoc for ActiveMQSession suggests; instead, it calls the producer's dispose() method.

movint
  • Thanks for sharing, interesting to know. Unfortunately, when using Spring, you usually don't have direct control on these low-level components (and you probably shouldn't have to), so while you can certainly disable anonymous producers at factory level, closing the producer isn't that straight. I'm just curious to know if you looked at [AMQ-6603](https://issues.apache.org/jira/browse/AMQ-6603). – Mauro Molinari Apr 16 '19 at 08:11