I'm trying to configure retry capability in my Spring Integration project, where I connect to Rabbit servers following the details provided in section 3.3.1 of this article. But it looks like the retry policy isn't kicking in. This is what I have in my configuration:

<!-- Spring AMQP Template -->
<rabbit:template id="amqpTemplate" connection-factory="connectionFactory"
    retry-template="retryTemplate" exchange="myExchange" />

<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
    <property name="backOffPolicy">
        <bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
            <property name="initialInterval" value="8" />
            <property name="multiplier" value="100.0" />
            <property name="maxInterval" value="100000" />
        </bean>
    </property>
    <property name="retryPolicy">
        <bean class="org.springframework.retry.policy.SimpleRetryPolicy">
            <property name="maxAttempts" value="3"/>
        </bean>
    </property>         
</bean>
<!-- Spring AMQP Admin -->
<rabbit:admin connection-factory="connectionFactory" />

Based on the snippet, I'm expecting the retry to happen 3 times at exponentially increasing intervals. But based on the logs, the retry attempt is being made at a 7-second interval, and it goes on forever (it doesn't stop after 3 attempts).

Wondering if someone could point out what is wrong in my configuration.

ignatan

1 Answer

First, maxAttempts=3 means 3 attempts (2 retries), so you should see the initial attempt, a second attempt 8ms later, then a final attempt 800ms later.

A multiplier of 100 seems excessive - the next attempt (if maxAttempts were 4) would be 80 seconds later.
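For illustration (this example is not part of the original answer, and the values are only a sketch), a more conventional exponential backoff might look like this:

<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
    <!-- Intervals are in milliseconds: start at 1s, double each time, cap at 10s -->
    <property name="backOffPolicy">
        <bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
            <property name="initialInterval" value="1000" />
            <property name="multiplier" value="2.0" />
            <property name="maxInterval" value="10000" />
        </bean>
    </property>
    <!-- maxAttempts counts the initial call, so 3 means 2 retries -->
    <property name="retryPolicy">
        <bean class="org.springframework.retry.policy.SimpleRetryPolicy">
            <property name="maxAttempts" value="3" />
        </bean>
    </property>
</bean>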

I suggest you turn on DEBUG logging to follow the retry progress.

Gary Russell
  • Thanks for your response. When I turned up logging, I'm seeing that it's the SimpleMessageListenerContainer's DEFAULT_RECOVERY_INTERVAL that is being used. From the logs: 09:01:52,077 DEBUG [org.springframework.amqp.rabbit.listener.BlockingQueueConsumer] (SimpleAsyncTaskExecutor-4) Starting consumer Consumer: tags=[{}], channel=null, acknowledgeMode=AUTO local queue size=0 09:01:53,079 DEBUG [org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer] (SimpleAsyncTaskExecutor-4) Recovering consumer in 5000 ms. So it was 5 sec and not 7, but I'm wondering why the RetryPolicy isn't kicking in. – ignatan Mar 31 '15 at 13:20
  • No; you are misunderstanding retry in the `RabbitTemplate` - the retry there is independent of the message listener container - of course if there's a connection problem then both will be impacted. The message arrives in the container; presumably you are calling the template on the container thread. Retries will be attempted; when exhausted, the exception will be thrown back to the container. You need to provide much more info about your application if you need help. If you can't figure it out from the logs, post them somewhere (or a Gist if they're too big) together with complete config. – Gary Russell Mar 31 '15 at 15:59
  • Wondering if I'm trying to use RetryTemplate incorrectly. My app picks messages from RabbitMQ (Server A), transforms the data, and puts them on a different RabbitMQ (Server B). What I just realized is that if I bring down Server B, the retry works: I can see the attempts being made based on the retryPolicy, and it throws an exception after the attempts are exhausted. Now, when I bring down Server A, I can see repeated attempts to connect to Server A by the listener container at a 5-second interval. Is there a way to apply the retry policy at the listener container level? – ignatan Mar 31 '15 at 18:42
  • Add an advice chain to the listener container with a [stateful retry advice](http://docs.spring.io/spring-amqp/docs/1.4.3.RELEASE/reference/html/amqp.html#async-listeners) - this requires the sender to provide a `messageId` header to manage state for the retries (a config sketch follows after this comment thread). A stateless advice will retry within the container without throwing back to the broker (until retries are exhausted and there is no recoverer). – Gary Russell Mar 31 '15 at 19:11
  • Thanks for the link Gary. From the article: "If the failure is caused by a dropped connection (not a business exception), then the consumer that is collecting messages for the listener has to be cancelled and restarted. The SimpleMessageListenerContainer loops endlessly trying to restart the consumer..... One side effect is that if the broker is down when the container starts, it will just keep trying until a connection can be established." Wondering if there is a way to apply the retry policy so that we can control the restart of the container... – ignatan Apr 01 '15 at 00:21
  • No; reconnecting to the broker when it is down is continuous and retried according to the `recoveryInterval`. Using an exponential retry (and eventually giving up) makes little sense for that scenario - how would the container ever find out that the broker is back? If you really want to do that, you could use some external monitor to keep trying to get a connection from the factory and, after some period of time, call `stop()` on the listener container. And then, presumably, call `start()` some time later. – Gary Russell Apr 01 '15 at 06:50
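As a rough sketch of the stateful retry advice mentioned in the comments above (this example is not from the original thread; the queue name serverAQueue and the listener bean transformerListener are hypothetical, and it reuses the retryTemplate from the question):

<!-- Interceptor that applies the RetryTemplate to each delivery; with stateful
     retry the exception is thrown back to the container between attempts, so the
     broker redelivers the message, and the sender must set a messageId header
     so the interceptor can correlate attempts -->
<bean id="retryInterceptor"
    class="org.springframework.amqp.rabbit.config.StatefulRetryOperationsInterceptorFactoryBean">
    <property name="retryOperations" ref="retryTemplate" />
    <!-- When retries are exhausted, reject the message without requeuing it -->
    <property name="messageRecoverer">
        <bean class="org.springframework.amqp.rabbit.retry.RejectAndDontRequeueRecoverer" />
    </property>
</bean>

<rabbit:listener-container connection-factory="connectionFactory" advice-chain="retryInterceptor">
    <rabbit:listener queues="serverAQueue" ref="transformerListener" />
</rabbit:listener-container>

Note that this governs retries of message processing; as the last comment explains, reconnecting to a downed broker is handled separately and is controlled by the container's recoveryInterval.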