I am setting up an ActiveMQ Artemis cluster in Kubernetes with 3 master/slave groups:

activemq-artemis-master-0                               1/1     Running
activemq-artemis-master-1                               1/1     Running
activemq-artemis-master-2                               1/1     Running
activemq-artemis-slave-0                                0/1     Running
activemq-artemis-slave-1                                0/1     Running
activemq-artemis-slave-2                                0/1     Running

The Artemis version is 2.17.0. Here is my cluster configuration from broker.xml on master-0. The configuration is the same on the other brokers except that the connector-ref is changed to match each broker:

<?xml version="1.0"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
  <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
    <name>activemq-artemis-master-0</name>
    <persistence-enabled>true</persistence-enabled>
    <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
    <journal-type>ASYNCIO</journal-type>
    <paging-directory>data/paging</paging-directory>
    <bindings-directory>data/bindings</bindings-directory>
    <journal-directory>data/journal</journal-directory>
    <large-messages-directory>data/large-messages</large-messages-directory>
    <journal-datasync>true</journal-datasync>
    <journal-min-files>2</journal-min-files>
    <journal-pool-files>10</journal-pool-files>
    <journal-device-block-size>4096</journal-device-block-size>
    <journal-file-size>10M</journal-file-size>
    <!--
       This value was determined through a calculation.
       Your system could perform 1.1 writes per millisecond
       on the current journal configuration.
       That translates as a sync write every 911999 nanoseconds.

       Note: If you specify 0 the system will perform writes directly to the disk.
             We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
      -->
    <journal-buffer-timeout>100000</journal-buffer-timeout>
    <!--
        When using ASYNCIO, this will determine the writing queue depth for libaio.
       -->
    <journal-max-io>4096</journal-max-io>
    <!--
        You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
         <network-check-NIC>theNicName</network-check-NIC>
        -->
    <!--
        Use this to use an HTTP server to validate the network
         <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
    <!-- <network-check-period>10000</network-check-period> -->
    <!-- <network-check-timeout>1000</network-check-timeout> -->
    <!-- this is a comma separated list, no spaces, just DNS or IPs
           it should accept IPV6

           Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
                    Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                    You can use a list of multiple IPs; any single successful ping will allow the server to continue running -->
    <!-- <network-check-list>10.0.0.1</network-check-list> -->
    <!-- use this to customize the ping used for ipv4 addresses -->
    <!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
    <!-- use this to customize the ping used for ipv6 addresses -->
    <!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
    <!-- how often we are looking for how many bytes are being used on the disk in ms -->
    <disk-scan-period>5000</disk-scan-period>
    <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
    <max-disk-usage>90</max-disk-usage>
    <!-- should the broker detect dead locks and other issues -->
    <critical-analyzer>true</critical-analyzer>
    <critical-analyzer-timeout>120000</critical-analyzer-timeout>
    <critical-analyzer-check-period>60000</critical-analyzer-check-period>
    <critical-analyzer-policy>HALT</critical-analyzer-policy>
    <page-sync-timeout>2244000</page-sync-timeout>
    <!-- the system will enter into page mode once you hit this limit.
           This is an estimate in bytes of how much the messages are using in memory

            The system will use half of the available memory (-Xmx) by default for the global-max-size.
            You may specify a different value here if you need to customize it to your needs.

            <global-max-size>100Mb</global-max-size>

      -->
    <acceptors>
      <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
      <!-- amqpCredits: The number of credits sent to AMQP producers -->
      <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
      <!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
                                      as duplicate detection requires applicationProperties to be parsed on the server. -->
      <!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
                                       default: 102400, -1 would mean to disable large message control -->
      <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                    "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                    See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
      <!-- Acceptor for every supported protocol -->
      <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
      <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
      <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
      <!-- STOMP Acceptor. -->
      <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
      <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
      <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
      <!-- MQTT Acceptor -->
      <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
    </acceptors>
    <security-settings>
      <security-setting match="#">
        <permission type="createNonDurableQueue" roles="amq"/>
        <permission type="deleteNonDurableQueue" roles="amq"/>
        <permission type="createDurableQueue" roles="amq"/>
        <permission type="deleteDurableQueue" roles="amq"/>
        <permission type="createAddress" roles="amq"/>
        <permission type="deleteAddress" roles="amq"/>
        <permission type="consume" roles="amq"/>
        <permission type="browse" roles="amq"/>
        <permission type="send" roles="amq"/>
        <!-- we need this otherwise ./artemis data imp wouldn't work -->
        <permission type="manage" roles="amq"/>
      </security-setting>
    </security-settings>
    <address-settings>
      <!-- if you define auto-create on certain queues, management has to be auto-create -->
      <address-setting match="activemq.management#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
      <!--default for catch all-->
      <address-setting match="#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
    </address-settings>
    <addresses>
      <address name="DLQ">
        <anycast>
          <queue name="DLQ"/>
        </anycast>
      </address>
      <address name="ExpiryQueue">
        <anycast>
          <queue name="ExpiryQueue"/>
        </anycast>
      </address>
    </addresses>
    <!-- Uncomment the following if you want to use the standard LoggingActiveMQServerPlugin plugin to log events
      <broker-plugins>
         <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
      </broker-plugins>
      -->
    <cluster-user>clusterUser</cluster-user>
    <cluster-password>aShortclusterPassword</cluster-password>
    <connectors>
      <connector name="activemq-artemis-master-0">tcp://activemq-artemis-master-0.activemq-artemis-master.svc.cluster.local:61616</connector>
      <connector name="activemq-artemis-slave-0">tcp://activemq-artemis-slave-0.activemq-artemis-slave.svc.cluster.local:61616</connector>
      <connector name="activemq-artemis-master-1">tcp://activemq-artemis-master-1.activemq-artemis-master.svc.cluster.local:61616</connector>
      <connector name="activemq-artemis-slave-1">tcp://activemq-artemis-slave-1.activemq-artemis-slave.svc.cluster.local:61616</connector>
      <connector name="activemq-artemis-master-2">tcp://activemq-artemis-master-2.activemq-artemis-master.svc.cluster.local:61616</connector>
      <connector name="activemq-artemis-slave-2">tcp://activemq-artemis-slave-2.activemq-artemis-slave.svc.cluster.local:61616</connector>
    </connectors>
    <cluster-connections>
      <cluster-connection name="activemq-artemis">
        <connector-ref>activemq-artemis-master-0</connector-ref>
        <retry-interval>500</retry-interval>
        <retry-interval-multiplier>1.1</retry-interval-multiplier>
        <max-retry-interval>5000</max-retry-interval>
        <initial-connect-attempts>-1</initial-connect-attempts>
        <reconnect-attempts>-1</reconnect-attempts>
        <message-load-balancing>ON_DEMAND</message-load-balancing>
        <max-hops>1</max-hops>
        <!-- scale-down>true</scale-down -->
        <static-connectors>
          <connector-ref>activemq-artemis-master-0</connector-ref>
          <connector-ref>activemq-artemis-slave-0</connector-ref>
          <connector-ref>activemq-artemis-master-1</connector-ref>
          <connector-ref>activemq-artemis-slave-1</connector-ref>
          <connector-ref>activemq-artemis-master-2</connector-ref>
          <connector-ref>activemq-artemis-slave-2</connector-ref>
        </static-connectors>
      </cluster-connection>
    </cluster-connections>
    <ha-policy>
      <replication>
        <master>
          <group-name>activemq-artemis-0</group-name>
          <quorum-vote-wait>12</quorum-vote-wait>
          <vote-on-replication-failure>true</vote-on-replication-failure>
          <!--we need this for auto failback-->
          <check-for-live-server>true</check-for-live-server>
        </master>
      </replication>
    </ha-policy>
  </core>
  <core xmlns="urn:activemq:core">
    <jmx-management-enabled>true</jmx-management-enabled>
  </core>
</configuration>

My consumer is defined as a JmsListener in a Spring Boot app. During consumption of the messages in a queue, the Spring Boot app crashed, which resulted in Kubernetes deleting the pod and recreating a new one. However, I noticed that the new pod did not connect to the same Artemis node, so the leftover messages from the previous connection were never consumed.
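
For reference, the listener is just a plain @JmsListener component, roughly like this (a minimal sketch; the queue name is illustrative and the connection factory is configured by Spring Boot):

    import org.springframework.jms.annotation.JmsListener;
    import org.springframework.stereotype.Component;

    @Component
    public class OrderListener {

        // Queue name is illustrative. The queue is auto-created on whichever
        // broker the Spring Boot connection factory happens to connect to.
        @JmsListener(destination = "orders")
        public void onMessage(String body) {
            // ... business processing; an unhandled failure here crashes the pod
            System.out.println("Received: " + body);
        }
    }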

I thought the whole point of using the cluster is to have all the Artemis nodes act as one unit to deliver messages to the consumer regardless of which node it connects to. Am I wrong? If the cluster cannot reroute the consumer connection to the correct node (which holds the leftover messages from the previous consumer), then what is the recommended way to deal with this situation?

lnthai2002

1 Answer


First, it's important to note that there's no feature to make a client reconnect to the broker from which it disconnected after the client crashes/restarts. Generally speaking, the client shouldn't really care which broker it connects to; that's one of the main goals of horizontal scalability.
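
For example, a core JMS client is typically given a URL that lists all the live brokers so that any of them can accept the connection. A minimal sketch (the hostnames are taken from your connectors; the credentials and URL parameters are just placeholders):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;

    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class ClusterClient {
        public static void main(String[] args) throws Exception {
            // The client picks one of the listed nodes for the initial connection
            // and retries/fails over according to the URL parameters.
            ConnectionFactory cf = new ActiveMQConnectionFactory(
                  "(tcp://activemq-artemis-master-0.activemq-artemis-master.svc.cluster.local:61616,"
                + "tcp://activemq-artemis-master-1.activemq-artemis-master.svc.cluster.local:61616,"
                + "tcp://activemq-artemis-master-2.activemq-artemis-master.svc.cluster.local:61616)"
                + "?ha=true&reconnectAttempts=-1");
            // Credentials are placeholders for whatever users you have configured.
            try (Connection connection = cf.createConnection("someUser", "somePassword")) {
                connection.start();
                // ... create sessions and consumers as usual
            }
        }
    }

Which node a given (re)connection ends up on is effectively arbitrary, which is why redistribution (below) is the mechanism that moves stranded messages to wherever the consumer actually is.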

It's also worth noting that if the number of messages on the brokers and the number of connected clients are low enough that this condition arises frequently, then you almost certainly have too many brokers in your cluster.

That said, I believe the reason your client isn't getting the messages it expects is that you're using the default redistribution-delay (i.e. -1), which means messages will not be redistributed to other nodes in the cluster. If you want to enable redistribution (which it seems like you do) then you should set it to >= 0, e.g.:

      <address-setting match="#">
        ...
        <redistribution-delay>0</redistribution-delay>
        ...
      </address-setting>

You can read more about redistribution in the documentation.


Aside from that, you may want to reconsider your topology in general. Typically, if you're in a cloud-like environment (e.g. one using Kubernetes) where the infrastructure itself will restart failed pods, then you wouldn't use a master/slave configuration. You'd simply mount the journal on persistent storage outside the pod (e.g. using NFSv4) so that when a node fails it is restarted and then reconnects to its persistent storage. This effectively provides broker high availability (which is what master/slave is designed for outside of cloud environments).

Also, a single instance of ActiveMQ Artemis can handle millions of messages per second depending on the use-case, so you may not actually need 3 live nodes for your expected load.

Note: these are general recommendations about your overall architecture and are not directly related to your question.

Justin Bertram
  • Thank you for your input. I tried the redistribution-delay and it worked as expected. My concern with redistribution is that it basically moves the remaining messages to the queue on the node the revived consumer connects to. For it to work, does the redistribution-delay have to be longer than the time it takes for the consumer to come back to life and reconnect (so the queue can be auto-created)? – lnthai2002 May 11 '21 at 21:00
  • While your suggestion to use redistribution does solve the problem, it does not answer the question of why the revived consumer does not reconnect to the same node. Is my cluster set up incorrectly? I appreciate your recommendation about not using slaves in a Kubernetes environment, and I think it makes sense since my testing shows that the slave actually kicks in and yields within a few seconds when the master is down. However, I don't think the use of slaves prevents the revived consumer from reconnecting to the same Artemis node. – lnthai2002 May 11 '21 at 21:13
  • I updated my answer to address your comments. However, I don't understand your question about the redistribution-delay length and queue auto-creation. Redistribution only happens when a queue on one node has messages but no consumers and that same queue has consumers but no messages on another node. The delay is configurable to deal with situations where clients disconnect and reconnect quickly to the same node (i.e. without crashing). Hope that helps! – Justin Bertram May 11 '21 at 23:52
  • Thank you, Justin, for the clarification. I thought the redistribution delay only kicks in if there is a corresponding queue with a consumer, so if redistribution-delay is set to 1 minute but the consumer takes 5 minutes to come back to life then the messages would not be redistributed to the new queue. I am going to test this case. – lnthai2002 May 12 '21 at 13:47
  • I did a test with a short redistribution-delay and a long consumer revival time, and I can see the leftover messages transferred to and consumed by the revived consumer. – lnthai2002 May 13 '21 at 13:02
  • This is the expected behavior. Thanks for confirming. – Justin Bertram May 13 '21 at 13:18