
I am running a clustered application on WildFly 25 in a Kubernetes cluster. Users are distributed across the cluster via the ingress. Currently all WildFly nodes in the cluster share the same EAR and have the same configuration, based on standalone-full.xml.

I would like to extend the application to use a JMS client that submits messages to a single JMS queue from any node in the cluster and have them processed by a single JMS consumer running on a specific node in the cluster. It basically works, and managing/controlling the JMS consumer on a single node is not a problem. Messages created on the same node as the consumer are processed correctly. However, in my quick research and tests I have not found a simple solution to achieve my goal of having messages produced on any node in the cluster processed by the single JMS consumer.
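For context, the producer side is a plain JMS send. A minimal sketch of what runs on each node (the bean and queue names here are assumptions, not from my actual application; WildFly 25 uses the javax.jms namespace):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

// Hypothetical producer bean; any node in the cluster may execute this.
@Stateless
public class MessageSubmitter {

    // Injects a context backed by the default pooled connection factory
    // (java:jboss/DefaultJMSConnectionFactory in standalone-full.xml).
    @Inject
    private JMSContext context;

    // Assumed queue entry -- substitute the JNDI name of your own jms-queue.
    @Resource(lookup = "java:/jms/queue/MyQueue")
    private Queue queue;

    public void submit(String payload) {
        context.createProducer().send(queue, payload);
    }
}
```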

I tried using the JDBC journal instead of the file-based journal, thinking that the central DB would be used by all JMS clients in the cluster and I could simply start my single consumer to process the queue:

                <journal datasource="mariadb" 
                         database="my_db" 
                         messages-table="jms_m"
                         large-messages-table="jms_lm" 
                         page-store-table="jms_ps" 
                         jms-bindings-table="jms_b"/>

This worked to the extent that the DB tables were created and messages were added and processed for each individual node. So with this configuration I still need a consumer per node (on the node) to process messages created by that node. (Reading more docs, I think this setup is wrong anyway: ActiveMQ expects distinct tables per node; its docs discuss a table prefix rather than the full table names that are configurable in WildFly.) So anyway, this is not what I wanted.

Post-solution addition for posterity (as noted in the accepted answer):

Having all the nodes share the same tables will cause undefined behavior (i.e. it will break).

How can I configure the JMS queue (or JMS client) to allow me to submit messages from any node in the cluster to be processed (in order) by the consumer on a single node? I was hoping to avoid having specific standalone configurations per node (consumer/producer) if possible.

sprockets
  • How did you configure the messaging cluster? Did you introduce replication or do you want a shared store? – ehsavoie Jul 03 '23 at 14:34
  • @ehsavoie I guess that is the primary purpose of my question, to determine what the best approach is. Until now, I only tried (and failed) configuring it to use a JDBC journal. Before attempting additional alternative solutions (it seems there are several), I wanted to ask the SO hive mind for recommendations. If it is of value, I can post my activemq subsystem configuration, but it is pretty vanilla, just adding my queue to the default server config. – sprockets Jul 03 '23 at 14:47

1 Answer


You just need to cluster your brokers. WildFly ships with an example of how to do this in standalone/configuration/standalone-full-ha.xml, e.g.:

        <subsystem xmlns="urn:jboss:domain:messaging-activemq:13.0">
            <server name="default">
                <security elytron-domain="ApplicationDomain"/>
                <cluster password="${jboss.messaging.cluster.password:CHANGE ME!!}"/>
                <statistics enabled="${wildfly.messaging-activemq.statistics-enabled:${wildfly.statistics-enabled:false}}"/>
                <security-setting name="#">
                    <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
                </security-setting>
                <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10" redistribution-delay="1000"/>
                <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
                <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
                    <param name="batch-delay" value="50"/>
                </http-connector>
                <in-vm-connector name="in-vm" server-id="0">
                    <param name="buffer-pooling" value="false"/>
                </in-vm-connector>
                <http-acceptor name="http-acceptor" http-listener="default"/>
                <http-acceptor name="http-acceptor-throughput" http-listener="default">
                    <param name="batch-delay" value="50"/>
                    <param name="direct-deliver" value="false"/>
                </http-acceptor>
                <in-vm-acceptor name="in-vm" server-id="0">
                    <param name="buffer-pooling" value="false"/>
                </in-vm-acceptor>
                <jgroups-broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
                <jgroups-discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
                <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
                <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
                <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
                <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
                <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
                <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
            </server>
        </subsystem>

Note the cluster, jgroups-broadcast-group, jgroups-discovery-group, and cluster-connection which are not part of the default standalone-full.xml.
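Since all your nodes share one configuration, you could apply that delta once with the jboss-cli instead of editing the XML by hand. A sketch under assumptions (resource and attribute names follow the messaging-activemq model of this WildFly version; verify them with `:read-resource-description` before relying on this):

```shell
# Run against standalone-full.xml, e.g. via:
#   jboss-cli.sh --connect --file=cluster-messaging.cli
batch
# Password shared by all brokers in the cluster (change it!).
/subsystem=messaging-activemq/server=default:write-attribute(name=cluster-password, value="CHANGE ME!!")
# Broadcast/discovery over the existing JGroups channel.
/subsystem=messaging-activemq/server=default/jgroups-broadcast-group=bg-group1:add(jgroups-cluster=activemq-cluster, connectors=[http-connector])
/subsystem=messaging-activemq/server=default/jgroups-discovery-group=dg-group1:add(jgroups-cluster=activemq-cluster)
# Cluster connection that moves messages between brokers.
/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:add(cluster-connection-address=jms, connector-name=http-connector, discovery-group=dg-group1)
run-batch
reload
```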

I'm not exactly sure what you mean by "in order," but there are no strict ordering guarantees among all the nodes in the cluster since clustering involves moving messages between nodes behind-the-scenes.

Lastly, you definitely don't need to use a centralized database for all the cluster nodes. The file-based journal is the recommended approach. Having all the nodes share the same tables will cause undefined behavior (i.e. it will break).

Justin Bertram
    gotta admit, I was hoping for a quick and easy solution from you, and voila. Thanks! Lemme test this out and assuming it works, accept the answer. As for the message order, yeah, I know there is a risk of order getting lost in transit. Luckily, I am not working in a bank. – sprockets Jul 03 '23 at 15:02
  • Thanks again Justin, as you indicated, the standalone-full-ha.xml file was the key to the solution. Had I oriented myself on it instead of the standalone-full.xml I might have recognized the jgroups additions as the key to the solution. Once adjusted as you proposed, the JMS "just worked" (with clustered brokers). – sprockets Jul 03 '23 at 17:45
  • Always glad to help, @sprockets! Thanks for following up. – Justin Bertram Jul 03 '23 at 18:42