
I have two WildFly 18 instances running locally: n1 and n2. I would like instance n2 to consume the messages produced by instance n1, as a first step towards an HA scenario. After reading the RH EAP docs, I have done the following:

1- Defined an exposed JMS queue on n2. I also added security settings and a remote connection factory in the ActiveMQ subsystem:

[...]
<server name="default">
   <security-setting name="#">
       <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
   </security-setting>
[...]
   <jms-queue name="testQueue" entries="queue/test java:jboss/exported/jms/queue/test"/>
   <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
</server>
[...]

2- Configured JGroups to use TCPPING with an initial list of hosts, so that the nodes can discover each other and form a cluster:

[...]
<protocol type="org.jgroups.protocols.TCPPING">
    <property name="initial_hosts">127.0.0.1[8600]</property>
    <property name="port_range">0</property>
</protocol>
[...]
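
(For completeness: with two nodes, my understanding is that each instance would normally list both cluster members in `initial_hosts`. Assuming n1 uses the default JGroups TCP port 7600 and n2 runs with a port offset of 1000, hence 8600 — both ports are assumptions on my part — that would look like:)

```xml
<protocol type="org.jgroups.protocols.TCPPING">
    <!-- one host[port] entry per cluster member -->
    <property name="initial_hosts">127.0.0.1[7600],127.0.0.1[8600]</property>
    <property name="port_range">0</property>
</protocol>
```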

3- Then I started the two instances, and I see the following message in the application logs:

(Thread-12 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@7124120f)) AMQ221027: Bridge ClusterConnectionBridge@c6997b5 [name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf], temp=false]@2747e684 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@c6997b5 [name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.f3561996-f354-11ea-83cc-4c32759d60cf, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf], temp=false]@2747e684 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=<port_number>&host=localhost], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1775690639[nodeUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8323&host=localhost, address=jms, server=ActiveMQServerImpl::serverUUID=c9af42f1-f354-11ea-8e25-4c32759d60cf])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEndpoint=http-acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=<port_number>&host=localhost], discoveryGroupConfiguration=null]] is connected

But when I try to send messages from n1 to n2 using the following JNDI conf,

java.naming.factory.initial = org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url = remote://localhost:8323
java.naming.security.principal = ***
java.naming.security.credentials = ***
Connection Factory JNDI name = jms/RemoteConnectionFactory
Queue JNDI name = jms/queue/test

... I get this error after a certain timeout (~30s):

javax.naming.CommunicationException: WFNAM00018: Failed to connect to remote host [Root exception is java.io.IOException: JBREM000202: Abrupt close on Remoting connection 4ba0f2c1 to localhost/127.0.0.1:8323 of endpoint (anonymous)

I have also tried connecting to the same queue using a simple JMS client (https://plugins.jetbrains.com/plugin/10949-jms-messenger), and that connection actually succeeded: the message was delivered, although the receiver then logged the following error:

ERROR [com.my.app.Receiver] (Thread-14 (ActiveMQ-client-global-threads)) Unknown message: ActiveMQMessage[ID:5f71e993-f377-11ea-acfc-169f02eb582c]:PERSISTENT/ClientMessageImpl[messageID=442, durable=true, address=jms.queue.test,userID=5f71e993-f377-11ea-acfc-169f02eb582c,properties=TypedProperties[__AMQ_CID=5f684ca0-f377-11ea-acfc-169f02eb582c,_AMQ_ROUTING_TYPE=1]]

Could you please point out what is wrong and explain why? Thanks a lot.

LoreV
  • Why exactly are you trying to cluster the two nodes together? Why not simply configure `n2` to consume messages from `n1`? Also, what on `n2` will be consuming the messages? An MDB? Something else? Please elaborate. – Justin Bertram Sep 10 '20 at 13:59
  • Hi Justin. I had in mind to have `n2` consuming messages sent by `n1`. In fact, `n1` should have an MDB with a JNDI connection factory to connect to n2's exposed JMS queue. – LoreV Sep 10 '20 at 15:12
  • 1
    I'm confused. If `n2` is supposed to consume messages from `n1` then why would `n1` have the MDB? Also, you didn't explain why you clustered the two brokers. – Justin Bertram Sep 10 '20 at 15:19
  • Sorry for the confusion. Let me try to explain: `n1`, has a sender bean. It sends messages to `n2`, which has a receiver bean. The latter is used to receive and deserialize messages. I would like to cluster the two instances in order to do some load balancing. – LoreV Sep 10 '20 at 15:26
  • Why do you need load-balancing if the producer is on `n1` and the consumer is on `n2`? Typically in a load-balancing scenario you have producers and consumers connected to all nodes (i.e. in order to balance the load). – Justin Bertram Sep 10 '20 at 15:30
  • Hi Justin, you are right - that would be the next step. – LoreV Sep 10 '20 at 15:36
  • Then this problem is moot. If you cluster your brokers and have producers and consumers on each node then you won't have to explicitly connect to other nodes. All the clients can work with local resources and the cluster will take care of balancing load. That said, a single instance of ActiveMQ Artemis can potentially handle millions of messages per second so depending on your use-case you might not need load balancing at all. I recommend you conduct performance benchmarking before you complicate your architecture with a cluster. – Justin Bertram Sep 10 '20 at 16:14
  • Any update on this? – Justin Bertram Sep 11 '20 at 18:47
  • Hi there, yes. I managed to connect to the `n2` node by providing the right jndi properties on the client side – LoreV Sep 11 '20 at 19:20
  • You should provide your solution in an answer and mark it as correct. – Justin Bertram Sep 11 '20 at 19:52

1 Answer


I solved this issue by working on both the WildFly and the JNDI configuration. The error message is very generic, but at least in my case the following WildFly config:

<subsystem xmlns="urn:jboss:domain:messaging-activemq:8.0">
  <server name="default">
    <http-acceptor name="http-acceptor-throughput" http-listener="messaging">
      <param name="batch-delay" value="50"/>
      <param name="direct-deliver" value="false"/>
    </http-acceptor>
    ...
    <http-connector name="http-connector-throughput" socket-binding="messaging-throughput" endpoint="http-acceptor-throughput">
      <param name="batch-delay" value="50"/>
    </http-connector>
    ...
    <jms-queue name="test" entries="queue/test java:jboss/exported/jms/test"/>
    <broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" broadcast-period="5000" connectors="http-connector"/>
    <discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
    ...
    <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
  </server>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:remoting:4.0">
  <http-connector name="messaging-remoting-connector" connector-ref="messaging-http" security-realm="ApplicationRealm"/>
</subsystem>
...
<socket-binding-group ... >
  ...
  <socket-binding name="messaging" port="8323"/>
  <socket-binding name="messaging-throughput" port="8324"/>
  ...
</socket-binding-group>

worked with the following JNDI configuration:

java.naming.factory.initial = org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url = remote://localhost:8323
java.naming.security.principal = ***
java.naming.security.credentials = ***
Connection Factory JNDI name = jms/RemoteConnectionFactory
Queue JNDI name = jms/test
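
For reference, this is roughly the client-side equivalent of those properties (host, port, user and password are placeholders; actually connecting additionally needs the `wildfly-naming-client` dependency on the classpath and a running server):

```java
import java.util.Properties;
import javax.naming.Context;

public class RemoteJmsEnv {

    // Build a JNDI environment matching the properties above.
    public static Properties remoteEnv(String host, int port, String user, String password) {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.wildfly.naming.client.WildFlyInitialContextFactory");
        env.put(Context.PROVIDER_URL, "remote://" + host + ":" + port);
        env.put(Context.SECURITY_PRINCIPAL, user);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    // Typical usage against a live server (placeholder credentials):
    //   Context ctx = new InitialContext(remoteEnv("localhost", 8323, "appuser", "apppass"));
    //   ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
    //   Queue queue = (Queue) ctx.lookup("jms/test"); // matches java:jboss/exported/jms/test
}
```

Note that the queue is now looked up as `jms/test`, matching the `java:jboss/exported/jms/test` entry (remote clients see everything below `java:jboss/exported/` without that prefix).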

Also, since the principal/credentials were not part of the ApplicationRealm, I initially got a 403 HTTP response code when calling the messaging endpoint. To get that working too, I had to add the user and its credentials with the add-user.sh script (found in the WildFly bin folder).
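
For example, something along these lines (username, password and group are placeholders; `-a` adds an application-realm user, and the group should match a role allowed by the `security-setting` element, such as `guest`):

```shell
$JBOSS_HOME/bin/add-user.sh -a -u appuser -p 'appPassword1!' -g guest
```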
