
I have multiple JBoss nodes on different VMs, all running in standalone mode, and I am using a distributed Infinispan cache. Below is the code I am currently using.

JChannel jchannel = new JChannel(); // no-arg constructor uses the default udp.xml stack
jchannel.setDiscardOwnMessages(false);
jchannel.setName("losci_qa");
JGroupsTransport transport = new JGroupsTransport(jchannel);

manager = new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder()
          .transport().transport(transport)
          .nodeName(cacheClusterName + "-node")
          .clusterName(cacheClusterName)
          .build());

ConfigurationBuilder c = new ConfigurationBuilder();
c.clustering().cacheMode(CacheMode.DIST_SYNC)
          .hash().numOwners(numOwners).numSegments(numSegments).capacityFactor(capacityFactor);
c.invocationBatching().enable();
c.transaction().transactionMode(TransactionMode.TRANSACTIONAL).lockingMode(LockingMode.PESSIMISTIC);
manager.defineConfiguration(DIST, c.build());

The above code runs successfully on each individual node. The issue is that the two nodes never see each other's cache: when I run the code, each server prints the logs below.

Server 1 Logs:

2020-09-24 12:16:45,637 INFO  [org.infinispan.factories.GlobalComponentRegistry] (default task-1) ISPN000128: Infinispan version: Infinispan 'Infinity Minus ONE +2' 9.4.11.Final
2020-09-24 12:16:45,823 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (default task-1) ISPN000078: Starting JGroups channel losci_qa
2020-09-24 12:16:45,839 INFO  [stdout] (default task-1)
2020-09-24 12:16:45,839 INFO  [stdout] (default task-1) -------------------------------------------------------------------
2020-09-24 12:16:45,840 INFO  [stdout] (default task-1) GMS: address=losci_qa, cluster=losci_qa, physical address=10.100.101.82:60774
2020-09-24 12:16:45,840 INFO  [stdout] (default task-1) -------------------------------------------------------------------
2020-09-24 12:16:47,845 INFO  [org.jgroups.protocols.pbcast.GMS] (default task-1) losci_qa: no members discovered after 2003 ms: creating cluster as first member
2020-09-24 12:16:47,858 INFO  [org.infinispan.CLUSTER] (default task-1) ISPN000094: Received new cluster view for channel losci_qa: [losci_qa|0] (1) [losci_qa]
2020-09-24 12:16:47,865 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (default task-1) ISPN000079: Channel losci_qa local address is losci_qa, physical addresses are [10.10.10.82:60774]

Server 2 Logs:

2020-09-24 17:17:07,686 INFO  [org.infinispan.factories.GlobalComponentRegistry] (default task-1) ISPN000128: Infinispan version: Infinispan 'Infinity Minus ONE +2' 9.4.11.Final
2020-09-24 17:17:07,936 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (default task-1) ISPN000078: Starting JGroups channel losci_qa
2020-09-24 17:17:07,958 INFO  [stdout] (default task-1)
2020-09-24 17:17:07,958 INFO  [stdout] (default task-1) -------------------------------------------------------------------
2020-09-24 17:17:07,958 INFO  [stdout] (default task-1) GMS: address=losci_qa, cluster=losci_qa, physical address=10.100.101.83:39828
2020-09-24 17:17:07,958 INFO  [stdout] (default task-1) -------------------------------------------------------------------
2020-09-24 17:17:09,966 INFO  [org.jgroups.protocols.pbcast.GMS] (default task-1) losci_qa: no members discovered after 2007 ms: creating cluster as first member
2020-09-24 17:17:09,981 INFO  [org.infinispan.CLUSTER] (default task-1) ISPN000094: Received new cluster view for channel losci_qa: [losci_qa|0] (1) [losci_qa]
2020-09-24 17:17:09,989 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (default task-1) ISPN000079: Channel losci_qa local address is losci_qa, physical addresses are [10.10.10.83:39828]

No members are discovered by the Infinispan cache.

The network domain is the same and the cluster name is the same. What am I doing wrong here? How can I make a cluster out of both nodes, and how can the nodes communicate with each other?

TIA

1 Answer


The first problem could be a firewall. Disable the firewalls on both nodes and test without them (you should re-enable them after fixing the rules).

With the no-arg JChannel constructor you default to udp.xml. The PING protocol multicasts hello messages, by default on 228.8.8.8:45588; check that multicasts are routed correctly between your machines (e.g. using netcat/nc).
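If multicast turns out to be blocked between the VMs, one commonly used alternative (not part of the original setup; a hedged sketch) is to switch JGroups to a TCP stack with a static member list. The tcp.xml stack ships inside the jgroups jar, and its TCPPING protocol reads the initial members from the jgroups.tcpping.initial_hosts system property; the hosts and port 7800 below are placeholders for your own nodes:

```java
// Hedged sketch: TCP-based discovery instead of UDP multicast.
// Assumes the stock tcp.xml bundled in the jgroups jar; hosts/ports are examples.
System.setProperty("jgroups.tcpping.initial_hosts",
        "10.10.10.82[7800],10.10.10.83[7800]");

JChannel jchannel = new JChannel("tcp.xml"); // instead of the no-arg (udp.xml) constructor
jchannel.setName("losci_qa");
JGroupsTransport transport = new JGroupsTransport(jchannel);
```

This sidesteps multicast entirely, at the cost of having to list the cluster members up front.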

If this doesn't help, enable TRACE logging to get more insight.
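For a quick test outside the container (assuming JGroups falls back to its JDK logging backend, which happens when no log4j is on the classpath), the org.jgroups logger can be opened up programmatically; inside JBoss you would instead set the org.jgroups category to TRACE in the server's logging subsystem. A minimal sketch:

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class JGroupsTraceLogging {
    public static void main(String[] args) {
        // JUL has no TRACE level; ALL covers FINER/FINEST, which the JDK
        // logging backend uses for trace-level messages.
        Logger jgroups = Logger.getLogger("org.jgroups");
        jgroups.setLevel(Level.ALL);
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.ALL);
        jgroups.addHandler(handler);
        System.out.println("org.jgroups level: " + jgroups.getLevel());
    }
}
```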

Radim Vansa
  • Thanks for your response. 1. **Firewall** is disabled on both machines (doesn't solve my problem). 2. As you said, since I don't pass any arg to JChannel it uses the default **udp.xml**. So I checked this at machine level with the commands below: `On Node 1 (10.10.10.82): netcat -u -l 45588` `On Node 2 (10.10.10.83): netcat -u 10.10.10.82 45588`. The connection was established successfully and I can send messages from **Node 2 to Node 1**, but again, that was at machine level, so I then checked with JGroups. No luck: I cannot send messages from one node to another using the jgroups jar. – Fasih Ur Rehman Sep 27 '20 at 06:29
  • I have followed below link to check with JGroups, (https://access.redhat.com/documentation/en-us/red_hat_data_grid/7.0/html/administration_and_configuration_guide/sect-test_multicast_using_jgroups) **Procedure 30.1. Test Multicast Using JGroups** – Fasih Ur Rehman Sep 27 '20 at 06:31
  • I think my port 45588 is open but with this **-mcast_addr 228.8.8.8** I cannot communicate. Is this an issue? – Fasih Ur Rehman Sep 27 '20 at 06:40
  • If multicast doesn't work JGroups won't discover other members and it won't cluster. Check that it is really listening on the mcast port (e.g. `netstat -lpn`) and then check also `route` if the mcasts go to proper iface (you can even create a rule to be sure). Then test with `netcat`, using the mcast address. – Radim Vansa Sep 30 '20 at 07:00
  • Also, are you on Mac or Linux? And have you forced `-Djava.net.preferIPv4Stack=true`? (JGroups can work with IPv6, too, but the defaults are set for IPv4) – Radim Vansa Sep 30 '20 at 07:04