
I followed the instructions from Stuart Douglas's video to enable WildFly to balance requests without needing Apache + mod_cluster, a feature available since WildFly 9.
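For context, the approach from the video boils down to adding an Undertow mod_cluster filter on the load-balancer server. A minimal sketch (the namespace version, filter name, and socket-binding names follow the example project and are assumptions for other installs):

```xml
<!-- domain.xml: Undertow subsystem of the profile used by the load-balancer server -->
<subsystem xmlns="urn:jboss:domain:undertow:3.1">
    <server name="default-server">
        <host name="default-host" alias="localhost">
            <!-- route requests through the mod_cluster filter below -->
            <filter-ref name="load-balancer"/>
        </host>
    </server>
    <filters>
        <!-- advertise-socket-binding must be a multicast binding that the
             backends' mod_cluster subsystem also advertises on -->
        <mod-cluster name="load-balancer"
                     management-socket-binding="http"
                     advertise-socket-binding="modcluster"/>
    </filters>
</subsystem>
```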

It worked just like in the video. But then, instead of adding the third backend server to the same host, I created another host and added the backend3 server to it; that server was also added to the backend-servers group.

So I had the following layout:

Server one (host controller and load balancer):

  • Backend1
  • Backend2

Server two (slave):

  • Backend3
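For reference, the slave's host.xml declares backend3 in the shared server group along these lines (the port offset and auto-start setting are assumptions from my setup):

```xml
<!-- slave host.xml: put backend3 in the same domain server group -->
<servers>
    <server name="backend3" group="backend-servers" auto-start="true">
        <!-- offset avoids port clashes with other servers on this machine -->
        <socket-bindings port-offset="100"/>
    </server>
</servers>
```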

I started the second host as a slave and could access clustering-demo using its IP and backend3's port. In addition, the host controller was able to register the slave:

[Host Controller] 10:05:52,198 INFO  [org.jboss.as.domain.controller] (Host Controller Service Threads - 56) WFLYHC0019: Registered remote slave host "srv217", JBoss WildFly Full 10.0.0.Final (WildFly 2.0.10.Final)  

However, when I accessed the main server, the load was still balanced only to backend1 and backend2. I tried stopping both and leaving only backend3 running, but then I couldn't access clustering-demo through the load balancer anymore.

Does anyone know if additional configuration is required for the load balancer to work with a slave host?

EDIT:

I'm adding my host controller and slave logs.

  • Host controller: http://pastebin.com/nyaDiPzS
  • Slave: http://pastebin.com/kMS72E4U

These lines caught my attention:

[Server:backend2] 08:56:58,956 INFO  [org.infinispan.CLUSTER] (remote-thread--p7-t1) ISPN000310: Starting cluster-wide rebalance for cache clustering-demo.war, topology CacheTopology{id=1, rebalanceId=1, currentCH=DefaultConsistentHash{ns=80, owners = (1)[master:backend2: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[master:backend2: 40+40, master:backend1: 40+40]}, unionCH=null, actualMembers=[master:backend2, master:backend1]}
[Server:backend2] 08:56:59,023 INFO  [org.infinispan.CLUSTER] (remote-thread--p7-t1) ISPN000310: Starting cluster-wide rebalance for cache routing, topology CacheTopology{id=1, rebalanceId=1, currentCH=DefaultConsistentHash{ns=80, owners = (1)[master:backend2: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[master:backend2: 40+40, master:backend1: 40+40]}, unionCH=null, actualMembers=[master:backend2, master:backend1]}
[Server:backend2] 08:56:59,376 INFO  [org.infinispan.CLUSTER] (remote-thread--p7-t2) ISPN000336: Finished cluster-wide rebalance for cache clustering-demo.war, topology id = 1

This seems to confirm that slave:backend3 is not detected: only master:backend1 and master:backend2 appear as cluster members.
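One way to double-check what the domain controller actually sees is the management CLI; a sketch (the controller address is an example, host/server names match the layout above):

```
# connect to the domain controller from the master's WildFly directory
./bin/jboss-cli.sh --connect --controller=192.168.0.10:9990

# then, inside the CLI:
ls /host                                                # both hosts should be listed
/host=srv217/server=backend3:read-attribute(name=server-state)
```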

2 Answers


On your slave host, change the default interface addresses so that they are visible to the master, e.g.:

<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address}"/>
    </interface>
    <interface name="private">
        <inet-address value="${jboss.bind.address}"/>
    </interface>
    <interface name="unsecure">
        <inet-address value="${jboss.bind.address}"/>
    </interface>
</interfaces>

where jboss.bind.address resolves to a real IP address of the slave host. Do the same on the master host.
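If you prefer not to hard-code the address, jboss.bind.address can be supplied at startup; a sketch with example IPs (-b and -bmanagement are shortcuts for jboss.bind.address and jboss.bind.address.management):

```
# master (domain controller + load balancer)
./bin/domain.sh -b 192.168.0.10 -bmanagement 192.168.0.10

# slave, pointing at the master's management address
./bin/domain.sh --host-config=host-slave.xml \
    -b 192.168.0.20 -bmanagement 192.168.0.20 \
    -Djboss.domain.master.address=192.168.0.10
```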

n1cr4m
  • My slave host.xml already has this configuration with its IP address set. – Humberto Ferreira Da Luz Mar 22 '16 at 21:04
  • So, do the same in your domain.xml config. It works for me on two Windows hosts. When all interfaces (master host, slave host and domain controller) have public IPs, the cluster and load balancing should work. – n1cr4m Mar 23 '16 at 12:23
  • I set all interfaces in the master's host.xml and domain.xml and also in the slave's host.xml and slave.xml, but it still doesn't work. I don't know if the issue is related to interfaces, because the slave connects to the master successfully. However, the balancer still doesn't send requests to slave hosts. By the way, it works if I use mod_cluster + Apache as the load balancer. – Humberto Ferreira Da Luz Mar 29 '16 at 17:54
  • So now your slave node is registering correctly in the cluster but only load balancing doesn't work? And to be clear, on the master you need domain.xml and host.xml (or another one), but on the slave host only slave.xml is needed. If you give me your email address I will send you my fully working config for master and slave. – n1cr4m Mar 31 '16 at 10:47
  • Actually, the slave had been registering with the master since the beginning; sorry if I wasn't clear. Please send your config to hfluz@uel.br. Thanks a lot. – Humberto Ferreira Da Luz Apr 01 '16 at 15:12

I am having the same issue. My master and slave log output looks exactly the same.

I followed the video tutorial and the related project on GitHub: https://github.com/stuartwdouglas/modcluster-example

I have set all public IP addresses on the master and slave servers. The host controller registers the slaves, and Infinispan logs cluster-rebalance messages listing the two slaves, but load balancing does not work. Session replication also appears not to work, despite the fact that Infinispan lists all the slaves as members of the cluster.

If I repeat the steps described in the GitHub project mentioned above on a single machine (that is, all servers on the same IP but on different ports), everything works.

I also found this: http://wildfly9.blogspot.bg/2015/10/wildfly-9-reverse-proxy-config-with.html?m=1, where the author mentions adding the public IP addresses as default-host aliases. I did that too, and it still isn't working.
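For completeness, that alias change looks roughly like this in the load balancer's Undertow configuration (I believe the alias attribute takes a space-separated list; the IP is an example):

```xml
<!-- Undertow subsystem of the load-balancer profile -->
<server name="default-server">
    <host name="default-host" alias="localhost 192.168.0.10">
        ...
    </host>
</server>
```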

Can someone point us in the right direction? I presume it is something small but substantial that is missing from the documentation, and there is no adequate demo/tutorial on the net that demonstrates the setup when all the slaves and the load balancer reside on different hosts (different IP addresses).

Alex