
I'm trying to set up a WildFly 8.1.0.Final domain (full-ha profile) with one master and two slaves, load balanced by mod_cluster.

My environment:

1) host master: VPS (DigitalOcean), Ubuntu 14.04 LTS x64, WildFly 8.1.0.Final and Apache Web Server 2.4.7 with mod_cluster 1.3.1.Alpha3-SNAPSHOT;
2) host slave1: VPS (DigitalOcean), Ubuntu 14.04 LTS x64 and WildFly 8.1.0.Final;
3) host slave2: VPS (DigitalOcean), Ubuntu 14.04 LTS x64 and WildFly 8.1.0.Final.

I had to compile mod_cluster myself because version 1.2.6 is incompatible with Apache Web Server 2.4.7.

I see the following errors:

a) on host master (/var/log/apache2/error.log): "(111)Connection refused: AH00957: ajp: attempt to connect to host_slave1 failed"

b) on host slave1 (/opt/wildfly/domain/configuration/servers/server-one/server.log): "2014-09-18 20:50:55,169 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to host_master, configuration will be reset: MEM: Can't read node"

So the load balancing virtual host with mod_cluster is unable to connect to hosts slave1 and slave2.

How can I solve this issue?

cviniciusm

4 Answers


First of all, you could now use mod_cluster 1.3.1.Final, which is fully integrated with Apache HTTP Server 2.4.x.

The answer

The problem is definitely in your network/host isolation. It is not enough that your worker node can reach the VirtualHost with EnableMCPMReceive; your Apache HTTP Server must also be able to reach back to the worker.

Take a look at the IP address (hostname) that host_slave1 reported to the Apache HTTP Server and make sure host_slave1 can actually be contacted on that address:port from the Apache HTTP Server machine.
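For instance, if the slave's public interface is bound to 127.0.0.1 or to an address the Apache machine cannot route to, registration may appear to work while the reach-back fails. Below is a minimal sketch of the relevant binding in the slave's host-slave.xml; 10.1.2.3 is just a placeholder for a routable address of host_slave1:

    <!-- host-slave.xml on host_slave1: bind the public interface to an address -->
    <!-- that the Apache HTTP Server machine can actually reach (placeholder IP) -->
    <interfaces>
        <interface name="public">
            <inet-address value="${jboss.bind.address:10.1.2.3}"/>
        </interface>
    </interfaces>

The same effect can be achieved by passing -Djboss.bind.address=10.1.2.3 (again, a placeholder) when starting the slave's domain.sh.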

One can easily verify the status in the mod_cluster manager console, enabled in a virtual host with:

    <Location /mod_cluster_manager>
      SetHandler mod_cluster-manager
      # This is super sensitive, don't open to the world...
      Require ip 127.0.0.1
    </Location>
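
Once that location is in place, a quick check from the Apache machine itself (assuming the manager VirtualHost listens on port 80; adjust the URL to your setup) shows whether the workers have registered:

host_master$ curl http://127.0.0.1/mod_cluster_manager

If host_slave1 and host_slave2 do not appear there as nodes, the workers never registered; if they appear but requests still fail, the reach-back address is the likely culprit.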

HTH


"your Apache HTTP Server must be able to reach back to the worker."

ajping is a small script that is easy to install and use. Install it on the load balancer and invoke:

loadbalancer$ ajping host_slave1:8009
Reply from 172.26.XXX.XXX: 7 bytes in 0.002 seconds

This verifies the load balancer can talk AJP to the node.
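
If ajping is not at hand, a plain TCP check gives a similar, if cruder, answer; 8009 is assumed to be the worker's AJP port here, so adjust it for any port offset your server group applies:

loadbalancer$ nc -vz host_slave1 8009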

DaveC

Your problem is that the JBoss server could not send data to the Apache server, and as a result Apache could not redirect your request to the JBoss application.

The reason could be any of several things: perhaps you specified HTTPS and the certificate is not configured, or perhaps there is an error in your JBoss/WildFly configuration. The simplest approach is to follow the example on the mod_cluster website, or to post your standalone.xml or domain.xml and your Apache configuration.
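For comparison, here is a rough sketch of what the mod_cluster subsystem in the full-ha profile of domain.xml can look like when pointed at a static proxy instead of relying on multicast advertise. host_master:6666 is only a placeholder for the address and port your EnableMCPMReceive VirtualHost listens on, and the subsystem namespace (and whether it accepts proxy-list) should be checked against what your domain.xml already declares:

    <!-- domain.xml, full-ha profile: sketch of a static proxy configuration -->
    <subsystem xmlns="urn:jboss:domain:modcluster:1.2">
        <mod-cluster-config advertise-socket="modcluster" connector="ajp"
                            proxy-list="host_master:6666" advertise="false">
            <dynamic-load-provider>
                <load-metric type="busyness"/>
            </dynamic-load-provider>
        </mod-cluster-config>
    </subsystem>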

Since two years have passed, I suspect the asker has already found the solution; it would be worth posting it here for people who hit the same problem in the future...

cyril
  • Add 'PersistSlots On' to your Apache HTTPD mod_cluster.conf file.

  • The metadata associated with nodes, aliases and contexts is sent by the worker nodes during the registration process and subsequently updated via messages, but it is not persisted by default. Thus, if an httpd node is stopped and then restarted, it loses the metadata and "forgets" about the backend nodes unless the worker nodes explicitly re-register.

During that time, because httpd does not have complete knowledge of each EAP node in the backend cluster, it is not able to load-balance correctly when it receives valid (but unknown to it) JVM routes.

This behavior can be avoided by setting PersistSlots to "on".

There are no damaging side-effects to setting "PersistSlots" to "on".

LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so

Listen *:6666
# The entry to add:
PersistSlots On

<VirtualHost *:6666>
.
.
.
</VirtualHost>