
Overview

In addition to load balancing the worker nodes, sometimes we also want to cluster the load balancers themselves for high availability. For example, we may want to run two httpd servers with mod_cluster, both connected to the same JBoss EAP6 server nodes behind them. In this way, when one httpd server crashes, the other httpd server can still serve user requests; the two httpd servers form a cluster of load balancers.

In addition, we need to load balance these two httpd load balancers themselves. There are many ways to achieve this. The solution provided by Red Hat is to use LVS+Heartbeat. LVS is an IP-level load balancer, and Heartbeat is a supplement to LVS that helps to detect the liveness of the nodes behind it. You can write some scripts for Heartbeat to use to judge the liveness of mod_cluster.

Here is the deployment diagram that shows the above structure:

The usage and clustering of LVS+Heartbeat are out of the scope of this document. You can refer to the Red Hat official documentation for details. (By the way, Red Hat provides a GUI management tool called Piranha that can help you manage LVS+Heartbeat.)

In this document I'd like to show you how to set up multiple mod_cluster instances in a clustering environment. But before that, we need a basic understanding of the mod_cluster design.

mod_cluster design

mod_cluster uses three channels for its functions:

  • An advertising channel that the mod_cluster module in httpd uses to publish its address.
  • A management channel that controls the worker nodes.
  • A proxy channel that is used to forward user requests from httpd to JBoss EAP6.

In this article, we will focus on the advertising channel.

Advertising Channel

The advertising channel is an IP multicast group, and by default its address is 224.0.1.105:23364. You can find this default setting in jboss-eap-6.1/domain/configuration/domain.xml:
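The relevant entry is a socket binding along these lines (a sketch of the EAP 6.1 default; it appears in the ha/full-ha socket binding groups in my copy, so check your own domain.xml):

```xml
<!-- Default advertise multicast group used by the modcluster subsystem -->
<socket-binding name="modcluster" port="0"
                multicast-address="224.0.1.105" multicast-port="23364"/>
```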

On the httpd side, you don't have to do any configuration to use this default setting.
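If you want to make the defaults explicit anyway, the advertise directives can be spelled out. This sketch simply restates mod_cluster's built-in defaults:

```apache
# These match mod_cluster's built-in defaults, shown here only for clarity.
ServerAdvertise On
AdvertiseGroup 224.0.1.105:23364
```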

The mod_cluster library on the httpd side publishes its management channel address in this multicast group. Here is a sample message that mod_cluster on the httpd side publishes to the multicast group:
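An advertise datagram is a small HTTP-like message; the following is a sketch of its shape (the Date, Sequence, Digest, and Server values here are placeholders, not a real capture):

```
HTTP/1.0 200 OK
Date: Thu, 12 Dec 2013 10:00:00 GMT
Sequence: 121
Digest: 4dedd3761d451227f36534b63ca2a8a1
Server: b23584e2-314f-404d-8fde-05069bfe5dc7
X-Manager-Address: 10.0.1.44:6666
X-Manager-Url: /b23584e2-314f-404d-8fde-05069bfe5dc7
X-Manager-Protocol: http
X-Manager-Host: 10.0.1.44
```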

In the above message, we can see that "X-Manager-Address" is "10.0.1.44:6666", which corresponds to the settings in my httpd configuration:
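A typical mod_cluster virtual host for this looks like the sketch below (the module list and paths depend on your httpd build; the access-control lines use the httpd 2.2 style that matched EAP6-era installations):

```apache
LoadModule slotmem_module       modules/mod_slotmem.so
LoadModule manager_module       modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module     modules/mod_advertise.so

Listen 10.0.1.44:6666
<VirtualHost 10.0.1.44:6666>
    # Accept MCMP (Mod Cluster Management Protocol) messages from the EAP6 nodes.
    EnableMCPMReceive

    <Location />
        Order deny,allow
        Allow from all
    </Location>
</VirtualHost>
```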

From the above configuration we can see that the address "10.0.1.44" is the IP address of the httpd server, and port 6666 is used as the management channel, because we have set "EnableMCPMReceive" in this virtual host.

Because the modcluster subsystem on the EAP6 side also joins this channel, it can fetch the advertising messages published by httpd. After EAP6 has located the address of mod_cluster, it negotiates with httpd using the management channel. Once the connection with httpd is established, httpd works as the EAP6 servers' load balancer and forwards user requests to EAP6 through the proxy channel.

So the key point is: mod_cluster on the httpd side uses the advertising channel to publish its address, and the advertising channel is actually an IP multicast group.

This is the key to setting up multiple mod_cluster instances. It is actually no different from using a single mod_cluster in a clustering environment: we just need to configure all the mod_cluster instances on the httpd side and all the modcluster subsystems on the EAP6 side to use the same advertising channel. Then each httpd instance will publish its mod_cluster management channel address in the same multicast group, and the EAP6 servers will find multiple httpd servers acting as load balancers in the group.

As we know, the default settings of the advertising channel on both the httpd and EAP6 sides are the same, "224.0.1.105:23364", so we really don't need to do any additional work to enable it. Now let's see the example in my local environment.

An example of multiple mod_clusters

I've set up a sample deployment in my local environment. There are two EAP6 servers and two httpd servers. The two EAP6 servers are running in domain mode, so they have the modcluster subsystem enabled. The two httpd servers have mod_cluster installed.

All four servers are on different machines, so they have different IP addresses. Here are their hostnames and addresses:

  • 10.0.1.44 fedora: an httpd server is installed on this machine, and its mod_cluster module is properly configured to work as a load balancer.
  • 10.0.1.45 fedora2: an httpd server is installed on this machine, and its mod_cluster module is properly configured to work as a load balancer.
  • 10.0.1.46 master: an EAP6 server is installed on this machine and running in domain mode, acting as the domain controller.
  • 10.0.1.47 slave: an EAP6 server is installed on this machine and running in domain mode, accepting management from the domain controller on master.

The deployment diagram is shown in the following:

As you can see, we are using two httpd servers working as load balancers in this clustering environment: "fedora" and "fedora2". Here are the mod_cluster settings in the httpd servers on these two machines. First is "fedora":
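A sketch of the relevant part of the fedora configuration (the surrounding module loading is omitted and depends on your installation):

```apache
Listen 10.0.1.44:6666
<VirtualHost 10.0.1.44:6666>
    # fedora accepts MCMP registrations from the EAP6 nodes on this address.
    EnableMCPMReceive

    <Location />
        Order deny,allow
        Allow from all
    </Location>
</VirtualHost>
```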

Then it's "fedora2":
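The fedora2 configuration is the same sketch with its own address:

```apache
Listen 10.0.1.45:6666
<VirtualHost 10.0.1.45:6666>
    # fedora2 accepts MCMP registrations from the EAP6 nodes on this address.
    EnableMCPMReceive

    <Location />
        Order deny,allow
        Allow from all
    </Location>
</VirtualHost>
```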

After starting these two httpd servers, and then monitoring the messages in the advertising channel on one of the EAP6 server machines, I can see the following messages:
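If you want to watch the advertise channel yourself, a small listener like this sketch will join the default multicast group and print each httpd server's management address (`watch_advertise_channel` and `manager_address` are illustrative helper names, and the group/port values assume the defaults discussed above):

```python
import socket

# Defaults for the mod_cluster advertise channel; adjust if you changed
# AdvertiseGroup / the modcluster socket binding.
ADVERTISE_GROUP = "224.0.1.105"
ADVERTISE_PORT = 23364

def manager_address(datagram: bytes) -> str:
    """Pull the X-Manager-Address header out of one advertise datagram."""
    for line in datagram.decode("ascii", errors="replace").splitlines():
        if line.lower().startswith("x-manager-address:"):
            return line.split(":", 1)[1].strip()
    return ""

def watch_advertise_channel() -> None:
    """Join the multicast group and print each advertised manager address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", ADVERTISE_PORT))
    # Join the advertise multicast group on the default interface.
    mreq = socket.inet_aton(ADVERTISE_GROUP) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(4096)
        print("%s advertises manager at %s" % (addr[0], manager_address(data)))
```

Running `watch_advertise_channel()` on one of the EAP6 machines while both httpd servers are up should print one line per advertise datagram, covering both 10.0.1.44 and 10.0.1.45.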

We can see the two httpd servers from the messages in the advertising channel. These two servers are visible to the two EAP6 servers because the EAP6 servers have subscribed to the same default multicast group, and they will register themselves with both of the httpd servers.

By looking at the mod_cluster management consoles of these two httpd servers, we can see that the EAP6 servers are registered on both of them:
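For reference, the management console on each httpd server is typically exposed through the mod_cluster-manager handler, along these lines (/mod_cluster-manager is the conventional path; tighten the access rules in production):

```apache
<Location /mod_cluster-manager>
    SetHandler mod_cluster-manager
    Order deny,allow
    Allow from all
</Location>
```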

That means I can access either of these two load balancers to reach my EAP6 cluster:
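For example, with a web application deployed to the EAP6 domain under a hypothetical context path /myapp, both of these URLs would reach the same cluster:

```
http://10.0.1.44/myapp/
http://10.0.1.45/myapp/
```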

Conclusion

In this article we have learned how to set up two httpd+mod_cluster instances in a clustering environment. The key point is to put the EAP6 servers and the mod_cluster modules in httpd into the same advertising channel, so the EAP6 servers can register with both mod_cluster instances.

I haven't touched on how to load balance the two httpd servers in this article; nevertheless, you can look into solutions like LVS+Heartbeat to solve this problem.

Labels: eap6, mod_cluster, modcluster, jboss, load, balancing, ha, high, availability, eap, wildfly
  1. Dec 17, 2013

    Hi Li,

    Thanks a lot for your effort. This one is really helpful.

    I am facing some problem to post Jms message from Non-cluster Jboss server to the above cluster setup JMS Queue from the Java Code. I have attached my Client code, Mod_cluster config & Jboss configuration file for your reference. Please guide me, how to discover the Cluster group address from Java code,if possible please share me some sample code which I can use to post message to the Cluster JMS Queue.

    ClusterProgramTest.zip

    Thanks & Regards,

    Jonbon Dash

    1. Dec 18, 2013

      Hi Jonbon,

      The HornetQ subsystem uses its own clustering mechanism, and it is not related to our httpd+mod_cluster load balancers.

      Here is a good article on HornetQ clustering: http://blog.akquinet.de/2012/11/24/clustering-of-the-messaging-subsystem-hornetq-in-jboss-as7-and-eap-6/

      If you have any questions on using HornetQ, you can ask questions here: https://community.jboss.org/en/hornetq?view=discussions

      Hope the above info is useful to you.

  2. Jul 16, 2014

    May I know whether httpd+mod_cluster#2 will get the same worker base on its sticky session when httpd+mod_cluster#1 goes down? To make it happen, httpd+mod_cluster#2 must have the same sticky table as what httpd_mod_cluster#1 has and that is what I don't know. I guess that there should be no impact if I enable session-replication across all JBoss nodes. I just want to know how it works on such multiple httpd+mod_cluster in clustering environment. Thanks.