In this article, I'd like to show you how to set up WildFly 8 in domain mode and enable clustering, so that we get HA and session replication working among the nodes. It's a step-by-step guide, so you can follow the instructions and build the sandbox yourself.
We need to prepare two hosts (or virtual hosts) to carry out the experiment. We will prepare the two hosts as follows:
- Install Fedora 16 on them (other Linux versions are probably fine as well but I'll use Fedora 16 in this article)
- Make sure that they are in same local network (subnet)
- Make sure that they can access each other via the necessary TCP/UDP ports (it's best to turn off the firewall and disable SELinux during the experiment, or they will cause network problems)
Here are some details on what we are going to do:
- Let's call one host 'master' and the other 'slave'.
- Both master and slave will run WildFly 8: master will run as the domain controller, and slave will run under the domain management of master.
- Apache httpd will run on master, with the mod_cluster module enabled. The WildFly 8 instances on master and slave will form a cluster and be discovered by httpd.
- We will deploy a demo project into the domain, and verify that it is deployed to both master and slave by the domain controller. Thus we will see that domain management provides a single point for managing deployments across multiple hosts in a single domain.
- We will access the cluster URL and verify that httpd has distributed the request to one of the WildFly hosts, so we can see that the cluster is working properly.
- We will make a request of the cluster, and if the request is forwarded to master, we will kill the WildFly process on master. We will then keep making requests of the cluster, and we should see them forwarded to slave without the session being lost. Our goal is to verify that HA is working and sessions are replicated.
- After the previous step has finished, we will bring master back by restarting it. We should see master registered back into the cluster, and slave recognising master as domain controller again and reconnecting to it.
Don't worry if you can't digest all these details right now. Let's move on, and everything will become clear step by step.
First we should download WildFly 8 from the website:
The version I downloaded is JBoss AS 7.1.0.CR1b. Please don't use a version prior to this one, or you will hit this bug when running in clustered mode:
After the download finished, I had the zip file:
Note: The name of your archive will differ slightly due to version naming conventions.
Then I unzipped the package on master and tried a test run:
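As the note above says, your archive name may differ; assuming it is wildfly-8.0.0.Final.zip, the steps look like this:

```shell
# Unzip the distribution and launch WildFly in domain mode
# (file and directory names are assumptions; adjust to your download)
unzip wildfly-8.0.0.Final.zip
cd wildfly-8.0.0.Final
bin/domain.sh
```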
If all is well, we should see WildFly successfully start in domain mode:
Now exit master and let's repeat the same steps on slave. Once we get WildFly 8 running on both master and slave, we can move on to the next step.
In this section we'll set up both master and slave to run in domain mode, and configure master to be the domain controller.
First open the host.xml in master for editing:
The default settings for interface in this file are:
We need to change the address to the management interface so slave can connect to master. The public interface allows the application to be accessed by non-local HTTP, and the unsecured interface allows remote RMI access. My master's IP address is 10.211.55.7, so I change the config to:
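A sketch of the edited section, following the layout of the shipped host.xml (the property names here are the WildFly defaults; double-check them against your own file):

```xml
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:10.211.55.7}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:10.211.55.7}"/>
    </interface>
    <interface name="unsecure">
        <inet-address value="${jboss.bind.address.unsecure:10.211.55.7}"/>
    </interface>
</interfaces>
```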
Now we will set up the interfaces on slave. First we need to remove domain.xml from slave, because slave will not act as domain controller; it will operate under the management of master. I just renamed domain.xml so it won't be processed by WildFly:
|From JBoss AS 7.1.Final you don't need to rename domain.xml anymore.|
Then let's edit host.xml. Similar to the steps on master, open host.xml first:
The configuration we'll use on slave is a little bit different, because we need to let slave connect to master. First we need to set the hostname. We change the name property from:
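The host element sits at the top of host.xml; we change its name attribute. (The namespace version shown here is an assumption; keep whatever your file has.)

```xml
<!-- before -->
<host name="master" xmlns="urn:jboss:domain:2.0">
<!-- after -->
<host name="slave" xmlns="urn:jboss:domain:2.0">
```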
Then we need to modify the domain-controller section so slave can connect to master's management port:
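A sketch of the resulting section, assuming the default native management port 9999:

```xml
<domain-controller>
    <remote host="10.211.55.7" port="9999"/>
</domain-controller>
```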
As we know, 10.211.55.7 is the IP address of master.
Finally, we also need to configure interfaces section and expose the management ports to the public address:
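Mirroring the master configuration, but with slave's own address (property names again follow the WildFly defaults):

```xml
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:10.211.55.2}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:10.211.55.2}"/>
    </interface>
    <interface name="unsecure">
        <inet-address value="${jboss.bind.address.unsecure:10.211.55.2}"/>
    </interface>
</interfaces>
```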
10.211.55.2 is the IP address of the slave. Refer to the domain controller configuration above for an explanation of the management, public, and unsecured interfaces.
|It is easier to turn off all firewalls for testing, but in production, you need to enable the firewall and allow access to the following ports: TBD|
If you start WildFly on both master and slave now, you will see that slave cannot start, with the following error:
This is because we haven't properly set up the authentication between master and slave. Now let's work on it:
In the bin directory there is a script called add-user.sh. We'll use it to add new users to the properties file used for domain management authentication:
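A sketch of the session (the exact prompts vary slightly between versions):

```shell
cd bin
./add-user.sh
# Enter the details of the new user to add.
# Realm (ManagementRealm) : ManagementRealm
# Username : admin
# Password : 123123
```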
As shown above, we have created a user named 'admin' whose password is '123123'. Then we add another user called 'slave':
We will use this user for slave host to connect to master.
|Notice that the username must be equal to the name given in slave's host element. This means that you need a user for each additional host.|
In newer versions of WildFly, the add-user.sh script will let you choose the type of the user. Here we need to choose the 'Management User' type for both the 'admin' and 'slave' accounts:
On slave we need to configure host.xml for authentication. We should change the security-realms section as follows:
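A sketch of the edited section (only the server-identities element is new; the rest follows the shipped defaults):

```xml
<security-realms>
    <security-realm name="ManagementRealm">
        <server-identities>
            <!-- Base64 encoding of the 'slave' user's password on master -->
            <secret value="MTIzMTIz"/>
        </server-identities>
        <authentication>
            <properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/>
        </authentication>
    </security-realm>
</security-realms>
```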
We've added server-identities into security-realm, which is used for authenticating the host when slave tries to connect to master. Because slave's host name is set to 'slave', master will authenticate it against the 'slave' user, so we use that user's password here. The secret value is 'MTIzMTIz', which is the Base64 encoding of '123123'. You can generate this value with a Base64 calculator such as the one at http://www.webutils.pl/index.php?idx=base64.
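The Base64 value can also be generated from the command line with the coreutils base64 tool (note the -n flag, which prevents echo from appending a newline to the input):

```shell
echo -n '123123' | base64
```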
Then in the domain controller section we also need to add the security-realm property:
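The section then looks like this (again assuming the default port 9999):

```xml
<domain-controller>
    <remote host="10.211.55.7" port="9999" security-realm="ManagementRealm"/>
</domain-controller>
```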
This way, the slave host can use the authentication information we provided in 'ManagementRealm'.
Now everything is set for the two hosts to run in domain mode. Let's start them by running domain.sh on both hosts. If everything goes fine, we can see from the log on master:
That means all the configurations are correct and two hosts are running in domain mode now as expected. Hurrah!
Now we can deploy a demo project into the domain. I have created a simple project located at:
We can use the git command to fetch a copy of the demo:
In this demo project we have a very simple web application. In web.xml we've enabled session replication by adding following entry:
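Session replication is switched on, per the servlet specification, by the distributable element:

```xml
<distributable/>
```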
And it contains a jsp page called put.jsp which will put the current time into a session attribute called 'current.time':
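The actual JSP in the demo may differ; a minimal put.jsp doing this could look like:

```jsp
<%@ page import="java.util.Date" %>
<%
    // store the current time into the session under 'current.time'
    session.setAttribute("current.time", new Date());
%>
<%= session.getAttribute("current.time") %>
```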
Then we can obtain this value using get.jsp:
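A minimal get.jsp just reads the attribute back:

```jsp
<%= session.getAttribute("current.time") %>
```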
It's an extremely simple project, but it will help us test the cluster later: we will access put.jsp via the cluster and check that the request is distributed to master, then we disconnect master and access get.jsp. We should see the request forwarded to slave, with the 'current.time' value preserved by session replication. We'll cover this in more detail later.
Let's go back to this demo project. Now we need to create a war from it. In the project directory, run the following command to get the war:
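Assuming the demo is a standard Maven project (an assumption on my part; use your project's own build tool if it differs), the build is:

```shell
mvn package
```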
It will generate cluster-demo.war. Then we need to deploy the war into the domain. First we should access the HTTP management console on master (because master is acting as domain controller):
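By default the console is served on the management interface at port 9990, so the address should be:

```
http://10.211.55.7:9990/console
```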
It will prompt you for an account name and password. We can use the 'admin' account we added a short while ago. After logging in we see the 'Server Instances' window. By default there are three servers listed, which are:
We can see that server-one and server-two have the status 'running' and that they belong to main-server-group; server-three has the status 'idle', and belongs to other-server-group.
All these servers and server groups are defined in domain.xml on master. What we are interested in is the 'other-server-group' in domain.xml:
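From the shipped domain.xml (the JVM settings may differ in your copy):

```xml
<server-group name="other-server-group" profile="ha">
    <jvm name="default">
        <heap size="64m" max-size="512m"/>
    </jvm>
    <socket-binding-group ref="ha-sockets"/>
</server-group>
```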
We can see that this server group uses the 'ha' profile, which in turn uses the 'ha-sockets' socket binding group. It enables all the modules we need to establish the cluster later (including the Infinispan, JGroups, and mod_cluster modules). We will therefore deploy our demo project into a server that belongs to 'other-server-group'; 'server-three' is our choice.
|In newer versions of WildFly, the profile 'ha' changes to 'full-ha':|
Let's go back to the domain controller's management console:
Now server-three is not running, so let's click on 'server-three' and then click the 'start' button at the bottom right of the server list. Wait a moment and server-three should start.
Now we should also enable 'server-three' on slave: from the top of the menu list on the left-hand side of the page, we can see that we are currently managing master. Click on the list, select 'slave', then choose 'server-three'. We are now on the host management page for slave.
Then repeat the steps we used on master to start 'server-three' on slave.
|server-three on master and slave are two different hosts; their names can be different.|
After server-three on both master and slave are started, we will add our cluster-demo.war for deployment. Click on the 'Manage Deployments' link at the bottom of left menu list.
We should ensure that server-three is started on both master and slave.
After entering the 'Manage Deployments' page, click 'Add Content' at the top right-hand corner. Then choose cluster-demo.war, and follow the instructions to add it into our content repository.
Now we can see cluster-demo.war is added. Next we click the 'Add to Groups' button and add the war to 'other-server-group' and then click 'save'.
After a few seconds, the management console will say that the project is deployed into 'other-server-group'.
Please note that we have two hosts participating in this server group, so the project should be deployed in both master and slave now - that's the power of domain management.
Now let's verify this, trying to access cluster-demo from both master and slave:
Now that we have finished the project deployment and seen the usage of the domain controller, we will use these two hosts to establish a cluster.
|Why is the port number 8330 instead of 8080? This has to do with the port-offset property of socket-bindings. Check host.xml on both master and slave:
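The relevant entry looks like this (auto-start is still false at this point):

```xml
<server name="server-three" group="other-server-group" auto-start="false">
    <socket-bindings port-offset="250"/>
</server>
```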
The port-offset is set to 250, so 8080 + 250 = 8330.|
Now we stop the WildFly processes on both master and slave. We have some work left to do on the host.xml configurations. Open host.xml on master, and make some modifications to the servers section:
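After the edit, the servers section on master looks roughly like this (server-one and server-two kept as shipped; auto-start flipped to true on server-three):

```xml
<servers>
    <server name="server-one" group="main-server-group"/>
    <server name="server-two" group="main-server-group" auto-start="true">
        <socket-bindings port-offset="150"/>
    </server>
    <server name="server-three" group="other-server-group" auto-start="true">
        <socket-bindings port-offset="250"/>
    </server>
</servers>
```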
We've set auto-start to true so we don't need to enable it in the management console each time WildFly restarts. Now open slave's host.xml, and modify the server-three section:
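On slave the edited section becomes:

```xml
<server name="server-three-slave" group="other-server-group" auto-start="true">
    <socket-bindings port-offset="250"/>
</server>
```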
Besides setting auto-start to true, we've also renamed 'server-three' to 'server-three-slave'. We need to do this because mod_cluster will fail to register hosts with the same name in a single server group, due to the name conflict.
After finishing the above configuration, let's restart the two WildFly hosts and examine the cluster configuration.
We will use mod_cluster + Apache httpd on master as our cluster controller here. WildFly 8 supports mod_cluster out of the box, so it's the easiest way.
|The WildFly 8 domain controller and httpd don't have to be on the same host. But in this article they are both installed on master for convenience.|
First we need to ensure that httpd is installed:
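On Fedora, httpd can be installed from the package repository:

```shell
sudo yum install -y httpd
```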
And then we need to download a recent version of mod_cluster from its website:
The version I downloaded is:
|Jean-Frederic has suggested using mod_cluster 1.2.x, since 1.1.x is affected by CVE-2011-4608. With mod_cluster 1.2.0 you need to add EnableMCPMReceive in the VirtualHost.|
Then we extract it into:
Then we edit httpd.conf:
We should add the modules:
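For mod_cluster 1.2.x the module set is as follows (the paths assume you copied the .so files into httpd's modules directory):

```apache
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
```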
Note that we should comment out the proxy_balancer_module:
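That is, prefix the existing line with a '#':

```apache
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
```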
This is because it would otherwise conflict with the cluster module. Then we need to make httpd listen on the public address so we can perform the testing. Because we installed httpd on the master host, we know its IP address:
Then we do the necessary configuration at the bottom of httpd.conf:
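A sketch of a minimal mod_cluster virtual host (values such as the balancer name are placeholders; adjust the addresses to your network):

```apache
Listen 10.211.55.7:80

<VirtualHost 10.211.55.7:80>
    <Directory />
        Order deny,allow
        Deny from all
        Allow from 10.211.55.
    </Directory>

    KeepAliveTimeout 60
    MaxKeepAliveRequests 0

    ManagerBalancerName mycluster
    AdvertiseFrequency 5
    EnableMCPMReceive
</VirtualHost>
```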
|For more details on mod_cluster configuration please see this document:|
All being well, we can start the httpd service:
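On Fedora:

```shell
sudo service httpd start
```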
Now we access the cluster:
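Since httpd is listening on master's public address, the URL is:

```
http://10.211.55.7/cluster-demo/put.jsp
```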
We should see from the WildFly log that the request is distributed to one of the hosts (master or slave). In this instance, the request is sent to master:
Now I disconnect master by using the management interface. Select 'runtime' and the server 'master' in the upper corners.
Select 'server-three' and click the stop button. The active icon should change.
|Killing the server by using system commands will cause the Host-Controller to restart the instance immediately!|
After a few seconds, access the cluster:
Now the request should be served by slave, which should be borne out in slave's log:
And from get.jsp we should see that the time returned is the same as the one set by 'put.jsp'. This proves that the session has been correctly replicated to slave.
Now we restart master and should see that the host is registered back to the cluster.
|It doesn't matter if you found the request is sent to slave the first time. In that case, just disconnect slave and perform the same test: the request should be sent to master instead. The point is that we should see the request redirected from one host to another and the session is maintained.|
Wolf-Dieter Fink contributed the updated add-user.sh usage and host.xml configuration as of 7.1.0.Final.
Jean-Frederic Clere provided the mod_cluster 1.2.0 usage.
Misty Stanley-Jones gave many suggestions and helped make this document readable.