
RHQ 4.9

Advanced Build Notes

Below are some notes and information regarding the RHQ build system. They provide some tips that should enable you to more efficiently build and develop the RHQ platform. If you are looking for basic step-by-step instructions on how to build RHQ, see the Building RHQ page.

settings.xml

You can customize how Maven performs its builds by creating a settings.xml file and placing it in your $HOME/.m2 directory. There is an example settings.xml checked into the git repository at etc/m2/settings.xml. Please note that the file in the etc/m2 directory is only an example - for it to take effect, you must put it in a location where Maven will look for it. By default, that is $HOME/.m2/settings.xml. Therefore, if you need to customize your Maven settings, copy etc/m2/settings.xml to your $HOME/.m2 directory.

If you look at that file, you'll see you can do things like:

  • Enable the dev profile by default (see below for more information on this RHQ-defined profile).

  • Disable the tests by default (Maven's default is to always enable tests unless you specify -Dmaven.test.skip or -DskipTests; you can define this in your settings.xml if you do not want to run the tests by default - see the sketch after this list)

    • Note: if you want to disable tests, use -DskipTests instead of -Dmaven.test.skip. Some modules in RHQ produce test JARs on which other modules depend. If you build with -Dmaven.test.skip, those test JARs are not produced and the build will fail with an error message about missing dependencies. -Dmaven.test.skip causes Maven to skip both compiling and executing tests, whereas -DskipTests causes Maven to bypass the execution of tests but still compile them.

  • Define locations of several databases, if you want to compile and test with different databases

  • Define locations of additional Maven repositories

  • Define the location of your external RHQ Server location (rhq.containerDir - see below for more)

  • etc.
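For example, here is a minimal sketch of a settings.xml that activates the dev profile and disables test execution by default (the skip-tests profile id here is arbitrary; skipTests is the standard Surefire property, as noted below):

   <settings>
      <profiles>
         <profile>
            <id>skip-tests</id>
            <properties>
               <skipTests>true</skipTests>
            </properties>
         </profile>
      </profiles>
      <activeProfiles>
         <activeProfile>dev</activeProfile>
         <activeProfile>skip-tests</activeProfile>
      </activeProfiles>
   </settings>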

Preparing To Run Tests

If you want to run the unit and integration tests, you should not specify -DskipTests on the mvn command line. However, before you run the tests, you should ensure your settings.xml is configured. To begin running the tests successfully (assuming you've pulled down the source for RHQ):

  1. Copy the default Maven settings override file from your RHQ source working copy (etc/m2/settings.xml) to the default Maven home location ($HOME/.m2/). If that directory does not already exist, you will need to create it (see the sketch after this list).

  2. Modify your $HOME/.m2/settings.xml in the following ways:

    • Make sure an active database profile (e.g. <activeProfile>postgres</activeProfile>) is uncommented for your chosen database, and only for that database. In other words, only one Postgres or Oracle profile should be active at any one time.

    • Modify the <rhq.rootDir> property (under <profiles><profile><properties>, e.g. <rhq.rootDir>/home/spinder/workspace/rhq</rhq.rootDir>) so its value points to your specific RHQ root directory.

    • If you do not want to specify -Pdev every time on your Maven command line, you can put an uncommented <activeProfile>dev</activeProfile> element in the 'active profiles' section. This is optional.
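For example, step 1 on a UNIX-like system looks like this (substitute your actual working copy path for <rhq-working-copy-root>):

mkdir -p $HOME/.m2
cp <rhq-working-copy-root>/etc/m2/settings.xml $HOME/.m2/settings.xml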

Once you have a properly configured settings.xml file, you can run the tests by simply not specifying the -Dmaven.test.skip=true argument on the mvn command line.

Incidentally, if you specified <skipTests>true</skipTests> in your settings.xml, you will never run the tests, even if you do not specify -DskipTests=true on the mvn command line.

Integration tests

The integration-tests module is meant for integration tests that require third-party applications to be available - for example, the AS7 plugin tests require that an AS7 instance is running in domain mode.

As not every user has AS7 available when building RHQ, the integration-tests module is disabled by default. It can be enabled either by using the integration-tests profile or by setting the integration.tests property, as in

mvn -Pdev -Dintegration.tests test

or
mvn -Pdev,integration-tests test

Performing Full Builds and Module Specific Builds

The first time you build RHQ, you perform a "full" build. This means you run the "mvn" command from the root project directory (what is termed "<rhq-working-copy-root>"). After that, as you are developing within different subsystems, you might not want to perform full builds just to test your changes because full builds typically take a minute or more to complete. You'll want to build the specific modules you are changing. This goes much faster because you are only building a subset of the source code. If you are, for example, only modifying domain objects, you'll want to do a "mvn" build inside the "<rhq-working-copy-root>/modules/core/domain" module. If you want to deploy your changes to an already installed development server (aka a "dev container"), you can build this module using the -Pdev profile. Read more below about Maven profiles and how they speed up RHQ development considerably.

When you do a full build, you must do it from the root directory (i.e. <rhq-working-copy-root>/). Do not run "mvn install" from <rhq-working-copy-root>/modules because that will not perform a complete full build - specifically, if a 3rd party dependency has been updated since your last build, you will not pick up the changes to the dependencies. Building out of root is required to rebuild the project metadata that contains the dependency version information.

If you are switching between branches that change the version you are going to build (such as going from master branch to an older release branch, e.g. going from 4.1.0 to 4.0.0), you should manually delete all target directories using your standard operating system commands. This is to avoid having to work around dependency issues when maven attempts to clean. On UNIX, you can do this manual removal of all target directories by doing this:

cd <rhq-working-copy-root>
find . -name target | xargs rm -rf

Purging and Updating the Database Schema

Sometimes when developing, you need to completely purge your database schema of all data. This is most helpful when writing custom plugins and you are changing a lot of the resource hierarchy information and metadata. If you want to delete all the data from your database but keep the schema intact, execute the following commands:

cd <rhq-working-copy-root>/modules/core/dbutils
mvn -Ddbsetup install

-Ddbsetup tells the dbutils module to uninstall any old schema currently existing and install a new schema. This essentially purges all data but rebuilds the schema. In the end, you will have all the RHQ database tables created but they will be empty of all data.

If you already have a database schema and you want to keep the data you have, but you need to upgrade the schema to pick up some changes another developer made, you can use -Ddbsetup-upgrade instead of -Ddbsetup.
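By analogy with the commands above, a schema upgrade that preserves your existing data would look like:

cd <rhq-working-copy-root>/modules/core/dbutils
mvn -Ddbsetup-upgrade install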

The build uses two different DBs - one for tests and the other for the dev-container. This way you know stuff you're doing in your dev-container will not interfere with tests and vice-versa. The dbutils module can be used to update either of these DBs. The DB it uses is defined by the "db" sysprop (e.g. -Ddb=test or -Ddb=dev). If the dev profile is active, the default value for the "db" sysprop is "dev", and otherwise it is "test".

The rhq.test.ds.* props define the test DB, and the rhq.dev.ds.* props define the dev DB. The defaults are set as follows in the root pom:

<!-- defaults for datasource used by integration tests -
     these may be overridden in ~/.m2/settings.xml -->
<rhq.test.ds.connection-url>jdbc:postgresql://127.0.0.1:5432/rhq</rhq.test.ds.connection-url>
<rhq.test.ds.driver-class>org.postgresql.Driver</rhq.test.ds.driver-class>
<rhq.test.ds.xa-datasource-class>org.postgresql.xa.PGXADataSource</rhq.test.ds.xa-datasource-class>
<rhq.test.ds.user-name>rhqadmin</rhq.test.ds.user-name>
<rhq.test.ds.password>rhqadmin</rhq.test.ds.password>
<rhq.test.ds.type-mapping>PostgreSQL</rhq.test.ds.type-mapping>
<rhq.test.ds.server-name>127.0.0.1</rhq.test.ds.server-name>
<rhq.test.ds.port>5432</rhq.test.ds.port>
<rhq.test.ds.db-name>rhq</rhq.test.ds.db-name>
<rhq.test.ds.hibernate-dialect>org.hibernate.dialect.PostgreSQLDialect</rhq.test.ds.hibernate-dialect>
<rhq.test.quartz.driverDelegateClass>org.quartz.impl.jdbcjobstore.PostgreSQLDelegate</rhq.test.quartz.driverDelegateClass>
<rhq.test.quartz.selectWithLockSQL>SELECT * FROM {0}LOCKS ROWLOCK WHERE LOCK_NAME = ? FOR UPDATE</rhq.test.quartz.selectWithLockSQL>
<rhq.test.quartz.lockHandlerClass>org.quartz.impl.jdbcjobstore.StdRowLockSemaphore</rhq.test.quartz.lockHandlerClass>

<!-- defaults for datasource used by the dev container build (see dev docs on the 'dev' profile) -
     these may be overridden in ~/.m2/settings.xml -->
<rhq.dev.ds.connection-url>jdbc:postgresql://127.0.0.1:5432/rhqdev</rhq.dev.ds.connection-url>
<rhq.dev.ds.driver-class>org.postgresql.Driver</rhq.dev.ds.driver-class>
<rhq.dev.ds.xa-datasource-class>org.postgresql.xa.PGXADataSource</rhq.dev.ds.xa-datasource-class>
<rhq.dev.ds.user-name>rhqadmin</rhq.dev.ds.user-name>
<rhq.dev.ds.password>rhqadmin</rhq.dev.ds.password>
<rhq.dev.ds.password.encrypted>1eeb2f255e832171df8592078de921bc</rhq.dev.ds.password.encrypted>
<rhq.dev.ds.type-mapping>PostgreSQL</rhq.dev.ds.type-mapping>
<rhq.dev.ds.server-name>127.0.0.1</rhq.dev.ds.server-name>
<rhq.dev.ds.port>5432</rhq.dev.ds.port>
<rhq.dev.ds.db-name>rhqdev</rhq.dev.ds.db-name>
<rhq.dev.ds.hibernate-dialect>org.hibernate.dialect.PostgreSQLDialect</rhq.dev.ds.hibernate-dialect>
<rhq.dev.quartz.driverDelegateClass>org.quartz.impl.jdbcjobstore.PostgreSQLDelegate</rhq.dev.quartz.driverDelegateClass>
<rhq.dev.quartz.selectWithLockSQL>SELECT * FROM {0}LOCKS ROWLOCK WHERE LOCK_NAME = ? FOR UPDATE</rhq.dev.quartz.selectWithLockSQL>
<rhq.dev.quartz.lockHandlerClass>org.quartz.impl.jdbcjobstore.StdRowLockSemaphore</rhq.dev.quartz.lockHandlerClass>

You can of course override these in your settings.xml. If you really wanted to, you could even make the two sets of props point at the same DB.

If you are using Postgres, dbutils can also be used to create the DB user and schema, e.g.:

mvn -Ddb=dev -Ddbreset

would drop and create the dev DB schema and then run dbsetup to populate it. And:

mvn -Ddb=test -Ddbreset

would drop and create the test DB schema and then run dbsetup to populate it.

Updating the Storage Node DB (4.8+)

RHQ Storage Nodes are introduced in version 4.8. They use Cassandra to store metric data in a scalable fashion. In development environments it may be desirable to also update or reset the storage DB when updating or resetting the RDB. To extend the operation to the storage DB, add -Dstorage-schema. There are a few rules:

  • The storage node must be running because it must interact with the database.

    • Note, it is not useful or valid to use this option before installing the storage node, like on a fresh build.

  • -Pdev must be specified.

    • It is only useful for dev builds

  • Only -Ddbreset and -Ddbsetup-upgrade are recognized, not -Ddbsetup

So, to recreate your dev db and your storage schema:

mvn -Pdev -Ddbreset -Dstorage-schema

Building an upgrade database

There are times, particularly for testing, when you will want to build a database that is upgraded from some past release. In the dbutils module we can generate a JON 2.3.1 database and then upgrade it to whatever is in HEAD.

mvn -Ddbreset -Djon.release=2.3.1 -Ddb=test

The above command line does several things. First, it drops and recreates the test database. Secondly, it runs the JON 2.3.1 dbsetup scripts against the database. The scripts include both the schema and data scripts. Lastly, the current dbupgrade script is run against the database.

You can easily add support for additional dbsetup scripts from other releases by following these steps (a worked example follows the list):

  • Create the directory dbutils/src/main/scripts/dbsetup/<release> where <release> is the release number you are targeting.

  • Copy the dbsetup schema and data files into dbutils/src/main/scripts/dbsetup/<release>/. The file names must conform to the following naming conventions:

    • db-schema-combined-<RELEASE>.xml

    • db-data-combined-<RELEASE>.xml
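The existing JON 2.3.1 support used by the command above, for example, should follow this convention:

dbutils/src/main/scripts/dbsetup/2.3.1/db-schema-combined-2.3.1.xml
dbutils/src/main/scripts/dbsetup/2.3.1/db-data-combined-2.3.1.xml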

Building Oracle without Running Tests or Validating Schemas

Run the following command to build RHQ for Oracle without running tests or validating schemas:

mvn --settings settings.xml --activate-profiles enterprise,dist,ojdbc-driver --errors --debug -Ddbsetup-do-not-check-schema=true \
   -DskipTests -Drhq.test.db.type=oracle -Dmaven.repo.local=${WORKSPACE}/.m2/repository clean install

Maven Profiles

RHQ is separated into several Maven modules. Examples of these modules are the core-domain module and the server-jar module. Each module has a Maven pom.xml that defines metadata about that module (such as its name, version and dependencies). When you run "mvn", it builds one or more of these modules depending on which module your current working directory is in and which Maven profiles you have enabled. See Maven Profiles and Properties for a description of the different Maven profiles RHQ has created. For information on how to determine which profiles are activated for a given mvn run, see http://www.sonatype.com/books/mvnref-book/reference/profiles-sect-activation.html.

The enterprise Profile

The RHQ Maven build infrastructure defines a Maven profile called enterprise. Enabling it builds the fully contained and ready-to-run RHQ Server container (which includes the JBoss Application Server and all the custom configuration and deployment files that go with it).

To build the RHQ Server container, you enable this enterprise profile by passing the command line option -Penterprise to the mvn executable when building from the root module: e.g. mvn -Penterprise install. -Penterprise is only valid when building from the root module.

In order to understand things like profiles and modules, you should be familiar with Maven. Read the Maven documentation for more information.

The enterprise profile is very simple - all it does is enable the building of the modules/enterprise/server/appserver Maven module when building RHQ from the root module. You get the same effect as -Penterprise if you were to change your current directory to modules/enterprise/server/appserver/ and execute mvn install.

After you have built the RHQ Server container, you will find it under the directory modules/enterprise/server/appserver/target/rhq-server-<version>. This is a fully contained and ready-to-run RHQ Server. See Building RHQ#Run RHQ for the detailed steps on running the RHQ Server from this location.

Note that this modules/enterprise/server/appserver module will, by default, build an RHQ Server that needs to have its installer run. This is because the user needs to tell the RHQ Server things like database connection information, the IP address that the RHQ Server should bind to, etc. See Running the Installer for information on the installation process.

If you are developing RHQ, you usually build with the -Pdev profile so you can build a "predeployed" RHQ Server container that doesn't require the installer to be manually run by you (it will run it automatically under the covers). See below for information on this dev profile.

The dev Profile

The RHQ Maven build infrastructure defines a Maven profile called dev. You typically enable this profile when you are developing RHQ and building it often. The dev profile helps speed up the building process and will copy the RHQ modules' build artifacts to an external RHQ Server location, allowing you to have an RHQ Server that you constantly update so you can avoid having to build a full RHQ Server every time you want to run it.

When you first built RHQ, you probably enabled both the enterprise and dev profiles by passing to mvn the command line option -Penterprise,dev (which is the same as if you specified -Penterprise -Pdev). Because you enabled the enterprise profile, you told Maven to build the modules/enterprise/server/appserver module (see the section above for more info on this). But because you also specified the dev profile, you told Maven to take the RHQ Server container that the enterprise profile built and copy it to an external container directory (by default, it will be a new directory under your <rhq-working-copy-root> directory called dev-container). This external container directory (<rhq-working-copy-root>/dev-container by default) is a fully contained and ready-to-run RHQ Server and you configure and run it like any other.

Your $HOME/.m2/settings.xml Maven configuration file can be used to tune how certain things are built. In order to use the dev profile, you should set the rhq.rootDir property to the full path of the directory where RHQ <rhq-working-copy-root> is checked out (e.g. C:/Projects/rhq-src). The dev profile will then use "<rhq.rootDir>/dev-container" as the external container location. Alternatively, if you want your external container to live somewhere other than under the RHQ <rhq-working-copy-root> directory, you can set the rhq.containerDir property to the full path of the directory where you want your external container to live.

Now that you have built your container, you do not have to build it again (unless, of course, something in the container module changes, in which case you will need to rebuild it). Now you can simply build with the -Pdev profile enabled, but you do not need to enable the -Penterprise profile. All of the RHQ Maven modules have rules defined that run when the dev profile is enabled - usually this means the Maven module will simply copy its build artifacts to your external container location (i.e. your rhq.containerDir).

Let's go over an example of how this helps and speeds development. Suppose you have already built your RHQ Server and have it stored in your external location at /my-rhq-server (that is, your settings.xml defines rhq.containerDir as "/my-rhq-server"). Suppose you modified a GWT page in the UI and want to see your change. You can go to the modules/enterprise/gui/coregui module and build it with the dev profile: mvn -Pdev install (to make it even faster, you can turn off the unit tests by passing in -DskipTests). The coregui module will build the war and, because the dev profile is enabled, will copy all of its build artifacts to your external location under /my-rhq-server. You do not have to rebuild the entire server; you just need to rebuild the module that changed. The dev profile will copy the changed artifacts to their appropriate locations within your external container.
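In commands, that workflow looks like this:

cd <rhq-working-copy-root>/modules/enterprise/gui/coregui
mvn -Pdev -DskipTests install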

If the RHQ Server is already running, you do not have to shut it down and restart it when changing plugin code; just rebuild the plugin using -Pdev and the plugin jar will be copied to the appropriate location in the RHQ Server. The RHQ Server will pick up the change, deploy the plugin properly, and your agents will then be free to update their plugins to pick up the new one (see the agent's "plugins update" prompt command for one way to do this).
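A plugin rebuild is just a dev-profile build of the plugin's Maven module; the module path below is illustrative:

cd <rhq-working-copy-root>/modules/plugins/<your-plugin>
mvn -Pdev -DskipTests install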

The dev Database

The dev profile uses a different database than the database used by the unit tests. This way you know stuff you're doing in your dev-container will not interfere with tests and vice versa. The dev database is defined via the rhq.dev.ds.* properties, which you would set in the dev profile section of your settings.xml, e.g.:

      <profile>
         <id>dev</id>
         <properties>
            <!-- Set the below prop to the absolute path of your RHQ source dir (e.g. /home/bob/projects/rhq).
                (${rhq.rootDir}/dev-container will be used as the dev container dir) -->
            <rhq.rootDir>/home/ips/Projects/rhq</rhq.rootDir>

            <!-- Alternatively, if you don't want to use the default location of {rhq.rootDir}/dev-container/
                 for your dev container, then set the below prop to the desired location. -->
            <!--<rhq.containerDir>C:/home/bob/rhq-dev-container</rhq.containerDir>-->

            <rhq.dev.ds.connection-url>jdbc:postgresql://127.0.0.1:5432/rhqdev</rhq.dev.ds.connection-url>
            <rhq.dev.ds.user-name>rhqadmin</rhq.dev.ds.user-name>
            <rhq.dev.ds.password>rhqadmin</rhq.dev.ds.password>
            <rhq.dev.ds.type-mapping>PostgreSQL</rhq.dev.ds.type-mapping>
            <rhq.dev.ds.driver-class>org.postgresql.Driver</rhq.dev.ds.driver-class>
            <rhq.dev.ds.xa-datasource-class>org.postgresql.xa.PGXADataSource</rhq.dev.ds.xa-datasource-class>
            <rhq.dev.ds.server-name>127.0.0.1</rhq.dev.ds.server-name>
            <rhq.dev.ds.port>5432</rhq.dev.ds.port>
            <rhq.dev.ds.db-name>rhqdev</rhq.dev.ds.db-name>
            <rhq.dev.ds.hibernate-dialect>org.hibernate.dialect.PostgreSQLDialect</rhq.dev.ds.hibernate-dialect>
            <!-- quartz properties -->
            <rhq.dev.quartz.driverDelegateClass>org.quartz.impl.jdbcjobstore.PostgreSQLDelegate</rhq.dev.quartz.driverDelegateClass>
            <rhq.dev.quartz.selectWithLockSQL>SELECT * FROM {0}LOCKS ROWLOCK WHERE LOCK_NAME = ? FOR UPDATE</rhq.dev.quartz.selectWithLockSQL>
            <rhq.dev.quartz.lockHandlerClass>org.quartz.impl.jdbcjobstore.StdRowLockSemaphore</rhq.dev.quartz.lockHandlerClass>
         </properties>
      </profile>

If the rhq.dev.ds.* properties are not specified in your settings.xml, the defaults will be used, which are the values shown above (i.e. jdbc:postgresql://127.0.0.1:5432/rhqdev).

Once you have configured your dev DB connection settings, you will need to create the DB - do this as follows:

cd modules/core/dbutils
mvn -Pdev -Ddbreset -Dmaven.test.skip=true

The -Pdev activates the dev profile, which tells the dbutils module to use the dev DB rather than the test DB (the default when the dev profile is not active). To use the test DB instead, you can either deactivate the dev profile using -P'!dev' or explicitly tell the dbutils module to use the test DB via -Ddb=test.
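For example, run from modules/core/dbutils, either of these resets the test DB instead (the two forms should be equivalent here):

mvn -P'!dev' -Ddbreset -Dmaven.test.skip=true
mvn -Pdev -Ddb=test -Ddbreset -Dmaven.test.skip=true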

And if you ever want to wipe all data from your dev DB, run the following commands (to upgrade it to the latest schema while keeping your data, use -Ddbsetup-upgrade instead of -Ddbsetup):

cd modules/core/dbutils
mvn -Pdev -Ddbsetup -Dmaven.test.skip=true

The rhq.dev.ds.* properties should not be confused with the rhq.test.ds.* properties, which define the DB that is used by the domain and server-jar unit tests.

The dev Storage Node (4.8+)

Starting with version 4.8, RHQ supplements its RDB database (typically Postgres or Oracle) with a Cassandra database for scalable metric storage and analysis. To deal with the added complexity, RHQ introduces some more comprehensive installation and control mechanisms. For developers, this has some impact on the dev-container.

There is now another RHQ component:

  • RHQ Server

  • RHQ Agent

  • RHQ Storage (New!)

The new RHQ Control Script is bin/rhqctl and now controls the three components in production and in the dev environment. See Building RHQ for more on building the dev container as well as above for applying the -Pdev profile.

Installation Defaults

Before installing the Storage node (see RHQ Control Script for more on the install command), you can change defaults. The dev container provides an initial rhq-storage.properties in the bin directory. It lowers memory consumption and also defaults your Cassandra hostname to 127.0.0.1.

The default jmx port is set to 7299.

Notes:

  • dev-container/rhq-server/rhq-storage/... is the storage node install directory

  • dev-container/rhq-agent/... is the agent install directory

  • rootDir/rhq-data/... is the root dir for cassandra data storage (rootDir is typically the parent dir for dev-container)

  • Use rhqctl stop|start|status to further manipulate the services (see the examples after this list)

    • To target a single service, use the --storage, --server, or --agent option

  • To run in console mode, stop the background service and use the --console option for the specific service
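For example (assuming the dev-container server's bin directory is your current directory):

$ rhqctl status
$ rhqctl stop --storage
$ rhqctl start --storage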

Deploying Multiple Dev Storage Nodes (4.8+)

Running multiple nodes relies in part on using localhost aliases. If you are on a Linux platform, you should not have to create the aliases. On other platforms, like Mac OS X, you will have to create the aliases, which can be done as follows:

$ sudo ifconfig lo0 alias 127.0.0.2 up
$ sudo ifconfig lo0 alias 127.0.0.3 up

On RHEL you can explicitly set those up via

$ sudo ifconfig lo add 127.0.0.2
$ sudo ifconfig lo add 127.0.0.3

There is a script named storage_setup.groovy in the appserver module that gets executed when the dev profile is active. It does not install storage nodes. It sets up the necessary directory structure so that you can easily install nodes. Let's say we are rebuilding our dev-container and want to deploy 3 storage nodes.

$ mvn clean package -Pdev -Drhq.storage.num-nodes=3

This will create dev-container/rhq-server which is the regular, full dev-container server, and it also creates dev-container/rhq-server-2 and dev-container/rhq-server-3. The rhq-server-2 and rhq-server-3 directories are minimal shells that provide the necessary pieces for running storage nodes. Each of these additional server directories contains the necessary scripts along with a configured rhq-storage.properties and a series of symlinks to provide a fast, lightweight solution for deploying multiple storage nodes with the dev-container.

Now let's suppose we already have our dev-container built and want to deploy an additional storage node. From the appserver module, run:

$ mvn -o groovy:execute -Pdev -Dsource=src/main/script/storage_setup.groovy -Drhq.storage.num-nodes=2

This will generate dev-container/rhq-server-2. If we later decide that we want two more nodes, run:

$ mvn -o groovy:execute -Pdev -Dsource=src/main/script/storage_setup.groovy -Drhq.storage.num-nodes=4

The script will detect that you already have rhq-server and rhq-server-2 and will then only set up rhq-server-3 and rhq-server-4. Each node has to be (and is) configured with a unique JMX port. If you deploy multiple storage nodes prior to running the server installer, you will need to configure the rhq.cassandra.seeds property in rhq-server.properties by hand. So if you have two nodes, e.g. dev-container/rhq-server/rhq-storage and dev-container/rhq-server-2/rhq-storage, the property should look like:

rhq.cassandra.seeds=127.0.0.1|7299|9142,127.0.0.2|7300|9142

Given that the implementation relies on symlinks, I do not expect that this will work on Windows unless you are running something like Cygwin; however, as mentioned at the beginning, there should be no changes whatsoever to dev-container/rhq-server regardless of whether you run one or multiple nodes.

-Ddbsetup-upgrade/-Ddbreset

Remember that now there are two repositories involved, the RDB and Cassandra. Just as Postgres or Oracle must be running to perform a -Ddbsetup, so must Cassandra. The difference is that Cassandra is under the RHQ umbrella, and therefore RHQ is your control program for Cassandra.

Make sure that the RHQ Storage service is running prior to performing -Ddbsetup-upgrade or -Ddbreset. In most dev environments it's possible to just leave the RHQ Storage service running after installation, similar to your RDB.

Property Overrides

In general it shouldn't be necessary to override the defaults for your dev environment. But if necessary, there are properties that can be set for your Cassandra connection, typically in your M2 settings.xml. For example:

<rhq.dev.cassandra.username>cassandra</rhq.dev.cassandra.username>  <!-- default: cassandra -->
<rhq.dev.cassandra.password>cassandra</rhq.dev.cassandra.password>  <!-- default: cassandra -->
<rhq.dev.cassandra.seeds>HostOrIP|JmxPort|CqlPort</rhq.dev.cassandra.seeds>  <!-- default: 127.0.0.1|7299|9142 -->

Plugin-Specific Profiles

In modules/plugins/pom.xml, all but a few essential plugins (platform, rhq-agent, and jmx) are split out into separate profiles (jboss-plugins, linux-plugins, etc.), grouping similar plugins together. These profiles are effectively active by default, because they are activated whenever the java.home sysprop is set (which it always is). So if you just run 'mvn install', all plugins are built. However, using Maven's profile disablement feature (e.g. -P'!profileName'), you can tell Maven not to build some of the plugins. For example:

mvn -P'!linux-plugins' -P'!misc-plugins' -P'!validate-plugins'

tells Maven not to build the linux plugins or the miscellaneous plugins, and not to validate the plugins. All other plugins will be built. Here is a list of the plugin profiles currently configured in the plugins and ear poms:

  • jboss-plugins

  • linux-plugins

  • misc-plugins

  • validate-plugins (doesn't build any plugins, just validates them all)

GWT Compilation For Different Browsers

The RHQ user interface uses the GWT framework. The GWT code is mainly found in the coregui maven module. By default, the coregui module will be gwt-compiled for all browsers that GWT supports, and compiler optimizations are enabled. These are the settings we want for CI/QA builds and releases, but for everyday development, developers will want to only compile for the browser they're using (e.g. Firefox 3) and disable the compiler optimizations, in order to minimize the time it takes to build the coregui war. The following comments from coregui/pom.xml document the two Maven properties (gwt.userAgent and gwt.draftCompile) that can be used to override these two settings:

<properties>
   <!--
      This property is substituted, by the resource plugin during the resources phase, as the
      value of the user.agent property in RHQDomain.gwt.xml and CoreGUI.gwt.xml. The default
      value results in these GWT modules being compiled into JavaScript for all supported
      browsers. To limit compilation to your preferred browser(s) to speed up compile time,
      specify the gwt.userAgent property on the mvn command line (e.g. -Dgwt.userAgent=gecko1_8)
      or in your ~/.m2/settings.xml

      As of GWT 2.0.4, the recognized agents (defined in
      gwt-user.jar:com/google/gwt/user/UserAgent.gwt.xml) are as follows:

        ie6: IE6/IE7
        ie8: IE8
        gecko: FF2
        gecko1_8: FF3
        safari: Safari/Chrome
        opera: Opera

      Multiple agents can be specified as a comma-delimited list, as demonstrated by the
      default value below.
   -->
   <gwt.userAgent>ie6,ie8,gecko,gecko1_8,safari,opera</gwt.userAgent>

   <!-- Override this via mvn command line or your ~/.m2/settings.xml to speed up compilation. -->
   <gwt.draftCompile>false</gwt.draftCompile>
</properties>

Here is what a typical developer's ~/.m2/settings.xml could look like:

<profile>
   <id>dev</id>
   <properties>
   ...
      <!-- Only gwt-compile JavaScript for Firefox 3.x. -->
      <gwt.userAgent>gecko1_8</gwt.userAgent>
      <!-- Enable faster, but less-optimized, gwt compilations. -->
      <gwt.draftCompile>true</gwt.draftCompile>
   </properties>
</profile>

GWT Compilation Memory Requirements

The root pom.xml has some defaults that should work across all build environments. But if the GWT compiler fails due to an OutOfMemoryError, try to bump up the memory and/or adjust the worker threads used by the Maven GWT plugin through these settings in your settings.xml:

      <gwt-plugin.extraJvmArgs>-Xms512M -Xmx768M -XX:PermSize=128M -XX:MaxPermSize=256M</gwt-plugin.extraJvmArgs>
      <gwt-plugin.localWorkers>2</gwt-plugin.localWorkers>

GWT Compilation for Different Locales

You can limit the locales that the GWT build compiles for. This also helps to further reduce the memory requirements of the build. You can put the following settings in your settings.xml file:

     <gwt.locale>en,de</gwt.locale>

This will only compile RHQ with English and German locales. The value is a comma-separated list of locale names.

Tests

Skipping Tests

To skip running tests specify

-DskipTests

on the maven command line. To skip building and running tests specify

-Dmaven.test.skip

on the maven command line.

Running Specific Test Classes and Tests

To run an individual unit test class, pass the "test" system property to mvn, setting its value to the unqualified name of the test class you want to run. For example, if you want to run the tests in "org.abc.MyCustomTest", you would execute:

mvn test -Dtest=MyCustomTest

To run an individual test method, you can further narrow the selection using the # separator. For example:

mvn test -Dtest=MyCustomTest#testABC

Wildcards can also be provided, so you can even do something like run a subset of test classes. To run all "My" test classes:

mvn test -Dtest=My*Test

Please note that you must not use the -Dmaven.test.skip property when using -Dtest, otherwise the unit test will not be executed.

Unit Tests

Debugging

If you do not set -Dmaven.test.skip or -DskipTests when you run an mvn build, the unit tests will execute. If you wish to debug a unit test with a JPDA-enabled IDE, you can pass in the system property -Dtest.debug, which will launch the TestNG environment with JPDA enabled, listening on port 8797. Connect your JPDA-enabled IDE to that port and you'll be able to step through the code. Combined with the -Dtest property, the following example shows how you can JPDA-debug a specific unit test class:

mvn test -Dtest=MyCustomTest -Dtest.debug

Integration Tests

Debugging

Integration tests are run using Arquillian deployments. Basically, there are two phases. Phase 1 is the test setup and deployment of the tests; it doesn't actually run the tests, it builds the deployments that contain them. To debug the process running in phase 1, use -Dtest.debug. Phase 2 actually runs the test code. To debug the actual integration test code running inside the deployment, use -Ditest.debug (note the "i"). The JPDA port is 8798.

For example, the server/itests-2 module is the most extensive set of integration tests. Arquillian will assemble a test rhq.ear deployment, start up an EAP instance, deploy the test ear (which contains the test classes as well), and then run the tests inside the EAP container. To debug the code that creates the test ear deployment, use -Dtest.debug. To debug the test class code, the server SLSB calls, etc., use -Ditest.debug.
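For instance, to step through a single integration test class, you can combine this with the -Dtest property described earlier (the class name here is illustrative):

mvn test -Dtest=MyServerTest -Ditest.debug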

Server Integration Tests

The integration tests will fire up 2 Cassandra nodes that listen on 127.0.0.1 and 127.0.0.2. You may need to configure the 127.0.0.2 loopback alias manually; this is described above under Deploying Multiple Dev Storage Nodes.

The server/itests-2 module is the most extensive set of integration tests. They are found in modules/enterprise/server/itests-2.

It is important to realize that these tests deploy a test ear that is built from the rhq.ear artifact in your M2 repository. So, if you change code that is ultimately packaged in the ear, you will need to rebuild the ear before re-running the itests. The most common example is a change to an SLSB in server/jar. In this case, for example, you would need to rebuild server/jar, then rebuild server/ear, then run the itests. Note that changes to actual test class code do not require the ear to be rebuilt.
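A sketch of that rebuild sequence, assuming the module paths named above:

cd <rhq-working-copy-root>/modules/enterprise/server/jar
mvn -DskipTests install
cd ../ear
mvn -DskipTests install
cd ../itests-2
mvn test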

-Ditest.use-external-storage-node (Currently only in the feature/cassandra-backend branch)

With the introduction of RHQ Storage Nodes (Cassandra) the server i-tests need a test Cassandra backend as well as the test RDBMS. The itests-2 infrastructure will create, start and destroy a test storage node automatically and for most users this will be sufficient. But, if you prefer to use an existing storage node to test against you can specify:

-Ditest.use-external-storage-node

By default the external storage node seed is 127.0.0.1|7199|9042. To specify a different storage node specify:

-Drhq.cassandra.seeds="host|jmxport|cqlport"

On Windows you must use -Ditest.use-external-storage-node. This is because the nature of Windows prevents happy interaction between the spawned Arquillian, EAP and Cassandra processes.

Building With Oracle

Due to licensing restrictions, the RHQ project is not permitted to host the Oracle JDBC drivers on a public Maven repository. Because of this, the default RHQ build will not attempt to pull down the Oracle JDBC drivers. If you have access to a Maven repository that contains the Oracle JDBC drivers, and you set up Maven to access that repository (in settings.xml, for example), you can pull them down to your local repository by using the mvn command line option "-Pojdbc-driver". This enables the RHQ-defined ojdbc-driver Maven profile, which tells the build to add the Oracle JDBC driver to the set of dependencies that should be pulled down. The Oracle JDBC driver will then be added to the RHQ Server distribution and to the Oracle agent plugin (which has the Oracle JDBC driver as an explicit dependency).

The RHQ Agent's Oracle plugin module is always built as part of the default RHQ Maven build. But without the Oracle JDBC driver it will be limited to discovery only. The Oracle JDBC driver can be added manually to rhq-oracle-plugin.jar by placing it in the lib/ directory at the root of the JAR file.

If you do not have access to a Maven repository that contains the Oracle JDBC driver, you can manually create a repository locally. First, you must download the Oracle JDBC driver from somewhere (again, due to licensing restrictions, you will not find this JDBC driver anywhere on the RHQ website). Then create the local repository directory structure where you will place that Oracle JDBC driver. Determine where your local Maven repository's root directory is (it is typically $HOME/.m2/repository). Under the local repository root directory, create the directory structure "com/oracle/ojdbc6/#" where "#" is the Oracle JDBC driver's version, such as "11.1.0.7.0". In that directory, place the Oracle JDBC driver with a filename that matches the pattern "ojdbc6-#.jar", again where "#" is the JDBC driver version number. Now you should be able to build. Note this assumes you are using the ojdbc6 driver, not the older drivers.
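For example, for driver version 11.1.0.7.0 (adjust the version and the source path to match your download):

mkdir -p $HOME/.m2/repository/com/oracle/ojdbc6/11.1.0.7.0
cp /path/to/ojdbc6.jar $HOME/.m2/repository/com/oracle/ojdbc6/11.1.0.7.0/ojdbc6-11.1.0.7.0.jar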

Building the EMS Library

Several RHQ plugins use JMX to access managed resources. These plugins typically depend on, and extend, the JMX plugin. The JMX plugin, in turn, does its job with the help of the EMS library. EMS provides an API that allows you to access JMX resources without requiring your JVM to have fixed JMX vendor and version dependencies (i.e. you can use EMS to talk to different JMX MBeanServers that are implemented by different vendors and/or are different implementation versions) all within the same VM.

Sometimes, we have to fix bugs or add enhancements to EMS in order to allow plugins to provide better functionality. Below you will find instructions on how to build EMS and, when necessary, how to prepare a new version of EMS to be a dependency in the RHQ Maven build reactor.

Note that only those developers with the proper permissions can commit changes to the EMS SVN as well as the RHQ git. However, even if you do not have permissions to commit changes, you can still change this code and build/deploy it to your local machine. If you think you have changes that should be committed, post a patch to the dev mailing list or let us know at #rhq on freenode.

Compiling EMS

  1. svn co https://mc4j.svn.sourceforge.net/svnroot/mc4j/trunk/mc4j/modules/ems ems

  2. cd ems

  3. When ready for release, bump up version prop in build.xml

  4. ant clean dist

  5. The distribution binaries are located in the dist/ directory

    • org-mc4j-ems-impl-javadoc.jar

    • org-mc4j-ems-impl-sources.jar

    • org-mc4j-ems-javadoc.jar

    • org-mc4j-ems-sources.jar

    • org-mc4j-ems.jar

Publishing EMS

  1. Manually add the API, impl, javadoc and sources jars to a Maven repository

    #!/bin/sh
    _VERSION=1.2.12
    _MAVEN_REPO=/home/mazz/.m2/repository
    mvn deploy:deploy-file -Durl=file://${_MAVEN_REPO} \
                           -Dfile=org-mc4j-ems-${_VERSION}.jar \
                           -DgroupId=mc4j \
                           -DartifactId=org-mc4j-ems \
                           -Dpackaging=jar \
                           -Dversion=${_VERSION}
    mvn deploy:deploy-file -Durl=file://${_MAVEN_REPO} \
                           -Dfile=org-mc4j-ems-${_VERSION}-sources.jar \
                           -DgroupId=mc4j \
                           -DartifactId=org-mc4j-ems \
                           -Dpackaging=jar \
                           -Dversion=${_VERSION} \
                           -Dclassifier=sources
    mvn deploy:deploy-file -Durl=file://${_MAVEN_REPO} \
                           -Dfile=org-mc4j-ems-${_VERSION}-javadoc.jar \
                           -DgroupId=mc4j \
                           -DartifactId=org-mc4j-ems \
                           -Dpackaging=jar \
                           -Dversion=${_VERSION} \
                           -Dclassifier=javadoc
    mvn deploy:deploy-file -Durl=file://${_MAVEN_REPO} \
                           -Dfile=org-mc4j-ems-impl-${_VERSION}.jar \
                           -DgroupId=mc4j \
                           -DartifactId=org-mc4j-ems-impl \
                           -Dpackaging=jar \
                           -Dversion=${_VERSION}
    mvn deploy:deploy-file -Durl=file://${_MAVEN_REPO} \
                           -Dfile=org-mc4j-ems-impl-${_VERSION}-sources.jar \
                           -DgroupId=mc4j \
                           -DartifactId=org-mc4j-ems-impl \
                           -Dpackaging=jar \
                           -Dversion=${_VERSION} \
                           -Dclassifier=sources
    mvn deploy:deploy-file -Durl=file://${_MAVEN_REPO} \
                           -Dfile=org-mc4j-ems-impl-${_VERSION}-javadoc.jar \
                           -DgroupId=mc4j \
                           -DartifactId=org-mc4j-ems-impl \
                           -Dpackaging=jar \
                           -Dversion=${_VERSION} \
                           -Dclassifier=javadoc
  2. Upgrade ems.version in RHQ root pom to the new version

  3. Do a clean rebuild of the RHQ JMX plugin to ensure the new EMS jar is in the JMX plugin jar

  4. Tag the EMS SVN repository with the new version as the tag name. Historically, the tag is normally a copy of the entire MC4J trunk:

    svn copy \
    https://mc4j.svn.sourceforge.net/svnroot/mc4j/trunk/mc4j \
    https://mc4j.svn.sourceforge.net/svnroot/mc4j/tags/ems_1_2_16 \
    -m "tag for EMS 1.2.16"
  5. Publish the EMS jars to the JBoss Nexus thirdparty-uploads repository (ask someone on the RHQ team for help with this).

Downloading Source

If you want to see the source code for RHQ's third-party dependency libraries, you can ask Maven to download any and all available source jars by issuing the command mvn dependency:sources. If a public Maven repository has the sources available, they will be pulled down to your local repository.

Debugging a Running Agent

To start the agent such that you can connect a JPDA debugger (such as Eclipse) to it, simply set the RHQ_AGENT_ADDITIONAL_JAVA_OPTS environment variable so it contains your VM's appropriate JPDA settings and start the agent normally. You can then use your debugging tool to connect to your VM. Example options are:

set RHQ_AGENT_ADDITIONAL_JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,address=9797,server=y,suspend=n"

Debugging a Running Server

To start the server such that you can connect a JPDA debugger (such as Eclipse) to it, set the RHQ_SERVER_JAVA_OPTS environment variable so it contains your VM's appropriate JPDA settings and start the server normally. You can then use your debugging tool to connect to your VM. Example options are:

RHQ_SERVER_JAVA_OPTS="-Xmx256m -XX:MaxPermSize=256m \
-Djava.net.preferIPv4Stack=true \
-agentlib:jdwp=transport=dt_socket,address=8787,server=y,suspend=n \
-Djboss.platform.mbeanserver"

There is an even easier way if you are on Windows. Just build RHQ on your Windows box with the -Pdev profile. This will copy a rhq-server-wrapper.inc file to the rhq-server/bin/wrapper directory of your dev-container. The contents of this file will include JPDA settings so when the Java Service Wrapper starts your server, it does so with JPDA enabled (look at the contents of that file for the port it uses; at the time of writing, that port is 8787).
