JBoss Community Archive (Read Only)

WildFly 8

Subsystem configuration

The following chapters will focus on the high level management use cases that are available through the CLI and the web interface. For a detailed description of each subsystem configuration property, please consult the respective component reference.

Schema Location

The configuration schemas can be found in $JBOSS_HOME/docs/schema.

EE Subsystem Configuration

Overview

The EE subsystem provides common functionality in the Java EE platform, such as the EE Concurrency Utilities (JSR 236) and @Resource injection. The subsystem is also responsible for managing the lifecycle of Java EE application's deployments, that is, .ear files.

The EE subsystem configuration may be used to:

  • customise the deployment of Java EE applications

  • create EE Concurrency Utilities instances

  • define the default bindings

The subsystem name is ee and this document covers EE subsystem version 2.0, whose XML namespace within WildFly XML configurations is urn:jboss:domain:ee:2.0. The path for the subsystem's XML schema, within WildFly's distribution, is docs/schema/jboss-as-ee_2_0.xsd.

Subsystem XML configuration example with all elements and attributes specified:

<subsystem xmlns="urn:jboss:domain:ee:2.0" >
    <global-modules>
        <module name="org.jboss.logging"
                slot="main"/>
        <module name="org.apache.log4j"
                annotations="true"
                meta-inf="true"
                services="false" />
    </global-modules>
    <ear-subdeployments-isolated>true</ear-subdeployments-isolated>
    <spec-descriptor-property-replacement>false</spec-descriptor-property-replacement>
    <jboss-descriptor-property-replacement>false</jboss-descriptor-property-replacement>
    <annotation-property-replacement>false</annotation-property-replacement>
    <concurrent>
        <context-services>
            <context-service
                    name="default"
                    jndi-name="java:jboss/ee/concurrency/context/default"
                    use-transaction-setup-provider="true" />
        </context-services>
        <managed-thread-factories>
            <managed-thread-factory
                    name="default"
                    jndi-name="java:jboss/ee/concurrency/factory/default"
                    context-service="default"
                    priority="1" />
        </managed-thread-factories>
        <managed-executor-services>
            <managed-executor-service
                    name="default"
                    jndi-name="java:jboss/ee/concurrency/executor/default"
                    context-service="default"
                    thread-factory="default"
                    hung-task-threshold="60000"
                    core-threads="5"
                    max-threads="25"
                    keepalive-time="5000"
                    queue-length="1000000"
                    reject-policy="RETRY_ABORT" />
        </managed-executor-services>
        <managed-scheduled-executor-services>
            <managed-scheduled-executor-service
                    name="default"
                    jndi-name="java:jboss/ee/concurrency/scheduler/default"
                    context-service="default"
                    thread-factory="default"
                    hung-task-threshold="60000"
                    core-threads="5"
                    keepalive-time="5000"
                    reject-policy="RETRY_ABORT" />
        </managed-scheduled-executor-services>
    </concurrent>
    <default-bindings
            context-service="java:jboss/ee/concurrency/context/default"
            datasource="java:jboss/datasources/ExampleDS"
            jms-connection-factory="java:jboss/DefaultJMSConnectionFactory"
            managed-executor-service="java:jboss/ee/concurrency/executor/default"
            managed-scheduled-executor-service="java:jboss/ee/concurrency/scheduler/default"
            managed-thread-factory="java:jboss/ee/concurrency/factory/default" />
</subsystem>

Java EE Application Deployment

The EE subsystem configuration allows the customisation of the deployment behaviour for Java EE Applications.

Global Modules

Global modules is a set of JBoss Modules that will be added as dependencies to the JBoss Module of every Java EE deployment. Such dependencies allow Java EE deployments to see the classes exported by the global modules.

Each global module is defined through the module resource. An example of its XML configuration:

  <global-modules>
    <module name="org.jboss.logging" slot="main"/>
    <module name="org.apache.log4j" annotations="true" meta-inf="true" services="false" />
  </global-modules>

The only mandatory attribute is the JBoss Module name; the slot attribute defaults to main, and both define the JBoss Module ID to reference.

The optional annotations attribute, which defaults to false, indicates if a pre-computed annotation index should be imported from META-INF/jandex.idx.

The optional services attribute indicates if any services exposed in META-INF/services should be made available to the deployment's class loader, and defaults to false.

The optional meta-inf attribute, which defaults to true, indicates if the Module's META-INF path should be available to the deployment's class loader.

EAR Subdeployments Isolation

A flag indicating whether each of the subdeployments within a .ear can access classes belonging to another subdeployment within the same .ear. The default value is false, which allows the subdeployments to see classes belonging to other subdeployments within the .ear.

  <ear-subdeployments-isolated>true</ear-subdeployments-isolated>

For example:

myapp.ear
|
|--- web.war
|
|--- ejb1.jar
|
|--- ejb2.jar

If ear-subdeployments-isolated is set to false, then the classes in web.war can access classes belonging to ejb1.jar and ejb2.jar. Similarly, classes from ejb1.jar can access classes from ejb2.jar (and vice versa).

This flag has no effect on the isolated classloader of the .war file(s); i.e. irrespective of whether this flag is set to true or false, a .war within a .ear will have an isolated classloader, and other subdeployments within that .ear will not be able to access classes from that .war. This is as per spec.

Property Replacement

The EE subsystem configuration includes flags to configure whether system property replacement will be done on XML descriptors and Java annotations included in Java EE deployments.

System properties etc are resolved in the security context of the application server itself, not the deployment that contains the file. This means that if you are running with a security manager and enable this property, a deployment can potentially access system properties or environment entries that the security manager would have otherwise prevented.

Spec Descriptor Property Replacement

Flag indicating whether system property replacement will be performed on standard Java EE XML descriptors. This defaults to true; however, it is disabled in the default configurations.

  <spec-descriptor-property-replacement>false</spec-descriptor-property-replacement>

JBoss Descriptor Property Replacement

Flag indicating whether system property replacement will be performed on WildFly proprietary XML descriptors, such as jboss-app.xml. This defaults to true.

  <jboss-descriptor-property-replacement>false</jboss-descriptor-property-replacement>

Annotation Property Replacement

Flag indicating whether system property replacement will be performed on Java annotations. The default value is false.

  <annotation-property-replacement>false</annotation-property-replacement>

EE Concurrency Utilities

EE Concurrency Utilities (JSR 236) were introduced with Java EE 7 to ease the task of writing multithreaded Java EE applications. Instances of these utilities are managed by WildFly, and the related configuration is provided by the EE subsystem.

Context Services

The Context Service is a concurrency utility which creates contextual proxies from existing objects. WildFly Context Services are also used to propagate the context from a Java EE application invocation thread to the threads internally used by the other EE Concurrency Utilities. Context Service instances may be created using the subsystem XML configuration:

  <context-services>
    <context-service
	name="default"
	jndi-name="java:jboss/ee/concurrency/context/default"
	use-transaction-setup-provider="true" />
  </context-services>

The name attribute is mandatory, and its value must be unique among all Context Services.

The jndi-name attribute is also mandatory, and defines where in JNDI the Context Service should be placed.

The optional use-transaction-setup-provider attribute indicates whether the contextual proxies built by the Context Service should suspend transactions in context when invoking the proxy objects; its value defaults to true.

Management clients, such as the WildFly CLI, may also be used to configure Context Service instances. An example of adding and removing one named other:

/subsystem=ee/context-service=other:add(jndi-name=java\:jboss\/ee\/concurrency\/other)
/subsystem=ee/context-service=other:remove
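
For illustration, here is a minimal sketch (the class name is hypothetical) of how an application might use the default Context Service instance to build a contextual proxy around a task:

import javax.annotation.Resource;
import javax.enterprise.concurrent.ContextService;

public class ContextPropagationExample {

    // injects the instance bound at the jndi-name configured above
    @Resource(lookup = "java:jboss/ee/concurrency/context/default")
    private ContextService contextService;

    public Runnable wrap(Runnable task) {
        // the proxy runs the task with the invocation context (class loader,
        // JNDI, security) that was present when the proxy was created
        return contextService.createContextualProxy(task, Runnable.class);
    }
}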

Managed Thread Factories

The Managed Thread Factory allows Java EE applications to create new threads. WildFly Managed Thread Factory instances may also, optionally, use a Context Service instance to propagate the Java EE application thread’s context to the new threads. Instance creation is done through the EE subsystem, by editing the subsystem XML configuration:

  <managed-thread-factories>
    <managed-thread-factory
	name="default"
	jndi-name="java:jboss/ee/concurrency/factory/default"
	context-service="default"
	priority="1" />
  </managed-thread-factories>

The name attribute is mandatory, and its value must be unique among all Managed Thread Factories.

The jndi-name attribute is also mandatory, and defines where in JNDI the Managed Thread Factory should be placed.

The optional context-service references an existing Context Service by its name. If specified, threads created by the factory will propagate the invocation context present when the thread is created.

The optional priority indicates the priority for new threads created by the factory, and defaults to 5.

Management clients, such as the WildFly CLI, may also be used to configure Managed Thread Factory instances. An example of adding and removing one named other:

/subsystem=ee/managed-thread-factory=other:add(jndi-name=java\:jboss\/ee\/factory\/other)
/subsystem=ee/managed-thread-factory=other:remove
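
As a minimal sketch (class name hypothetical), an application could obtain the default Managed Thread Factory through @Resource injection and use it to start a managed thread:

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedThreadFactory;

public class ThreadFactoryExample {

    @Resource(lookup = "java:jboss/ee/concurrency/factory/default")
    private ManagedThreadFactory threadFactory;

    public void startWorker(Runnable work) {
        // threads created this way are managed by the server; if the factory
        // references a context-service, they also inherit the invocation context
        Thread worker = threadFactory.newThread(work);
        worker.start();
    }
}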

Managed Executor Services

The Managed Executor Service is the Java EE adaptation of the Java SE Executor Service, providing Java EE applications with the functionality of asynchronous task execution. WildFly is responsible for managing the lifecycle of Managed Executor Service instances, which are specified through the EE subsystem XML configuration:

  <managed-executor-services>
    <managed-executor-service
	name="default"
	jndi-name="java:jboss/ee/concurrency/executor/default"
	context-service="default"
	thread-factory="default"
	hung-task-threshold="60000"
	core-threads="5"
	max-threads="25"
	keepalive-time="5000"
	queue-length="1000000"
	reject-policy="RETRY_ABORT" />
  </managed-executor-services>

The name attribute is mandatory, and its value must be unique among all Managed Executor Services.

The jndi-name attribute is also mandatory, and defines where in JNDI the Managed Executor Service should be placed.

The optional context-service references an existing Context Service by its name. If specified, the referenced Context Service will capture the invocation context present when a task is submitted to the executor, and that context will then be used when executing the task.

The optional thread-factory references an existing Managed Thread Factory by its name, to handle the creation of internal threads. If not specified, a Managed Thread Factory with a default configuration will be created and used internally.

The mandatory core-threads provides the number of threads to keep in the executor's pool, even if they are idle. A value of 0 means there is no limit.

The optional queue-length indicates the number of tasks that can be stored in the input queue. The default value is 0, which means the queue capacity is unlimited.

The executor’s task queue is based on the values of the attributes core-threads and queue-length:

  • If queue-length is 0, or queue-length is Integer.MAX_VALUE (2147483647) and core-threads is 0, a direct handoff queuing strategy will be used and a synchronous queue will be created.

  • If queue-length is Integer.MAX_VALUE but core-threads is not 0, an unbounded queue will be used.

  • For any other valid value of queue-length, a bounded queue will be created.

The optional hung-task-threshold defines a threshold value, in milliseconds, after which a possibly blocked task is considered hung. A value of 0, the default, means tasks are never considered hung.

The optional long-running-tasks is a hint to optimize the execution of long-running tasks, and defaults to false.

The optional max-threads defines the maximum number of threads used by the executor, which defaults to Integer.MAX_VALUE (2147483647).

The optional keepalive-time defines the time, in milliseconds, that an internal thread may be idle. The attribute default value is 60000.

The optional reject-policy defines the policy to use when a task is rejected by the executor. The attribute value may be the default ABORT, which means an exception should be thrown, or RETRY_ABORT, which means the executor will try to submit it once more before throwing an exception.

Management clients, such as the WildFly CLI, may also be used to configure Managed Executor Service instances. An example of adding and removing one named other:

/subsystem=ee/managed-executor-service=other:add(jndi-name=java\:jboss\/ee\/executor\/other, core-threads=2)
/subsystem=ee/managed-executor-service=other:remove
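
To illustrate the client side, here is a minimal sketch (class name hypothetical) that injects an executor like the one added by the CLI command above and submits an asynchronous task to it:

import java.util.concurrent.Future;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;

public class ExecutorExample {

    // the jndi-name used when adding the "other" executor above
    @Resource(lookup = "java:jboss/ee/executor/other")
    private ManagedExecutorService executor;

    public Future<String> runAsync() {
        // the task runs on a server-managed thread; the invocation context
        // is propagated if the executor references a context-service
        return executor.submit(() -> "done");
    }
}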

Managed Scheduled Executor Services

The Managed Scheduled Executor Service is the Java EE adaptation of the Java SE Scheduled Executor Service, providing Java EE applications with the functionality of scheduling task execution. WildFly is responsible for managing the lifecycle of Managed Scheduled Executor Service instances, which are specified through the EE subsystem XML configuration:

  <managed-scheduled-executor-services>
    <managed-scheduled-executor-service
	name="default"
	jndi-name="java:jboss/ee/concurrency/scheduler/default"
	context-service="default"
	thread-factory="default"
	hung-task-threshold="60000"
	core-threads="5"
	keepalive-time="5000"
	reject-policy="RETRY_ABORT" />
  </managed-scheduled-executor-services>

The name attribute is mandatory, and its value must be unique among all Managed Scheduled Executor Services.

The jndi-name attribute is also mandatory, and defines where in JNDI the Managed Scheduled Executor Service should be placed.

The optional context-service references an existing Context Service by its name. If specified, the referenced Context Service will capture the invocation context present when a task is submitted to the executor, and that context will then be used when executing the task.

The optional thread-factory references an existing Managed Thread Factory by its name, to handle the creation of internal threads. If not specified, a Managed Thread Factory with a default configuration will be created and used internally.

The mandatory core-threads provides the number of threads to keep in the executor's pool, even if they are idle. A value of 0 means there is no limit.

The optional hung-task-threshold defines a threshold value, in milliseconds, after which a possibly blocked task is considered hung. A value of 0, the default, means tasks are never considered hung.

The optional long-running-tasks is a hint to optimize the execution of long-running tasks, and defaults to false.

The optional keepalive-time defines the time, in milliseconds, that an internal thread may be idle. The attribute default value is 60000.

The optional reject-policy defines the policy to use when a task is rejected by the executor. The attribute value may be the default ABORT, which means an exception should be thrown, or RETRY_ABORT, which means the executor will try to submit it once more before throwing an exception.

Management clients, such as the WildFly CLI, may also be used to configure Managed Scheduled Executor Service instances. An example of adding and removing one named other:

/subsystem=ee/managed-scheduled-executor-service=other:add(jndi-name=java\:jboss\/ee\/scheduler\/other, core-threads=2)
/subsystem=ee/managed-scheduled-executor-service=other:remove
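
A minimal usage sketch (class name hypothetical), injecting the default instance and scheduling a repeating task:

import java.util.concurrent.TimeUnit;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedScheduledExecutorService;

public class SchedulerExample {

    @Resource(lookup = "java:jboss/ee/concurrency/scheduler/default")
    private ManagedScheduledExecutorService scheduler;

    public void schedule(Runnable task) {
        // runs the task every 10 seconds on server-managed threads
        scheduler.scheduleAtFixedRate(task, 0, 10, TimeUnit.SECONDS);
    }
}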

Default EE Bindings

The Java EE Specification mandates the existence of a default instance for each of the following resources:

  • Context Service

  • Datasource

  • JMS Connection Factory

  • Managed Executor Service

  • Managed Scheduled Executor Service

  • Managed Thread Factory

The EE subsystem looks up the default instances from JNDI, using the names in the default bindings configuration, before binding them to the standard JNDI names, such as java:comp/DefaultManagedExecutorService:

  <default-bindings
	context-service="java:jboss/ee/concurrency/context/default"
	datasource="java:jboss/datasources/ExampleDS"
	jms-connection-factory="java:jboss/DefaultJMSConnectionFactory"
	managed-executor-service="java:jboss/ee/concurrency/executor/default"
	managed-scheduled-executor-service="java:jboss/ee/concurrency/scheduler/default"
	managed-thread-factory="java:jboss/ee/concurrency/factory/default" />

The default bindings are optional; if the JNDI name for a default binding is not configured, the related resource will not be available to Java EE applications.
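
Assuming the default bindings above are configured, a component can rely on the standard Java EE names with plain @Resource injection, without naming any JNDI entry explicitly; a minimal sketch (class name hypothetical):

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.sql.DataSource;

public class DefaultBindingsExample {

    // resolved through java:comp/DefaultManagedExecutorService, which the EE
    // subsystem wires to the managed-executor-service binding configured above
    @Resource
    private ManagedExecutorService executor;

    // resolved through java:comp/DefaultDataSource, backed by the datasource binding
    @Resource
    private DataSource dataSource;
}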

Data sources

Datasources are configured through the datasources subsystem. Declaring a new datasource consists of two separate steps: providing a JDBC driver and defining a datasource that references the installed driver.

JDBC Driver Installation

The recommended way to install a JDBC driver into WildFly 8 is to deploy it as a regular JAR deployment.  The reason for this is that when you run WildFly 8 in domain mode, deployments are automatically propagated to all servers to which the deployment applies; thus distribution of the driver JAR is one less thing for you to worry about!

Any JDBC 4-compliant driver will automatically be recognized and installed into the system by name and version. A JDBC JAR is identified using the Java service provider mechanism. Such JARs contain a text file named META-INF/services/java.sql.Driver, which lists the name(s) of the Driver class(es) in that JAR. If your JDBC driver JAR is not JDBC 4-compliant, it can be made deployable in one of a few ways.

Modify the JAR

The most straightforward solution is to simply modify the JAR and add the missing file. You can do this from your command shell by:

  1. Change to, or create, an empty temporary directory.

  2. Create a META-INF subdirectory.

  3. Create a META-INF/services subdirectory.

  4. Create a META-INF/services/java.sql.Driver file which contains one line - the fully-qualified class name of the JDBC driver.

  5. Use the jar command-line tool to update the JAR like this:

jar -uf jdbc-driver.jar META-INF/services/java.sql.Driver

For a detailed explanation of how to deploy a JDBC 4-compliant driver JAR, please refer to the chapter "Application Deployment".

Datasource Definitions

The datasource itself is defined within the datasources subsystem:

<subsystem xmlns="urn:jboss:domain:datasources:1.0">
    <datasources>
        <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS">
            <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url>
            <driver>h2</driver>
            <pool>
                <min-pool-size>10</min-pool-size>
                <max-pool-size>20</max-pool-size>
                <prefill>true</prefill>
            </pool>
            <security>
                <user-name>sa</user-name>
                <password>sa</password>
            </security>
        </datasource>
        <xa-datasource jndi-name="java:jboss/datasources/ExampleXADS" pool-name="ExampleXADS">
           <driver>h2</driver>
           <xa-datasource-property name="URL">jdbc:h2:mem:test</xa-datasource-property>
           <xa-pool>
                <min-pool-size>10</min-pool-size>
                <max-pool-size>20</max-pool-size>
                <prefill>true</prefill>
           </xa-pool>
           <security>
                <user-name>sa</user-name>
                <password>sa</password>
           </security>
        </xa-datasource>
        <drivers>
            <driver name="h2" module="com.h2database.h2">
                <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
            </driver>
        </drivers>
    </datasources>

</subsystem>

(See standalone/configuration/standalone.xml)

As you can see, the datasource references a driver by its logical name.
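
For completeness, here is a minimal sketch (class and query are hypothetical) of how an application would use this datasource through @Resource injection:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.annotation.Resource;
import javax.sql.DataSource;

public class DatasourceExample {

    @Resource(lookup = "java:jboss/datasources/ExampleDS")
    private DataSource dataSource;

    public void query() throws Exception {
        // connections are borrowed from, and returned to, the managed pool
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}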

You can easily query the same information through the CLI:

[standalone@localhost:9999 /] /subsystem=datasources:read-resource(recursive=true)
{
    "outcome" => "success",
    "result" => {
        "data-source" => {"java:/H2DS" => {
            "connection-url" => "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1",
            "jndi-name" => "java:/H2DS",
            "driver-name" => "h2",
            "pool-name" => "H2DS",
            "use-java-context" => true,
            "enabled" => true,
            "jta" => true,
            "pool-prefill" => true,
            "pool-use-strict-min" => false,
            "user-name" => "sa",
            "password" => "sa",
            "flush-strategy" => "FailingConnectionOnly",
            "background-validation" => false,
            "use-fast-fail" => false,
            "validate-on-match" => false,
            "use-ccm" => true
        }},
        "xa-data-source" => undefined,
        "jdbc-driver" => {"h2" => {
            "driver-name" => "h2",
            "driver-module-name" => "com.h2database.h2",
            "driver-xa-datasource-class-name" => "org.h2.jdbcx.JdbcDataSource"
        }}
    }
}


[standalone@localhost:9999 /] /subsystem=datasources:installed-drivers-list
{
    "outcome" => "success",
    "result" => [{
        "driver-name" => "h2",
        "deployment-name" => undefined,
        "driver-module-name" => "com.h2database.h2",
        "module-slot" => "main",
        "driver-xa-datasource-class-name" => "org.h2.jdbcx.JdbcDataSource",
        "driver-class-name" => "org.h2.Driver",
        "driver-major-version" => 1,
        "driver-minor-version" => 2,
        "jdbc-compliant" => true
    }]
}

Using the web console or the CLI greatly simplifies the deployment of JDBC drivers and the creation of datasources.

The CLI offers a set of commands to create and modify datasources:

[standalone@localhost:9999 /] help
Supported commands:

 [...]

 data-source             - allows to add new, modify and remove existing data sources
 xa-data-source          - allows add new, modify and remove existing XA data sources

For a more detailed description of a specific command,
execute the command with '--help' as the argument.

Using security domains

Information can be found at https://community.jboss.org/wiki/JBossAS7SecurityDomainModel

Deployment of -ds.xml files

Starting with WildFly 8, you have the ability to deploy a -ds.xml file following the schema:

http://www.ironjacamar.org/doc/schema/datasources_1_1.xsd

It is mandatory to use a reference to an already deployed / defined <driver> entry.

This feature is primarily intended for development, and thus has a few limitations to be aware of. It cannot be altered in any of the management interfaces (console, CLI, etc.). Only limited runtime information is available. Also, password vaults and security domains are not deployable, so these cannot be bundled with a datasource deployment.

Component Reference

The datasource subsystem is provided by the IronJacamar project. For a detailed description of the available configuration properties, please consult the project documentation.

Messaging

The JMS server configuration is done through the messaging subsystem. In this chapter we are going to outline the frequently used configuration options. For a more detailed explanation please consult the HornetQ user guide (see "Component Reference").

Connectors

There are three kinds of connectors that can be used to connect to the WildFly JMS server:

  • in-vm-connector can be used by a local client (i.e. one running in the same JVM as the server)

  • netty-connector can be used by a remote client (and uses Netty over TCP for the communication)

  • http-connector can be used by a remote client (and uses the Undertow web server to upgrade from an HTTP connection)

JMS Connection Factories

There are three kinds of basic JMS connection-factory, depending on the type of connector that is used.

There is also a pooled-connection-factory which is special in that it is essentially a configuration facade for both the inbound and outbound connectors of the HornetQ JCA Resource Adapter.  An MDB can be configured to use a pooled-connection-factory (e.g. using @ResourceAdapter).  In this context, the MDB leverages the inbound connector of the HornetQ JCA RA.  Other kinds of clients can look up the pooled-connection-factory in JNDI (or inject it) and use it to send messages.  In this context, such a client would leverage the outbound connector of the HornetQ JCA RA.  A pooled-connection-factory is also special because:

  • It is only available to local clients, although it can be configured to point to a remote server.

  • As the name suggests, it is pooled and therefore provides superior performance to the clients which are able to use it.  The pool size can be configured via the max-pool-size and min-pool-size attributes.

  • It should only be used to send (i.e. produce) messages when looked up in JNDI or injected.

  • It can be configured to use specific security credentials via the user and password attributes.  This is useful if the remote server to which it is pointing is secured.

  • Resources acquired from it will be automatically enlisted in any ongoing JTA transaction.  If you want to send a message from an EJB using CMT then this is likely the connection factory you want to use so the send operation will be atomically committed along with the rest of the EJB's transaction operations.

To be clear, the inbound connector of the HornetQ JCA RA (which is for consuming messages) is only used by MDBs and other JCA-based components.  It is not available to traditional clients.
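The outbound case can be sketched as follows: a CMT session bean sends a message through the pooled connection factory, so the send is committed atomically with the rest of the transaction. The class and queue names are hypothetical; java:/JmsXA is the entry used in the default configuration shown below.

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class AuditSender {

    // the pooled-connection-factory, looked up via its java:/JmsXA entry
    @Resource(lookup = "java:/JmsXA")
    private ConnectionFactory connectionFactory;

    // hypothetical queue used for this example
    @Resource(lookup = "java:/jms/queue/audit")
    private Queue auditQueue;

    public void audit(String message) {
        // the send is enlisted in the bean's ongoing JTA transaction
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer().send(auditQueue, message);
        }
    }
}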

Both a connection-factory and a pooled-connection-factory reference a connector declaration.  

A netty-connector is associated with a socket-binding which tells the client using the connection-factory where to connect.

  • A connection-factory referencing a netty-connector is suitable to be used by a remote client to send messages to or receive messages from the server (assuming the connection-factory has an appropriately exported entry).

  • A pooled-connection-factory looked up in JNDI or injected which is referencing a netty-connector is suitable to be used by a local client to send messages to a remote server, provided the socket-binding references an outbound-socket-binding pointing to the remote server in question.

  • A pooled-connection-factory used by an MDB which is referencing a netty-connector is suitable to consume messages from a remote server, provided the socket-binding references an outbound-socket-binding pointing to the remote server in question.

An in-vm-connector is associated with a server-id which tells the client using the connection-factory where to connect (since multiple HornetQ servers can run in a single JVM).

  • A connection-factory referencing an in-vm-connector is suitable to be used by a local client to either send messages to or receive messages from a local server.

  • A pooled-connection-factory looked up in JNDI or injected which is referencing an in-vm-connector is suitable to be used by a local client only to send messages to a local server.

  • A pooled-connection-factory used by an MDB which is referencing an in-vm-connector is suitable only to consume messages from a local server.

An http-connector is associated with the socket-binding that represents the HTTP socket (by default, named http).

  • A connection-factory referencing an http-connector is suitable to be used by a remote client to send messages to or receive messages from the server by connecting to its HTTP port before upgrading to the messaging protocol.

  • A pooled-connection-factory referencing an http-connector is suitable to be used by a local client to send messages to a remote server, provided the socket-binding references an outbound-socket-binding pointing to the remote server in question.

  • A pooled-connection-factory used by an MDB which is referencing an http-connector is suitable only to consume messages from a remote server, provided the socket-binding references an outbound-socket-binding pointing to the remote server in question.

The entry declaration of a connection-factory or a pooled-connection-factory specifies the JNDI name under which the factory will be exposed.  Only JNDI names bound in the "java:jboss/exported" namespace are available to remote clients.  If a connection-factory has an entry bound in the "java:jboss/exported" namespace, a remote client would look up the connection-factory using the text after "java:jboss/exported".  For example, the "RemoteConnectionFactory" is bound by default to "java:jboss/exported/jms/RemoteConnectionFactory", which means a remote client would look up this connection-factory using "jms/RemoteConnectionFactory".  A pooled-connection-factory should not have any entry bound in the "java:jboss/exported" namespace because a pooled-connection-factory is not suitable for remote clients.

Since JMS 2.0, a default JMS connection factory is accessible to EE applications under the JNDI name java:comp/DefaultJMSConnectionFactory. The WildFly messaging subsystem defines a pooled-connection-factory that is used to provide this default connection factory. Any parameter change on this pooled-connection-factory will be taken into account by any EE application looking up the default JMS provider under the JNDI name java:comp/DefaultJMSConnectionFactory.

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
   <hornetq-server>
      [...]
      <connectors>
         <http-connector name="http-connector" socket-binding="http">
            <param key="http-upgrade-endpoint" value="http-acceptor"/>
         </http-connector>
         <http-connector name="http-connector-throughput" socket-binding="http">
            <param key="http-upgrade-endpoint" value="http-acceptor-throughput"/>
            <param key="batch-delay" value="50"/>
         </http-connector>
         <in-vm-connector name="in-vm" server-id="0"/>
      </connectors>
      [...]
      <jms-connection-factories>
         <connection-factory name="InVmConnectionFactory">
            <connectors>
               <connector-ref connector-name="in-vm"/>
            </connectors>
            <entries>
               <entry name="java:/ConnectionFactory"/> 
            </entries>
         </connection-factory> 
         <connection-factory name="RemoteConnectionFactory">
            <connectors>
               <connector-ref connector-name="http-connector"/>
            </connectors>
            <entries>
               <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
            </entries>
         </connection-factory>
         <pooled-connection-factory name="hornetq-ra">
            <transaction mode="xa"/>
            <connectors>
               <connector-ref connector-name="in-vm"/>
            </connectors>
            <entries>
               <entry name="java:/JmsXA"/>
               <!-- Global JNDI entry used to provide a default JMS Connection factory to EE application -->
               <entry name="java:jboss/DefaultJMSConnectionFactory"/>
            </entries>
         </pooled-connection-factory>
      </jms-connection-factories>
      [...]
   </hornetq-server>
</subsystem>

(See standalone/configuration/standalone-full.xml)
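
A local client can take advantage of this default connection factory without knowing its concrete entry; a minimal sketch (class and destination are hypothetical) using CDI injection of a JMS 2.0 JMSContext:

import javax.annotation.Resource;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class DefaultFactoryExample {

    // the injected JMSContext is backed by the default JMS connection
    // factory, i.e. the java:jboss/DefaultJMSConnectionFactory entry above
    @Inject
    private JMSContext context;

    // hypothetical destination for the example
    @Resource(lookup = "java:/jms/queue/example")
    private Queue queue;

    public void send(String body) {
        context.createProducer().send(queue, body);
    }
}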

JMS Queues and Topics

JMS queues and topics are sub resources of the messaging subsystem.  They are defined in the jms-destinations section.  One can define either a jms-queue or jms-topic.  Each destination must be given a name and contain at least one entry element.

Each entry refers to a JNDI name of the queue or topic.  Keep in mind that any jms-queue or jms-topic which needs to be accessed by a remote client needs to have an entry in the "java:jboss/exported" namespace.  As with connection factories, if a jms-queue or jms-topic has an entry bound in the "java:jboss/exported" namespace, a remote client would look it up using the text after "java:jboss/exported".  For example, the following jms-queue "testQueue" is bound to "java:jboss/exported/jms/queue/test", which means a remote client would look up this jms-queue using "jms/queue/test".  A local client could look it up using "java:jboss/exported/jms/queue/test", "java:jms/queue/test", or more simply "jms/queue/test":

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
   <hornetq-server>
      [...]
      <jms-destinations>
         <jms-queue name="testQueue">
             <entry name="jms/queue/test"/>
             <entry name="java:jboss/exported/jms/queue/test"/>
         </jms-queue>
         <jms-topic name="testTopic">
            <entry name="jms/topic/test"/>
            <entry name="java:jboss/exported/jms/topic/test"/>
         </jms-topic>
      </jms-destinations>
   </hornetq-server>
</subsystem>

(See standalone/configuration/standalone-full.xml)
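
To show how the exported entries are resolved, here is a sketch of a standalone remote client looking up the factory and the queue above with the jboss remote-naming client libraries; the host, port and credentials are assumptions for the example:

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteClientExample {

    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "http-remoting://localhost:8080");
        Context ctx = new InitialContext(env);

        // names are relative to java:jboss/exported, as explained above
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/queue/test");

        // hypothetical application-user credentials
        try (Connection connection = cf.createConnection("guest", "guest")) {
            connection.start();
            // ... create a session, then send or receive messages
        }
        ctx.close();
    }
}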

JMS endpoints can easily be created through the CLI:

[standalone@localhost:9990 /] jms-queue add --queue-address=myQueue --entries=queues/myQueue
[standalone@localhost:9990 /] /subsystem=messaging/hornetq-server=default/jms-queue=myQueue:read-resource
{
    "outcome" => "success",
    "result" => {
        "durable" => true,
        "entries" => ["queues/myQueue"],
        "selector" => undefined
    }
}

A number of additional commands to maintain the JMS subsystem are available as well:

[standalone@localhost:9990 /] jms-queue --help --commands
add
...
remove
To read the description of a specific command execute 'jms-queue command_name --help'.

Dead Letter & Redelivery

Some of the settings are applied against an address wildcard instead of a specific messaging destination. The dead letter queue and redelivery settings belong to this group:

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
   <hornetq-server>
      [...]
      <address-settings>
         <address-setting match="#">
            <dead-letter-address>jms.queue.DLQ</dead-letter-address>
            <expiry-address>jms.queue.ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            [...]
         </address-setting>
      </address-settings>
      [...]
   </hornetq-server>
</subsystem>

(See standalone/configuration/standalone-full.xml)

Security Settings for HornetQ addresses and JMS destinations

Security constraints are matched against an address wildcard, similar to the DLQ and redelivery settings.  Note: Multiple roles are separated by spaces.

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
   <hornetq-server>
      [...]
      <security-settings>
         <security-setting match="#">
            <permission type="send" roles="guest"/>
            <permission type="consume" roles="guest"/>
            <permission type="createNonDurableQueue" roles="role1"/>
            <permission type="deleteNonDurableQueue" roles="role1 role2"/>
         </security-setting>
      </security-settings>
      [...]
   </hornetq-server>
</subsystem>

(See standalone/configuration/standalone-full.xml)

Security Domain for Users

By default, HornetQ will use the "other" JAAS security domain.  This domain is used to authenticate users making connections to HornetQ; they are then authorized to perform specific functions based on their role(s) and the security-settings described above.  This domain can be changed by using security-domain, e.g.:

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
   <hornetq-server>
      [...]
      <security-domain>mySecurityDomain</security-domain>
      [...]
   </hornetq-server>
</subsystem>

Cluster Authentication

If the HornetQ server is configured to be clustered ( <clustered>true</clustered> ), it will use the <cluster-user> and <cluster-password> attributes to connect to other HornetQ nodes in the cluster.

If you do not change the default value of <cluster-password>, HornetQ will fail to authenticate with the error:

HQ224018: Failed to create session: HornetQException[errorType=CLUSTER_SECURITY_EXCEPTION message=HQ119099: Unable to authenticate cluster user: HORNETQ.CLUSTER.ADMIN.USER]

To prevent this error, you must specify a value for <cluster-password>. It is possible to encrypt this value by following this guide.

Alternatively, you can use the system property jboss.messaging.cluster.password to specify the cluster password from the command line.

Deployment of -jms.xml files

Starting with WildFly 8, you have the ability to deploy a -jms.xml file defining JMS destinations, e.g.:

<?xml version="1.0" encoding="UTF-8"?>
<messaging-deployment xmlns="urn:jboss:messaging-deployment:1.0">
   <hornetq-server>
      <jms-destinations>
         <jms-queue name="sample">
            <entry name="jms/queue/sample"/>
            <entry name="java:jboss/exported/jms/queue/sample"/>
         </jms-queue>
      </jms-destinations>
   </hornetq-server>
</messaging-deployment>

This feature is primarily intended for development, as destinations deployed this way cannot be managed with any of the provided management tools (e.g. console, CLI, etc.).

JMS Bridge

The function of a JMS bridge is to consume messages from a source JMS destination, and send them to a target JMS destination. Typically either the source or the target destination is on a different server.
The bridge can also be used to bridge messages from other non-HornetQ JMS servers, as long as they are JMS 1.1 compliant.

The JMS Bridge is provided by the HornetQ project. For a detailed description of the available configuration properties, please consult the project documentation.

Modules for other messaging brokers

Source and target JMS resources (destination and connection factories) are looked up using JNDI.
If either the source or the target resources are managed by a messaging server other than WildFly, the required client classes must be bundled in a module. The name of the module must then be declared when the JMS Bridge is configured.

Using a JMS bridge with any other messaging provider requires creating a module containing the provider's JAR.

Let's suppose we want to use a hypothetical messaging provider named AcmeMQ. We want to bridge messages coming from a source AcmeMQ destination to a target destination on the local WildFly messaging server. To look up AcmeMQ resources from JNDI, two JARs are required: acmemq-1.2.3.jar and mylogapi-0.0.1.jar (note that these JARs do not exist; they are just for the purpose of the example). We must not include a JMS JAR since it will be provided directly by a WildFly module.

To use these resources in a JMS bridge, we must bundle them in a WildFly module:

In JBOSS_HOME/modules, we create the layout:

modules/
`-- org
    `-- acmemq
        `-- main
            |-- acmemq-1.2.3.jar
            |-- mylogapi-0.0.1.jar
            `-- module.xml

We define the module in module.xml:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="org.acmemq">
    <properties>
        <property name="jboss.api" value="private"/>
    </properties>


    <resources>
        <!-- insert resources required to connect to the source or target   -->
        <!-- messaging brokers if it is not another WildFly instance        -->
        <resource-root path="acmemq-1.2.3.jar" />
        <resource-root path="mylogapi-0.0.1.jar" />
    </resources>


    <dependencies>
       <!-- add the dependencies required by JMS Bridge code                -->
       <module name="javax.api" />
       <module name="javax.jms.api" />
       <module name="javax.transaction.api"/>
       <module name="org.jboss.remote-naming"/>
       <!-- we depend on org.hornetq module since we will send messages to  -->
       <!-- the HornetQ server embedded in the local WildFly instance       -->
       <module name="org.hornetq" />
    </dependencies>
</module>

Configuration

A JMS bridge is defined inside a jms-bridge section of the messaging subsystem in the XML configuration files.

<subsystem xmlns="urn:jboss:domain:messaging:2.0">
   <jms-bridge name="myBridge" module="org.acmemq">
      <source>
         <connection-factory name="ConnectionFactory"/>
         <destination name="sourceQ"/>
         <user>user1</user>
         <password>pwd1</password>
         <context>
            <property key="java.naming.factory.initial" value="org.acmemq.jndi.AcmeMQInitialContextFactory"/>
            <property key="java.naming.provider.url"    value="tcp://127.0.0.1:9292"/>
         </context>
      </source>
      <target>
         <connection-factory name="/jms/invmTargetCF"/>
         <destination name="/jms/targetQ"/>
      </target>
      <quality-of-service>AT_MOST_ONCE</quality-of-service>
      <failure-retry-interval>500</failure-retry-interval>
      <max-retries>1</max-retries>
      <max-batch-size>500</max-batch-size>
      <max-batch-time>500</max-batch-time>
      <add-messageID-in-header>true</add-messageID-in-header>
   </jms-bridge>
</subsystem>

The source and target sections contain the name of the JMS resource (connection-factory and destination) that will be looked up in JNDI.
They optionally define the user and password credentials. If set, they will be passed as arguments when creating the JMS connection from the looked-up ConnectionFactory.
It is also possible to define JNDI context properties in the context section. If the context section is absent, the JMS resources will be looked up in the local WildFly instance (as is the case for the target section in the example above).

Management commands

A JMS Bridge can also be managed using the WildFly command line interface:

[standalone@localhost:9990 /] /subsystem=messaging/jms-bridge=myBridge/:add(module="org.acmemq",      \
      source-destination="sourceQ",                                                                   \
      source-connection-factory="ConnectionFactory",                                                  \
      source-user="user1",                                                                            \
      source-password="pwd1",                                                                         \
      source-context={"java.naming.factory.initial" => "org.acmemq.jndi.AcmeMQInitialContextFactory", \
                      "java.naming.provider.url" => "tcp://127.0.0.1:9292"},                          \
      target-destination="/jms/targetQ",                                                              \
      target-connection-factory="/jms/invmTargetCF",                                                  \
      quality-of-service=AT_MOST_ONCE,                                                                \
      failure-retry-interval=500,                                                                     \
      max-retries=1,                                                                                  \
      max-batch-size=500,                                                                             \
      max-batch-time=500,                                                                             \
      add-messageID-in-header=true)
{"outcome" => "success"}

You can also see the complete JMS Bridge resource description from the CLI:

[standalone@localhost:9990 /] /subsystem=messaging/jms-bridge=*/:read-resource-description
{
    "outcome" => "success",
    "result" => [{
        "address" => [
            ("subsystem" => "messaging"),
            ("jms-bridge" => "*")
        ],
        "outcome" => "success",
        "result" => {
            "description" => "A JMS bridge instance.",
            "attributes" => {
                ...
        }
    }]
}

Component Reference

The messaging subsystem is provided by the HornetQ project. For a detailed description of the available configuration properties, please consult the project documentation.

Web services

Domain management

JBossWS components are provided to the application server through the webservices subsystem.  JBossWS components handle the processing of WS endpoints.  The subsystem supports the configuration of published endpoint addresses and endpoint handler chains.  A default webservices subsystem is provided in the server's domain and standalone configuration files.

Structure of the webservices subsystem

Published endpoint address

JBossWS supports the rewriting of the <soap:address> element of endpoints published in WSDL contracts.  This feature is useful for controlling the server address that is advertised to clients for each endpoint.

The following elements are available and can be modified (all are optional and require server restart upon modification):

modify-wsdl-address (boolean)

This boolean enables and disables the address rewrite functionality.

When modify-wsdl-address is set to true and the content of <soap:address> is a valid URL, JBossWS will rewrite the URL using the values of wsdl-host and wsdl-port or wsdl-secure-port.

When modify-wsdl-address is set to false and the content of <soap:address> is a valid URL, JBossWS will not rewrite the URL.  The <soap:address> URL will be used.

When the content of <soap:address> is not a valid URL, JBossWS will rewrite it no matter what the setting of modify-wsdl-address.

If modify-wsdl-address is set to true and wsdl-host is not defined or explicitly set to 'jbossws.undefined.host', the content of the <soap:address> URL is used; JBossWS uses the requester's host when rewriting the <soap:address>.

When modify-wsdl-address is not defined JBossWS uses a default value of true.

wsdl-host (string)

The hostname / IP address to be used for rewriting <soap:address>.
If wsdl-host is set to jbossws.undefined.host, JBossWS uses the requester's host when rewriting the <soap:address>.
When wsdl-host is not defined JBossWS uses a default value of 'jbossws.undefined.host'.

wsdl-port (int)

Set this property to explicitly define the HTTP port that will be used for rewriting the SOAP address.
Otherwise the HTTP port will be identified by querying the list of installed HTTP connectors.

wsdl-secure-port (int)

Set this property to explicitly define the HTTPS port that will be used for rewriting the SOAP address.
Otherwise the HTTPS port will be identified by querying the list of installed HTTPS connectors.

Predefined endpoint configurations

JBossWS enables extra setup configuration data to be predefined and associated with an endpoint implementation.  Predefined endpoint configurations can be used for JAX-WS client and JAX-WS endpoint setup.  Endpoint configurations can include JAX-WS handlers and key/value properties declarations.  This feature provides a convenient way to add handlers to WS endpoints and to set key/value properties that control JBossWS and Apache CXF internals (see Apache CXF configuration).

The webservices subsystem provides a schema to support the definition of named sets of endpoint configuration data.  The org.jboss.ws.api.annotation.EndpointConfig annotation is provided to map a named configuration to the endpoint implementation.

There is no limit to the number of endpoint configurations that can be defined within the webservices subsystem.  Each endpoint configuration must have a name that is unique within the webservices subsystem.  Endpoint configurations defined in the webservices subsystem are available for reference by name through the annotation to any endpoint in a deployed application.

WildFly ships with two predefined endpoint configurations.  Standard-Endpoint-Config is the default configuration.  Recording-Endpoint-Config is an example of custom endpoint configuration and includes a recording handler.

[standalone@localhost:9999 /] /subsystem=webservices:read-resource
{
    "outcome" => "success",
    "result" => {
        "endpoint" => {},
        "modify-wsdl-address" => true,
        "wsdl-host" => expression "${jboss.bind.address:127.0.0.1}",
        "endpoint-config" => {
            "Standard-Endpoint-Config" => undefined,
            "Recording-Endpoint-Config" => undefined
        }
    }
}

The Standard-Endpoint-Config is a special endpoint configuration.  It is used for any endpoint that does not have an explicitly assigned endpoint configuration.
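
To tie an endpoint to a non-default configuration, the annotation mentioned earlier is placed on the endpoint implementation class; a minimal sketch (the endpoint class is hypothetical):

import javax.jws.WebService;
import org.jboss.ws.api.annotation.EndpointConfig;

// explicitly bound to the predefined Recording-Endpoint-Config instead of
// the implicit Standard-Endpoint-Config
@WebService
@EndpointConfig(configName = "Recording-Endpoint-Config")
public class EchoService {

    public String echo(String input) {
        return input;
    }
}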

Endpoint configs

Endpoint configs are defined using the endpoint-config element.  Each endpoint configuration may include properties and handlers that are applied to the endpoints associated with the configuration.

[standalone@localhost:9999 /] /subsystem=webservices/endpoint-config=Recording-Endpoint-Config:read-resource
{
    "outcome" => "success",
    "result" => {
        "post-handler-chain" => undefined,
        "property" => undefined,
        "pre-handler-chain" => {"recording-handlers" => undefined}
    }
}

A new endpoint configuration can be added as follows:

[standalone@localhost:9999 /] /subsystem=webservices/endpoint-config=My-Endpoint-Config:add
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-restart" => true,
        "process-state" => "restart-required"
    }
}

Handler chains

Each endpoint configuration may be associated with zero or more PRE and POST handler chains.  Each handler chain may include JAX-WS handlers.  For outbound messages the PRE handler chains are executed before any handler that is attached to the endpoint using the standard means, such as with the annotation @HandlerChain, and POST handler chains are executed after those objects have executed.  For inbound messages the POST handler chains are executed before any handler that is attached to the endpoint using the standard means and the PRE handler chains are executed after those objects have executed.

* Server inbound messages
Client --> ... --> POST HANDLER --> ENDPOINT HANDLERS --> PRE HANDLERS --> Endpoint

* Server outbound messages
Endpoint --> PRE HANDLER --> ENDPOINT HANDLERS --> POST HANDLERS --> ... --> Client

The protocol-binding attribute must be used to set the protocols for which the chain will be triggered.

[standalone@localhost:9999 /] /subsystem=webservices/endpoint-config=Recording-Endpoint-Config/pre-handler-chain=recording-handlers:read-resource
{
    "outcome" => "success",
    "result" => {
        "protocol-bindings" => "##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM",
        "handler" => {"RecordingHandler" => undefined}
    },
    "response-headers" => {"process-state" => "restart-required"}
}

A new handler chain can be added as follows:

[standalone@localhost:9999 /] /subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-handlers:add(protocol-bindings="##SOAP11_HTTP")
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-restart" => true,
        "process-state" => "restart-required"
    }
}
[standalone@localhost:9999 /] /subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-handlers:read-resource
{
    "outcome" => "success",
    "result" => {
        "handler" => undefined,
        "protocol-bindings" => "##SOAP11_HTTP"
    },
    "response-headers" => {"process-state" => "restart-required"}
}

Handlers

JAX-WS handlers can be added to handler chains:

[standalone@localhost:9999 /] /subsystem=webservices/endpoint-config=Recording-Endpoint-Config/pre-handler-chain=recording-handlers/handler=RecordingHandler:read-resource
{
    "outcome" => "success",
    "result" => {"class" => "org.jboss.ws.common.invocation.RecordingServerHandler"},
    "response-headers" => {"process-state" => "restart-required"}
}
[standalone@localhost:9999 /] /subsystem=webservices/endpoint-config=My-Endpoint-Config/post-handler-chain=my-handlers/handler=foo-handler:add(class="org.jboss.ws.common.invocation.RecordingServerHandler")
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-restart" => true,
        "process-state" => "restart-required"
    }
}

Endpoint-config handler classloading

The class attribute is used to provide the fully qualified class name of the handler.  At deploy time, an instance of the class is created for each referencing deployment.  For class creation to succeed, either the deployment classloader or the classloader of the org.jboss.as.webservices.server.integration module must be able to load the handler class.
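
For reference, a handler referenced from an endpoint-config is an ordinary JAX-WS handler class; a minimal sketch (class name hypothetical) whose fully qualified name would be used as the class attribute:

import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

public class LoggingHandler implements SOAPHandler<SOAPMessageContext> {

    @Override
    public boolean handleMessage(SOAPMessageContext context) {
        Boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        System.out.println("SOAP message, outbound=" + outbound);
        return true; // continue with the next handler in the chain
    }

    @Override
    public boolean handleFault(SOAPMessageContext context) {
        return true;
    }

    @Override
    public void close(MessageContext context) {
        // no resources to release in this example
    }

    @Override
    public Set<QName> getHeaders() {
        return null; // this handler processes no specific SOAP headers
    }
}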

Runtime information

Each web service endpoint is exposed through the deployment that provides the endpoint implementation. Each endpoint can be queried as a deployment resource. For further information please consult the chapter "Application Deployment". Each web service endpoint specifies a web context and a WSDL URL:

[standalone@localhost:9999 /] /deployment="*"/subsystem=webservices/endpoint="*":read-resource
{
   "outcome" => "success",
   "result" => [{
       "address" => [
           ("deployment" => "jaxws-samples-handlerchain.war"),
           ("subsystem" => "webservices"),
           ("endpoint" => "jaxws-samples-handlerchain:TestService")
       ],
       "outcome" => "success",
       "result" => {
           "class" => "org.jboss.test.ws.jaxws.samples.handlerchain.EndpointImpl",
           "context" => "jaxws-samples-handlerchain",
           "name" => "TestService",
           "type" => "JAXWS_JSE",
           "wsdl-url" => "http://localhost:8080/jaxws-samples-handlerchain?wsdl"
       }
   }]
}

Component Reference

The web service subsystem is provided by the JBossWS project. For a detailed description of the available configuration properties, please consult the project documentation.

Logging

Overview

The overall server logging configuration is represented by the logging subsystem. It consists of four notable parts: handler configurations, loggers (also known as log categories), the root logger declaration, and logging profiles. Each logger references a handler (or set of handlers). Each handler declares the log format and output:

<subsystem xmlns="urn:jboss:domain:logging:1.0">
   <console-handler name="CONSOLE" autoflush="true">
       <level name="DEBUG"/>
       <formatter>
           <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
       </formatter>
   </console-handler>
   <periodic-rotating-file-handler name="FILE" autoflush="true">
       <formatter>
           <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>
       </formatter>
       <file relative-to="jboss.server.log.dir" path="server.log"/>
       <suffix value=".yyyy-MM-dd"/>
   </periodic-rotating-file-handler>
   <logger category="com.arjuna">
       <level name="WARN"/>
   </logger>
   [...]
   <root-logger>
       <level name="DEBUG"/>
       <handlers>
           <handler name="CONSOLE"/>
           <handler name="FILE"/>
       </handlers>
   </root-logger>
</subsystem>

Attributes

The root resource contains two notable attributes: add-logging-api-dependencies and use-deployment-logging-config.

add-logging-api-dependencies

The add-logging-api-dependencies attribute controls whether or not the container adds the implicit logging API dependencies to your deployments. If set to true, the default, all the implicit logging API dependencies are added. If set to false, the dependencies are not added to your deployments.

use-deployment-logging-config

The use-deployment-logging-config attribute controls whether or not your deployment is scanned for per-deployment logging. If set to true, the default, per-deployment logging is enabled. Set to false to disable this feature.

Per-deployment Logging

Per-deployment logging allows you to add a logging configuration file to your deployment and have the logging for that deployment configured according to the configuration file. In an EAR the configuration should be in the META-INF directory. In a WAR or JAR deployment the configuration file can be in either the META-INF or WEB-INF/classes directories.

The following configuration files are allowed:

  • logging.properties

  • jboss-logging.properties

  • log4j.properties

  • log4j.xml

  • jboss-log4j.xml

You can also disable this functionality by changing the use-deployment-logging-config attribute to false.
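
A minimal per-deployment logging.properties, in the jboss-logmanager property syntax, might look like the following sketch (the category com.example and the handler name CONSOLE are illustrative):

# Root logger level and handlers
loggers=com.example
logger.level=INFO
logger.handlers=CONSOLE
# Per-category level (illustrative category)
logger.com.example.level=DEBUG

handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.level=ALL
handler.CONSOLE.formatter=PATTERN

formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.PATTERN.properties=pattern
formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n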

Logging Profiles

Logging profiles are like additional logging subsystems. Each logging profile consists of three of the four notable parts listed above: handler configurations, loggers and the root logger declaration.

You can assign a logging profile to a deployment via the deployment's manifest. Add a Logging-Profile entry to the MANIFEST.MF file whose value is the logging profile's id. For example, for a logging profile defined at /subsystem=logging/logging-profile=ejbs, the MANIFEST.MF would look like:

Manifest-Version: 1.0
Logging-Profile: ejbs

A logging profile can be assigned to any number of deployments. Using a logging profile also allows for runtime changes to the configuration. This is an advantage over the per-deployment logging configuration, as a redeploy is not required for logging changes to take effect.
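
A profile like the ejbs one above could be defined via the CLI along the following lines (a sketch; the handler name, log file and levels are illustrative):

[standalone@localhost:9990 /] /subsystem=logging/logging-profile=ejbs:add
{"outcome" => "success"}
[standalone@localhost:9990 /] /subsystem=logging/logging-profile=ejbs/file-handler=EJB_FILE:add(file={relative-to=jboss.server.log.dir, path=ejb.log})
{"outcome" => "success"}
[standalone@localhost:9990 /] /subsystem=logging/logging-profile=ejbs/root-logger=ROOT:add(level=INFO, handlers=[EJB_FILE])
{"outcome" => "success"}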

Default Log File Locations

Managed Domain

In a managed domain two types of log files exist: controller logs and server logs. The controller components govern the domain as a whole. It is their responsibility to start/stop server instances and execute managed operations throughout the domain. Server logs contain the logging information for a particular server instance. They are co-located with the host the server is running on.

For the sake of simplicity we look at the default setup for a managed domain. In this case, both the domain controller components and the servers are located on the same host:

Process              Log File
Host Controller      ./domain/log/host-controller.log
Process Controller   ./domain/log/process-controller.log
"Server One"         ./domain/servers/server-one/log/server.log
"Server Two"         ./domain/servers/server-two/log/server.log
"Server Three"       ./domain/servers/server-three/log/server.log

Standalone Server

The default log files for a standalone server can be found in the log subdirectory of the distribution:

Process   Log File
Server    ./standalone/log/server.log

Filter Expressions

accept
  Expression:  accept
  Description: Accepts all log messages.
  Parameters:  None.
  Example:     accept

deny
  Expression:  deny
  Description: Denies all log messages.
  Parameters:  None.
  Example:     deny

not
  Expression:  not(filterExpression)
  Description: Accepts a filter as an argument and inverts the returned value.
  Parameters:  The expression takes a single filter as its argument.
  Example:     not(match("JBAS"))

all
  Expression:  all(filterExpressions)
  Description: A filter consisting of several filters in a chain. If any filter finds the log message to be unloggable, the message will not be logged and subsequent filters will not be checked.
  Parameters:  The expression takes a comma-delimited list of filters as its argument.
  Example:     all(match("JBAS"), match("WELD"))

any
  Expression:  any(filterExpressions)
  Description: A filter consisting of several filters in a chain. If any filter finds the log message to be loggable, the message will be logged and subsequent filters will not be checked.
  Parameters:  The expression takes a comma-delimited list of filters as its argument.
  Example:     any(match("JBAS"), match("WELD"))

levelChange
  Expression:  levelChange(level)
  Description: A filter which modifies the log record with a new level.
  Parameters:  The expression takes a single string-based level as its argument.
  Example:     levelChange(WARN)

levels
  Expression:  levels(levels)
  Description: A filter which includes log messages with a level that is listed in the list of levels.
  Parameters:  The expression takes a comma-delimited list of string-based levels as its argument.
  Example:     levels(DEBUG, INFO, WARN, ERROR)

levelRange
  Expression:  levelRange([minLevel,maxLevel])
  Description: A filter which logs records that are within the level range.
  Parameters:  The filter expression uses "[" to indicate a minimum inclusive level and "]" to indicate a maximum inclusive level; "(" and ")" respectively indicate exclusive bounds. The first argument for the expression is the minimum level allowed, the second argument is the maximum level allowed.
  Examples:
    • levelRange(DEBUG, ERROR) — the minimum level must be greater than DEBUG and the maximum level must be less than ERROR
    • levelRange[DEBUG, ERROR) — the minimum level must be greater than or equal to DEBUG and the maximum level must be less than ERROR
    • levelRange[INFO, ERROR] — the minimum level must be greater than or equal to INFO and the maximum level must be less than or equal to ERROR

match
  Expression:  match("pattern")
  Description: A regular-expression based filter. The raw unformatted message is matched against the pattern.
  Parameters:  The expression takes a regular expression as its argument.
  Example:     match("JBAS\d+")

substitute
  Expression:  substitute("pattern", "replacement value")
  Description: A filter which replaces the first match of the pattern with the replacement value.
  Parameters:  The first argument for the expression is the pattern; the second argument is the replacement text.
  Example:     substitute("JBAS", "EAP")

substituteAll
  Expression:  substituteAll("pattern", "replacement value")
  Description: A filter which replaces all matches of the pattern with the replacement value.
  Parameters:  The first argument for the expression is the pattern; the second argument is the replacement text.
  Example:     substituteAll("JBAS", "EAP")

List Log Files and Reading Log Files

Log files can be listed and viewed via management operations. The log files allowed to be listed and/or viewed are intentionally limited to files that exist in jboss.server.log.dir and are associated with a known file handler. Known file handler types include file-handler, periodic-rotating-file-handler and size-rotating-file-handler. The operations are valid in both standalone and domain modes.

List Log Files

The list-log-files operation is available on the root logging resource, /subsystem=logging in standalone CLI syntax. The files listed are the only files allowed to be read by the read-log-file operation.

CLI command and output
[standalone@localhost:9990 /] /subsystem=logging:list-log-files
{
    "outcome" => "success",
    "result" => [
        {
            "file-name" => "server.log",
            "file-size" => 10973L,
            "last-modified-date" => "2014-02-14T14:16:49.000-0800"
        },
        {
            "file-name" => "server.log.2014-02-12",
            "file-size" => 27688L,
            "last-modified-date" => "2014-02-12T11:20:35.000-0800"
        },
        {
            "file-name" => "server.log.2014-02-13",
            "file-size" => 359154L,
            "last-modified-date" => "2014-02-13T20:43:49.000-0800"
        }
    ]
}

Read Log File

The read-log-file operation is available on the root logging resource, /subsystem=logging in standalone CLI syntax. Only files available in the list-log-files operation are allowed to be read. This operation has one required parameter and four optional parameters.

Name             Description
name (required)  The name of the log file to be read.
encoding         The encoding in which the file should be read.
lines            The number of lines to read from the file. A value of -1 indicates all lines should be read.
skip             The number of lines to skip before reading.
tail             true to read from the end of the file up, or false to read top down.

CLI command and output
[standalone@localhost:9990 /] /subsystem=logging:read-log-file(name=server.log)
{
    "outcome" => "success",
    "result" => [
        "2014-02-14 14:16:48,781 INFO  [org.jboss.as.server.deployment.scanner] (MSC service thread 1-11) JBAS015012: Started FileSystemDeploymentService for directory /home/jperkins/servers/wildfly-8.0.0.Final/standalone/deployments",
        "2014-02-14 14:16:48,782 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-8) JBAS010400: Bound data source [java:jboss/myDs]",
        "2014-02-14 14:16:48,782 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-15) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]",
        "2014-02-14 14:16:48,786 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-9) JBAS015876: Starting deployment of \"simple-servlet.war\" (runtime-name: \"simple-servlet.war\")",
        "2014-02-14 14:16:48,978 INFO  [org.jboss.ws.common.management] (MSC service thread 1-10) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.2.3.Final",
        "2014-02-14 14:16:49,160 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-16) JBAS017534: Registered web context: /simple-servlet",
        "2014-02-14 14:16:49,189 INFO  [org.jboss.as.server] (Controller Boot Thread) JBAS018559: Deployed \"simple-servlet.war\" (runtime-name : \"simple-servlet.war\")",
        "2014-02-14 14:16:49,224 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management",
        "2014-02-14 14:16:49,224 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990",
        "2014-02-14 14:16:49,225 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.0.0.Final \"WildFly\" started in 1906ms - Started 258 of 312 services (90 services are lazy, passive or on-demand)"
    ]
}
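
The optional parameters can be combined to tail the log; for example, the following reads only the last five lines of server.log (output omitted):

[standalone@localhost:9990 /] /subsystem=logging:read-log-file(name=server.log, lines=5, tail=true)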

FAQ

Why is there a logging.properties file?

You may have noticed that there is a logging.properties file in the configuration directory. This logging configuration is used from the moment the server boots until the logging subsystem kicks in. If the logging subsystem is not included in your configuration, it acts as the logging configuration for the entire server.

The logging.properties file is overwritten at boot and with each change to the logging subsystem, so any changes made directly to the file are not persisted. Changes made to the XML configuration or via management operations are persisted to the logging.properties file and used on the next boot.

JMX

The JMX subsystem registers a service with the Remoting endpoint so that remote access to JMX can be obtained over the exposed Remoting connector.

This is switched on by default in standalone mode and accessible over port 9990. In domain mode it is switched off, so it needs to be enabled; there, the port will be the port of the Remoting connector for the WildFly 8 instance to be monitored.

To use the connector you can access it in the standard way using a service:jmx URL:

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JMXExample {

    public static void main(String[] args) throws Exception {
        //Get a connection to the WildFly 8 MBean server on localhost
        String host = "localhost";
        int port = 9990;  // management HTTP port
        String urlString =
            System.getProperty("jmx.service.url","service:jmx:http-remoting-jmx://" + host + ":" + port);
        JMXServiceURL serviceURL = new JMXServiceURL(urlString);
        JMXConnector jmxConnector = JMXConnectorFactory.connect(serviceURL, null);
        MBeanServerConnection connection = jmxConnector.getMBeanServerConnection();

        //Invoke on the WildFly 8 MBean server
        int count = connection.getMBeanCount();
        System.out.println(count);
        jmxConnector.close();
    }
}

You also need to set your classpath when running the above example. The following script covers Linux:

#!/bin/bash

# specify your WildFly 8 folder
export YOUR_JBOSS_HOME=~/WildFly8

java -classpath $YOUR_JBOSS_HOME/bin/client/jboss-client.jar:./ JMXExample

You can also connect using jconsole. If you do, use the jconsole.sh and jconsole.bat scripts included in the /bin directory of the WildFly 8 distribution, as these set the classpath required to connect over Remoting.
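
For example, on Linux (a sketch, assuming the script forwards its arguments to jconsole):

$JBOSS_HOME/bin/jconsole.sh service:jmx:http-remoting-jmx://localhost:9990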

In addition to the standard JVM MBeans, the WildFly 8 MBean server contains the following MBeans:

jboss.msc:type=container,name=jboss-as
    Exposes management operations on the JBoss Modular Service Container, which is the dependency injection framework at the heart of WildFly 8. It is useful for debugging dependency problems, for example if you are integrating your own subsystems, as it exposes operations to dump all services and their current states.

jboss.naming:type=JNDIView
    Shows what is bound in JNDI.

jboss.modules:type=ModuleLoader,name=*
    This collection of MBeans exposes management operations on the JBoss Modules classloading layer. It is useful for debugging dependency problems arising from missing module dependencies.
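
Building on the connector setup from the example above, these MBeans can be located with a standard JMX query; a minimal sketch (the class name is illustrative):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListModuleLoaders {

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:http-remoting-jmx://localhost:9990");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Match every JBoss Modules ModuleLoader MBean
            ObjectName pattern = new ObjectName("jboss.modules:type=ModuleLoader,name=*");
            for (ObjectName name : connection.queryNames(pattern, null)) {
                System.out.println(name);
            }
        } finally {
            connector.close();
        }
    }
}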

Audit logging

The JMX subsystem supports audit logging for the JMX MBean server it manages. The resource is at /subsystem=jmx/configuration=audit-log and its attributes are similar to the ones mentioned for /core-service=management/access=audit/logger=audit-log in Audit logging.

Attribute      Description
enabled        true to enable logging of the JMX operations.
log-boot       true to log the JMX operations when booting the server, false otherwise.
log-read-only  If true, all operations will be audit logged; if false, only operations that change the model will be logged.

The handlers used to log the management operations are then configured as handler=* children of the logger. These handlers and their formatters are defined in the global /core-service=management/access=audit section mentioned in Audit logging.
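
Put together, switching JMX audit logging on might look like the following sketch (assuming the audit-log configuration resource does not exist yet and that a handler named file is already defined in the global audit section):

[standalone@localhost:9990 /] /subsystem=jmx/configuration=audit-log:add
{"outcome" => "success"}
[standalone@localhost:9990 /] /subsystem=jmx/configuration=audit-log/handler=file:add
{"outcome" => "success"}
[standalone@localhost:9990 /] /subsystem=jmx/configuration=audit-log:write-attribute(name=enabled, value=true)
{"outcome" => "success"}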

JSON Formatter

The same JSON Formatter is used as described in Audit logging. However, the records for MBean Server invocations have slightly different fields from those logged for the core management layer.

2013-08-29 18:26:29 - {
    "type" : "jmx",
    "r/o" : false,
    "booting" : false,
    "version" : "8.0.0.Beta1-SNAPSHOT",
    "user" : "$local",
    "domainUUID" : null,
    "access" : "JMX",
    "remote-address" : "127.0.0.1/127.0.0.1",
    "method" : "invoke",
    "sig" : [
        "javax.management.ObjectName",
        "java.lang.String",
        "[Ljava.lang.Object;",
        "[Ljava.lang.String;"
    ],
    "params" : [
        "java.lang:type=Threading",
        "getThreadInfo",
        "[Ljava.lang.Object;@5e6c33c",
        "[Ljava.lang.String;@4b681c69"
    ]
}

It includes an optional timestamp and then the following information in the JSON record:

Field name      Description
type            This will have the value jmx, meaning the record comes from the jmx subsystem.
r/o             true if the operation has read-only impact on the MBean(s).
booting         true if the operation was executed during the bootup process, false if it was executed once the server is up and running.
version         The version number of the WildFly instance.
user            The username of the authenticated user.
domainUUID      This is not currently populated for JMX operations.
access          This can have one of the following values:
                  • NATIVE - The operation came in through the native management interface, for example the CLI
                  • HTTP - The operation came in through the domain HTTP interface, for example the admin console
                  • JMX - The operation came in through the JMX subsystem. See JMX for how to configure audit logging for JMX.
remote-address  The address of the client executing this operation.
method          The name of the called MBeanServer method.
sig             The signature of the called MBeanServer method.
params          The actual parameters passed in to the MBeanServer method; a simple Object.toString() is called on each parameter.
error           If calling the MBeanServer method resulted in an error, this field is populated with Throwable.getMessage().

Deployment Scanner

The deployment scanner is only used in standalone mode. Its job is to monitor a directory for new files and to deploy those files. It can be found in standalone.xml:

<subsystem xmlns="urn:jboss:domain:deployment-scanner:1.0">
   <deployment-scanner scan-interval="5000"
      relative-to="jboss.server.base.dir" path="deployments" />
</subsystem>

You can define more deployment-scanner entries to scan for deployments from more locations. The configuration shown will scan the JBOSS_HOME/standalone/deployments directory every five seconds. The runtime model is shown below, and uses default values for attributes not specified in the XML:

[standalone@localhost:9999 /] /subsystem=deployment-scanner:read-resource(recursive=true)
{
    "outcome" => "success",
    "result" => {"scanner" => {"default" => {
        "auto-deploy-exploded" => false,
        "auto-deploy-zipped" => true,
        "deployment-timeout" => 60L,
        "name" => "default",
        "path" => "deployments",
        "relative-to" => "jboss.server.base.dir",
        "scan-enabled" => true,
        "scan-interval" => 5000
    }}}
}

The attributes are

name (STRING)
    The name of the scanner. "default" is used if not specified.

path (STRING)
    The actual filesystem path to be scanned. Treated as an absolute path, unless the 'relative-to' attribute is specified, in which case the value is treated as relative to that path.

relative-to (STRING)
    Reference to a filesystem path defined in the "paths" section of the server configuration, or one of the system properties specified on startup. In the example above jboss.server.base.dir resolves to JBOSS_HOME/standalone.

scan-enabled (BOOLEAN)
    If true, scanning is enabled.

scan-interval (INT)
    Periodic interval, in milliseconds, at which the repository should be scanned for changes. A value of less than 1 indicates the repository should only be scanned at initial startup.

auto-deploy-zipped (BOOLEAN)
    Controls whether zipped deployment content should be automatically deployed by the scanner without requiring the user to add a .dodeploy marker file.

auto-deploy-exploded (BOOLEAN)
    Controls whether exploded deployment content should be automatically deployed by the scanner without requiring the user to add a .dodeploy marker file. Setting this to 'true' is not recommended for anything but basic development scenarios, as there is no way to ensure that deployment will not occur in the middle of changes to the content.

deployment-timeout (LONG)
    Timeout, in seconds, a deployment is allowed to execute before being cancelled. The default is 60 seconds.
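
When auto-deploy is disabled, a deployment is triggered by placing a .dodeploy marker file next to the content; for example (a sketch; example.war is illustrative):

# copy the archive into the scanned directory, then signal the scanner
cp example.war $JBOSS_HOME/standalone/deployments/
touch $JBOSS_HOME/standalone/deployments/example.war.dodeploy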

Deployment scanners can be added by modifying standalone.xml before starting up the server, or they can be added and removed at runtime using the CLI:

[standalone@localhost:9999 /] /subsystem=deployment-scanner/scanner=new:add(scan-interval=10000,relative-to="jboss.server.base.dir",path="other-deployments")
{"outcome" => "success"}
[standalone@localhost:9999 /] /subsystem=deployment-scanner/scanner=new:remove
{"outcome" => "success"}

You can also change the attributes at runtime; for example, to turn off scanning you can do:

[standalone@localhost:9999 /] /subsystem=deployment-scanner/scanner=default:write-attribute(name="scan-enabled",value=false)
{"outcome" => "success"}
[standalone@localhost:9999 /] /subsystem=deployment-scanner:read-resource(recursive=true)
{
    "outcome" => "success",
    "result" => {"scanner" => {"default" => {
        "auto-deploy-exploded" => false,
        "auto-deploy-zipped" => true,
        "deployment-timeout" => 60L,
        "name" => "default",
        "path" => "deployments",
        "relative-to" => "jboss.server.base.dir",
        "scan-enabled" => false,
        "scan-interval" => 5000
    }}}
}

Threads subsystem configuration

Defining thread pools

Subsystems can reference thread pools defined by the threads subsystem. Externalizing thread pools in this way has the additional advantage that they can be managed via the native WildFly management mechanisms, and it allows you to share thread pools across subsystems. For example:

<subsystem xmlns="urn:jboss:domain:threads:1.0">
    <thread-factory name="infinispan-factory" priority="1"/>
    <bounded-queue-thread-pool name="infinispan-transport">
        <core-threads count="1"/>
        <queue-length count="100000"/>
        <max-threads count="25"/>
        <thread-factory name="infinispan-factory"/>
    </bounded-queue-thread-pool>
    <bounded-queue-thread-pool name="infinispan-listener">
        <core-threads count="1"/>
        <queue-length count="100000"/>
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
    </bounded-queue-thread-pool>
    <scheduled-thread-pool name="infinispan-eviction">
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
    </scheduled-thread-pool>
    <scheduled-thread-pool name="infinispan-repl-queue">
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
    </scheduled-thread-pool>
</subsystem>

Infinispan configuration:

<cache-container name="web" default-cache="repl" listener-executor="infinispan-listener" eviction-executor="infinispan-eviction" replication-queue-executor="infinispan-repl-queue">
    <transport executor="infinispan-transport"/>
    <replicated-cache name="repl" mode="ASYNC" batching="true">
        <locking isolation="REPEATABLE_READ"/>
        <file-store/>
    </replicated-cache>
</cache-container>

Operation request examples



/profile=production/subsystem=threads/bounded-queue-thread-pool=pool1:write-core-threads(count=0, per-cpu=20)
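
The pool's configuration can be read back with the standard read-resource operation against the same (hypothetical) pool1, for example:

/profile=production/subsystem=threads/bounded-queue-thread-pool=pool1:read-resource(include-runtime=true)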

Simple configuration subsystems

The following subsystems currently have no configuration beyond their root element in the configuration:

<subsystem xmlns="urn:jboss:domain:ejb3:1.0"/>
<subsystem xmlns="urn:jboss:domain:jaxrs:1.0"/>
<subsystem xmlns="urn:jboss:domain:remoting:1.0"/>
<subsystem xmlns="urn:jboss:domain:sar:1.0"/>
<subsystem xmlns="urn:jboss:domain:threads:1.0"/>
<subsystem xmlns="urn:jboss:domain:weld:1.0"/>

The presence of each of these turns on a piece of functionality:

Name      Description
EJB3      Enables the deployment and functionality of EJB 3.1 applications.
JAXRS     Enables the deployment and functionality of JAX-RS applications. This is implemented by the RESTEasy project.
Remoting  Turns on the remoting subsystem, which is used for management communication and will be what underlies remote JNDI lookups and remote EJB calls in a future release.
Sar       Enables the deployment of .SAR archives containing MBean services, as supported by previous versions of WildFly.
Threads   This subsystem is being deprecated and will not be part of the next release.
Weld      Enables the deployment and functionality of CDI applications.

JBoss.org Content Archive (Read Only), exported from JBoss Community Documentation Editor at 2020-03-13 13:47:02 UTC, last content change 2015-02-03 14:45:38 UTC.