Remove Unsupported Subsystems and Extensions
The following JBoss EAP 6 subsystems, and related extensions, are not supported by JBoss EAP 7:
Subsystem Name | Subsystem Configuration Namespace | Extension Module
cmp | urn:jboss:domain:cmp:* | org.jboss.as.cmp
configadmin | urn:jboss:domain:configadmin:* | org.jboss.as.configadmin
jaxr | urn:jboss:domain:jaxr:* | org.jboss.as.jaxr
osgi | urn:jboss:domain:osgi:* | org.jboss.as.osgi
threads | urn:jboss:domain:threads:* | org.jboss.as.threads
Any subsystem configuration not supported by JBoss EAP 7, together with its related extension, must be removed from a migrated server configuration; otherwise JBoss EAP 7 will fail to start. This removal must be done offline, by editing the migrated server configuration XML. As an example, the following JBoss EAP 6 standalone server configuration XML includes the unsupported subsystems cmp, jaxr and threads, and the related extensions org.jboss.as.cmp, org.jboss.as.jaxr and org.jboss.as.threads:
<?xml version='1.0' encoding='UTF-8'?>
<server xmlns="urn:jboss:domain:1.7">
<extensions>
<extension module="org.jboss.as.cmp"/>
<extension module="org.jboss.as.connector"/>
<extension module="org.jboss.as.jaxr"/>
<extension module="org.jboss.as.naming"/>
<extension module="org.jboss.as.threads"/>
</extensions>
<!-- ... -->
<profile>
<subsystem xmlns="urn:jboss:domain:cmp:1.1"/>
<subsystem xmlns="urn:jboss:domain:datasources:1.2">
<datasources>
<datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true">
<connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url>
<driver>h2</driver>
<security>
<user-name>sa</user-name>
<password>sa</password>
</security>
</datasource>
<drivers>
<driver name="h2" module="com.h2database.h2">
<xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
</driver>
</drivers>
</datasources>
</subsystem>
<subsystem xmlns="urn:jboss:domain:jaxr:1.1">
<connection-factory jndi-name="java:jboss/jaxr/ConnectionFactory"/>
</subsystem>
<subsystem xmlns="urn:jboss:domain:naming:1.4">
<remote-naming/>
</subsystem>
<subsystem xmlns="urn:jboss:domain:threads:1.1"/>
</profile>
<!-- ... -->
</server>
After removing the unsupported subsystems and extensions, the same configuration migrated to JBoss EAP 7 should look like the following:
<?xml version='1.0' encoding='UTF-8'?>
<server xmlns="urn:jboss:domain:1.7">
<extensions>
<extension module="org.jboss.as.connector"/>
<extension module="org.jboss.as.naming"/>
</extensions>
<!-- ... -->
<profile>
<subsystem xmlns="urn:jboss:domain:datasources:1.2">
<datasources>
<datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true">
<connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url>
<driver>h2</driver>
<security>
<user-name>sa</user-name>
<password>sa</password>
</security>
</datasource>
<drivers>
<driver name="h2" module="com.h2database.h2">
<xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
</driver>
</drivers>
</datasources>
</subsystem>
<subsystem xmlns="urn:jboss:domain:naming:1.4">
<remote-naming/>
</subsystem>
</profile>
<!-- ... -->
</server>
Migrate or Remove Deployments and Deployment Overlays
Both JBoss EAP 6 and 7 use the same content repository design to store managed deployments and related deployment overlays. When migrating a server configuration that contains deployments or overlays, these must either be removed from the configuration offline, or, if they are to be kept, the referenced content must be migrated to JBoss EAP 7 before the server is started.
The migration of deployments and overlays merely consists of finding the referenced content files in JBoss EAP 6 and copying them to JBoss EAP 7, keeping the same file paths relative to the specific content repository directory.
Regarding the content repositories, each server has one to store content referenced by standalone server configurations, by default located in the server's standalone/data/content directory, and another to store content referenced by a managed domain's host configurations, by default located in the server's domain/data/content directory.
In the server configuration, the value of a sha1 attribute locates the content file in the repository where it is stored. The content file is named content and is placed in a directory named after the sha1 value stripped of its first two characters, which in turn is placed in a directory named after those first two characters of the sha1 value.
For instance, let’s consider the following JBoss EAP 6 standalone server configuration, which specifies one managed deployment, and one managed deployment overlay:
<server xmlns="urn:jboss:domain:4.1">
...
<deployments>
<deployment name="cmtool-helloworld1.war" runtime-name="cmtool-helloworld1.war">
<content sha1="ea69fbffdf08b320c09ad1acc7d31deba5a7787b"/>
</deployment>
</deployments>
<deployment-overlays>
<deployment-overlay name="overlay1">
<content path="/WEB-INF/classes/org/jboss/as/quickstarts/helloworld/HelloWorldServlet1.class" content="23b62a37ba8a4830622bfcdb960280577cc6796e"/>
<deployment name="cmtool-helloworld1.war"/>
</deployment-overlay>
</deployment-overlays>
</server>
The file path for the deployment content would be standalone/data/content/ea/69fbffdf08b320c09ad1acc7d31deba5a7787b/content, and the file path for the overlay content would be standalone/data/content/23/b62a37ba8a4830622bfcdb960280577cc6796e/content. When migrating the deployment and the overlay, these files should be copied to the same paths relative to the JBoss EAP 7 root directory.
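The copy itself can be scripted. The following is a minimal sketch for the example above, assuming a Unix shell and hypothetical EAP6_HOME and EAP7_HOME installation paths (both variables are assumptions, not part of the original configuration):
# Hypothetical installation roots; adjust to the actual environments.
EAP6_HOME=/opt/jboss-eap-6.4
EAP7_HOME=/opt/jboss-eap-7.0
# Deployment content (sha1 ea69fbff...): copy the 'content' file, keeping the same relative path.
mkdir -p "$EAP7_HOME/standalone/data/content/ea/69fbffdf08b320c09ad1acc7d31deba5a7787b"
cp "$EAP6_HOME/standalone/data/content/ea/69fbffdf08b320c09ad1acc7d31deba5a7787b/content" \
   "$EAP7_HOME/standalone/data/content/ea/69fbffdf08b320c09ad1acc7d31deba5a7787b/"
# Deployment overlay content (sha1 23b62a37...): same approach.
mkdir -p "$EAP7_HOME/standalone/data/content/23/b62a37ba8a4830622bfcdb960280577cc6796e"
cp "$EAP6_HOME/standalone/data/content/23/b62a37ba8a4830622bfcdb960280577cc6796e/content" \
   "$EAP7_HOME/standalone/data/content/23/b62a37ba8a4830622bfcdb960280577cc6796e/"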
Migrate Legacy Subsystem Configurations
The following JBoss EAP 6 legacy subsystems are deprecated in JBoss EAP 7, and their subsystem configurations should be migrated to JBoss EAP 7 by converting them to configurations of the JBoss EAP 7 subsystems that provide similar functionality.
JacORB Subsystem
WildFly ORB support is provided by the JDK itself, instead of relying on JacORB. A subsystem configuration migration is required.
h7. JacORB Subsystem Configuration
The extension module org.jboss.as.jacorb is replaced by the module org.wildfly.iiop-openjdk, while the subsystem configuration namespace urn:jboss:domain:jacorb:2.0 is replaced by urn:jboss:domain:iiop-openjdk:1.0.
The XML configuration of the new subsystem accepts only a subset of the legacy elements/attributes. Consider the following example of the JacORB subsystem configuration, containing all valid elements and attributes:
<subsystem xmlns="urn:jboss:domain:jacorb:1.3">
<orb name="JBoss" print-version="off" use-imr="off" use-bom="off" cache-typecodes="off"
cache-poa-names="off" giop-minor-version ="2" socket-binding="jacorb" ssl-socket-binding="jacorb-ssl">
<connection retries="5" retry-interval="500" client-timeout="0" server-timeout="0"
max-server-connections="500" max-managed-buf-size="24" outbuf-size="2048"
outbuf-cache-timeout="-1"/>
<initializers security="off" transactions="spec"/>
</orb>
<poa monitoring="off" queue-wait="on" queue-min="10" queue-max="100">
<request-processors pool-size="10" max-threads="32"/>
</poa>
<naming root-context="JBoss/Naming/root" export-corbaloc="on"/>
<interop sun="on" comet="off" iona="off" chunk-custom-rmi-valuetypes="on"
lax-boolean-encoding="off" indirection-encoding-disable="off" strict-check-on-tc-creation="off"/>
<security support-ssl="off" add-component-via-interceptor="on" client-supports="MutualAuth"
client-requires="None" server-supports="MutualAuth" server-requires="None"/>
<properties>
<property name="some_property" value="some_value"/>
</properties>
</subsystem>
Properties that are not supported and have to be removed:
- <orb/>: client-timeout, max-managed-buf-size, max-server-connections, outbuf-cache-timeout, outbuf-size, connection retries, retry-interval, name, server-timeout
- <poa/>: queue-min, queue-max, pool-size, max-threads
On-off properties: these have to either be removed or set to off:
- <orb/>: cache-poa-names, cache-typecodes, print-version, use-bom, use-imr
- <interop/>: all except sun
- <poa/>: monitoring, queue-wait
If the legacy subsystem configuration is present, it may be migrated to the new subsystem by invoking its migrate operation, using the CLI management client:
/subsystem=jacorb:migrate
There is also a describe-migration operation that returns a list of all the management operations that are performed to migrate from the legacy subsystem to the new one:
/subsystem=jacorb:describe-migration
Both migrate and describe-migration will also display a list of migration-warnings if there are resources or attributes that cannot be migrated automatically. The following is a list of these warnings:
- Properties X cannot be emulated using OpenJDK ORB and are not supported
This warning means that the mentioned properties are not supported and won't be included in the new subsystem configuration. As a result, the administrator must be aware that any behaviour implied by those properties will be missing, and has to check whether the subsystem is able to operate correctly without that behaviour on the new server. Unsupported properties: cache-poa-names, cache-typecodes, chunk-custom-rmi-valuetypes, client-timeout, comet, indirection-encoding-disable, iona, lax-boolean-encoding, max-managed-buf-size, max-server-connections, max-threads, outbuf-cache-timeout, outbuf-size, queue-max, queue-min, poa-monitoring, print-version, retries, retry-interval, queue-wait, server-timeout, strict-check-on-tc-creation, use-bom, use-imr.
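The migrate operation is expected to require the server to be running in admin-only mode, as documented for the messaging subsystem below. A minimal sketch of the full sequence for a standalone server (the first two commands are run from the OS shell against the JBoss EAP 7 installation, the rest inside the CLI session; the final reload restarts the server with the migrated configuration):
./bin/standalone.sh --admin-only
./bin/jboss-cli.sh --connect
/subsystem=jacorb:describe-migration
/subsystem=jacorb:migrate
reload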
JBoss Web Subsystem
JBoss Web is replaced by Undertow in WildFly, which means that the legacy subsystem configuration should be migrated to WildFly's Undertow subsystem configuration.
h7. JBoss Web Subsystem Configuration
The extension module org.jboss.as.web is replaced by the module org.wildfly.extension.undertow, while the subsystem configuration namespace urn:jboss:domain:web:* is replaced by urn:jboss:domain:undertow:3.0.
The XML configuration of the new subsystem is quite different. Consider the following example of the JBoss Web subsystem configuration, containing all valid elements and attributes:
<?xml version="1.0" encoding="UTF-8"?>
<subsystem xmlns="urn:jboss:domain:web:2.2" default-virtual-server="default-host" native="true" default-session-timeout="30" instance-id="foo">
<configuration>
<static-resources listings="true"
sendfile="1000"
file-encoding="utf-8"
read-only="true"
webdav="false"
secret="secret"
max-depth="5"
disabled="false"
/>
<jsp-configuration development="true"
disabled="false"
keep-generated="true"
trim-spaces="true"
tag-pooling="true"
mapped-file="true"
check-interval="20"
modification-test-interval="1000"
recompile-on-fail="true"
smap="true"
dump-smap="true"
generate-strings-as-char-arrays="true"
error-on-use-bean-invalid-class-attribute="true"
scratch-dir="/some/dir"
source-vm="1.7"
target-vm="1.7"
java-encoding="utf-8"
x-powered-by="true"
display-source-fragment="true" />
<mime-mapping name="ogx" value="application/ogg" />
<welcome-file>titi</welcome-file>
</configuration>
<connector name="http" scheme="http"
protocol="HTTP/1.1"
socket-binding="http"
enabled="true"
enable-lookups="false"
proxy-binding="reverse-proxy"
max-post-size="2097153"
max-save-post-size="512"
redirect-binding="https"
max-connections="300"
secure="false"
executor="some-executor"
/>
<connector name="https" scheme="https" protocol="HTTP/1.1" secure="true" socket-binding="https">
<ssl certificate-key-file="${file-base}/server.keystore"
ca-certificate-file="${file-base}/jsse.keystore"
key-alias="test"
password="changeit"
cipher-suite="SSL_RSA_WITH_3DES_EDE_CBC_SHA"
protocol="SSLv3"
verify-client="true"
verify-depth="3"
certificate-file="certificate-file.ext"
ca-revocation-url="https://example.org/some/url"
ca-certificate-password="changeit"
keystore-type="JKS"
truststore-type="JKS"
session-cache-size="512"
session-timeout="3000"
ssl-protocol="RFC4279"
/>
</connector>
<connector name="http-vs" scheme="http" protocol="HTTP/1.1" socket-binding="http" >
<virtual-server name="vs1" />
<virtual-server name="vs2" />
</connector>
<virtual-server name="default-host" enable-welcome-root="true" default-web-module="foo.war">
<alias name="localhost" />
<alias name="example.com" />
<access-log resolve-hosts="true" extended="true" pattern="extended" prefix="prefix" rotate="true" >
<directory relative-to="jboss.server.base.dir" path="toto" />
</access-log>
<rewrite name="myrewrite" pattern="^/helloworld(.*)" substitution="/helloworld/test.jsp" flags="L" />
<rewrite name="with-conditions" pattern="^/helloworld(.*)" substitution="/helloworld/test.jsp" flags="L" >
<condition name="https" pattern="off" test="%{HTTPS}" flags="NC"/>
<condition name="user" test="%{USER}" pattern="toto" flags="NC"/>
<condition name="no-flags" test="%{USER}" pattern="toto"/>
</rewrite>
<sso reauthenticate="true" domain="myDomain" cache-name="myCache"
cache-container="cache-container" http-only="true"/>
</virtual-server>
<virtual-server name="vs1" />
<virtual-server name="vs2" />
<valve name="myvalve" module="org.jboss.some.module" class-name="org.jboss.some.class" enabled="true">
<param param-name="param-name" param-value="some-value"/>
</valve>
<valve name="accessLog" module="org.jboss.as.web" class-name="org.apache.catalina.valves.AccessLogValve">
<param param-name="prefix" param-value="myapp_access_log." />
<param param-name="suffix" param-value=".log" />
<param param-name="rotatable" param-value="true" />
<param param-name="fileDateFormat" param-value="yyyy-MM-dd" />
<param param-name="pattern" param-value="common" />
<param param-name="directory" param-value="${jboss.server.log.dir}" />
<param param-name="resolveHosts" param-value="false"/>
<param param-name="conditionIf" param-value="log-enabled"/>
</valve>
<valve name="request-dumper" module="org.jboss.as.web" class-name="org.apache.catalina.valves.RequestDumperValve"/>
<valve name="remote-addr" module="org.jboss.as.web" class-name="org.apache.catalina.valves.RemoteAddrValve">
<param param-name="allow" param-value="127.0.0.1,127.0.0.2" />
<param param-name="deny" param-value="192.168.1.20" />
</valve>
<valve name="crawler" class-name="org.apache.catalina.valves.CrawlerSessionManagerValve" module="org.jboss.as.web" >
<param param-name="sessionInactiveInterval" param-value="1" />
<param param-name="crawlerUserAgents" param-value="Google" />
</valve>
<valve name="proxy" class-name="org.apache.catalina.valves.RemoteIpValve" module="org.jboss.as.web" >
<param param-name="internalProxies" param-value="192\.168\.0\.10|192\.168\.0\.11" />
<param param-name="remoteIpHeader" param-value="x-forwarded-for" />
<param param-name="proxiesHeader" param-value="x-forwarded-by" />
<param param-name="trustedProxies" param-value="proxy1|proxy2" />
</valve>
</subsystem>
FIXME compare with Undertow, list unsupported features
It's possible to migrate the legacy subsystem configuration, and related persisted data, by invoking the legacy subsystem's migrate operation using the CLI management client:
/subsystem=web:migrate
There is also a describe-migration operation that returns a list of all the management operations that are performed to migrate from the legacy subsystem to the new one:
/subsystem=web:describe-migration
Both migrate and describe-migration will also display a list of migration-warnings if there are resources or attributes that cannot be migrated automatically. The following is a list of these warnings:
- Could not migrate resource X
This warning means that the mentioned resource configuration is not supported and won't be included in the new subsystem configuration. As a result, the administrator must be aware that any behaviour implied by those resources will be missing, and has to check whether the subsystem is able to operate correctly without that behaviour on the new server.
FIXME must document which are the resources that trigger this
- Could not migrate attribute X from resource Y.
This warning means that the mentioned resource configuration property is not supported and won't be included in the new subsystem configuration. As a result, the administrator must be aware that any behaviour implied by those properties will be missing, and has to check whether the subsystem is able to operate correctly without that behaviour on the new server.
FIXME must document which are the properties that trigger this
- Could not migrate SSL connector as no SSL config is defined
- Could not migrate verify-client attribute %s to the Undertow equivalent
- Could not migrate verify-client expression %s
- Could not migrate valve X
This warning means that the mentioned valve configuration is not supported and won't be included in the new subsystem configuration. As a result, the administrator must be aware that any behaviour implied by those valves will be missing, and has to check whether the subsystem is able to operate correctly without that behaviour on the new server. This warning may happen for:
- org.apache.catalina.valves.RemoteAddrValve: must have at least one allowed or denied value.
- org.apache.catalina.valves.RemoteHostValve: must have at least one allowed or denied value.
- org.apache.catalina.authenticator.BasicAuthenticator
- org.apache.catalina.authenticator.DigestAuthenticator
- org.apache.catalina.authenticator.FormAuthenticator
- org.apache.catalina.authenticator.SSLAuthenticator
- org.apache.catalina.authenticator.SpnegoAuthenticator
- custom valves
- Could not migrate attribute X from valve Y
This warning means that the mentioned valve configuration property is not supported and won't be included in the new subsystem configuration. As a result, the administrator must be aware that any behaviour implied by those properties will be missing, and has to check whether the subsystem is able to operate correctly without that behaviour on the new server. This warning may happen for:
- org.apache.catalina.valves.AccessLogValve: if you use any of the following parameters: resolveHosts, fileDateFormat, renameOnRotate, encoding, locale, requestAttributesEnabled, buffered.
- org.apache.catalina.valves.ExtendedAccessLogValve: if you use any of the following parameters: resolveHosts, fileDateFormat, renameOnRotate, encoding, locale, requestAttributesEnabled, buffered.
- org.apache.catalina.valves.RemoteIpValve:
  - if remoteIpHeader is defined and isn't set to "x-forwarded-for".
  - if protocolHeader is defined and isn't set to "x-forwarded-proto".
  - if you use the httpServerPort or httpsServerPort parameters.
Also, please note that Undertow doesn't support JBoss Web valves, but some of them may be migrated to Undertow handlers; the JBoss Web subsystem's migrate operation does that too.
Here is a list of those valves and their corresponding Undertow handler:
Valve | Handler
org.apache.catalina.valves.AccessLogValve | io.undertow.server.handlers.accesslog.AccessLogHandler
org.apache.catalina.valves.ExtendedAccessLogValve | io.undertow.server.handlers.accesslog.AccessLogHandler
org.apache.catalina.valves.RequestDumperValve | io.undertow.server.handlers.RequestDumpingHandler
org.apache.catalina.valves.RewriteValve | io.undertow.server.handlers.SetAttributeHandler
org.apache.catalina.valves.RemoteHostValve | io.undertow.server.handlers.AccessControlListHandler
org.apache.catalina.valves.RemoteAddrValve | io.undertow.server.handlers.IPAddressAccessControlHandler
org.apache.catalina.valves.RemoteIpValve | io.undertow.server.handlers.ProxyPeerAddressHandler
org.apache.catalina.valves.StuckThreadDetectionValve | io.undertow.server.handlers.StuckThreadDetectionHandler
org.apache.catalina.valves.CrawlerSessionManagerValve | io.undertow.servlet.handlers.CrawlerSessionManagerHandler
The org.apache.catalina.valves.JDBCAccessLogValve can't be automatically migrated to io.undertow.server.handlers.JDBCLogHandler as the expectations differ.
The migration can be done manually though:
- create the driver module and add the driver to the list of available drivers
- create a datasource pointing to the database where the log entries are going to be stored
- add an expression-filter definition with the following expression: "jdbc-access-log(datasource='datasource-jndi-name')"
For example, the following JDBCAccessLogValve configuration:
<valve name="jdbc" module="org.jboss.as.web" class-name="org.apache.catalina.valves.JDBCAccessLogValve">
<param param-name="driverName" param-value="com.mysql.jdbc.Driver" />
<param param-name="connectionName" param-value="root" />
<param param-name="connectionPassword" param-value="password" />
<param param-name="connectionURL" param-value="jdbc:mysql://localhost:3306/wildfly?zeroDateTimeBehavior=convertToNull" />
<param param-name="format" param-value="combined" />
</valve>
should become:
<subsystem xmlns="urn:jboss:domain:datasources:1.2">
<datasources>
<datasource jndi-name="java:jboss/datasources/accessLogDS" pool-name="accessLogDS" enabled="true" use-java-context="true">
<connection-url>jdbc:mysql://localhost:3306/wildfly?zeroDateTimeBehavior=convertToNull</connection-url>
<driver>mysql</driver>
<security>
<user-name>root</user-name>
<password>password</password>
</security>
</datasource>
...
<drivers>
<driver name="mysql" module="com.mysql">
<driver-class>com.mysql.jdbc.Driver</driver-class>
</driver>
...
</drivers>
</datasources>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:undertow:3.1" default-virtual-host="default-virtual-host" default-servlet-container="myContainer"
default-server="some-server" instance-id="some-id" statistics-enabled="true">
...
<server name="some-server" default-host="other-host" servlet-container="myContainer">
...
<host name="other-host" alias="www.mysite.com, ${prop.value:default-alias}" default-web-module="something.war" disable-console-redirect="true">
<location name="/" handler="welcome-content" />
<filter-ref name="jdbc-access"/>
</host>
</server>
...
<filters>
<expression-filter name="jdbc-access" expression="jdbc-access-log(datasource='java:jboss/datasources/accessLogDS')" />
...
</filters>
</subsystem>
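The expression-filter and its reference on the host can also be added with the management CLI instead of editing the XML offline; a minimal sketch, assuming the server and host names used in the example above (some-server and other-host):
/subsystem=undertow/configuration=filter/expression-filter=jdbc-access:add(expression="jdbc-access-log(datasource='java:jboss/datasources/accessLogDS')")
/subsystem=undertow/server=some-server/host=other-host/filter-ref=jdbc-access:add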
Please note that any custom valve won't be migrated at all and will just be removed from the configuration.
Also, the authentication-related valves are to be replaced by Undertow authentication mechanisms, and this has to be done manually.
FIXME how this last “manual” replacement is done? Need whole process documented and concrete example
h7. WebSockets
In AS7, to use WebSockets, you had to configure the 'http' connector in the web subsystem of the server configuration file to use the NIO2 protocol. The following is an example of the Management CLI command to configure WebSockets in the previous releases.
/subsystem=web/connector=http/:write-attribute(name=protocol,value=org.apache.coyote.http11.Http11NioProtocol)
WebSockets are a requirement of the Java EE 7 specification and the default configuration is included in WildFly. More complex WebSocket configuration is done in the servlet-container of the undertow subsystem of the server configuration file.
You no longer need to configure the server for default WebSocket support.
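For reference, the default servlet-container configuration of the Undertow subsystem enables WebSockets through the websockets element; the following snippet reflects that default (surrounding elements omitted):
<subsystem xmlns="urn:jboss:domain:undertow:3.0">
    <!-- ... -->
    <servlet-container name="default">
        <jsp-config/>
        <websockets/>
    </servlet-container>
    <!-- ... -->
</subsystem>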
FIXME isn’t <websockets /> required for that?
Messaging Subsystem
WildFly JMS support is provided by ActiveMQ Artemis, instead of HornetQ. It's possible to do a migration of the legacy subsystem configuration, and related persisted data.
h7. Messaging Subsystem Configuration
The extension's module org.jboss.as.messaging is replaced by module org.wildfly.extension.messaging-activemq, while the subsystem configuration namespace urn:jboss:domain:messaging:3.0 is replaced by urn:jboss:domain:messaging-activemq:1.0.
h7. Management model
In most cases, an effort was made to keep resource and attribute names as similar as possible to those used in previous releases. The following table lists some of the changes.
HornetQ name | ActiveMQ name
hornetq-server | server
hornetq-serverType | serverType
connectors | connector
discovery-group-name | discovery-group
The management operations invoked on the new messaging-activemq subsystem start with /subsystem=messaging-activemq/server=X, while the legacy messaging subsystem was addressed at /subsystem=messaging/hornetq-server=X.
If the legacy subsystem configuration is present, it may be migrated to the new subsystem by invoking its migrate operation, using the CLI management client:
/subsystem=messaging:migrate
There is also a describe-migration operation that returns a list of all the management operations that are performed to migrate from the legacy subsystem to the new one:
/subsystem=messaging:describe-migration
Both migrate and describe-migration will also display a list of migration-warnings if there are resources or attributes that cannot be migrated automatically. The following is a list of these warnings:
- The migrate operation can not be performed: the server must be in admin-only mode
The migrate operation requires starting the server in admin-only mode, which is done by adding the --admin-only parameter to the server start command, e.g.
./standalone.sh --admin-only
- Can not migrate attribute local-bind-address from resource X. Use instead the socket-binding attribute to configure this broadcast-group.
- Can not migrate attribute local-bind-port from resource X. Use instead the socket-binding attribute to configure this broadcast-group.
- Can not migrate attribute group-address from resource X. Use instead the socket-binding attribute to configure this broadcast-group.
- Can not migrate attribute group-port from resource X. Use instead the socket-binding attribute to configure this broadcast-group.
Broadcast-group resources no longer accept the local-bind-address, local-bind-port, group-address and group-port attributes; they only accept a socket-binding. The warning notifies that resource X has an unsupported attribute. The user will have to set the socket-binding attribute on the resource and ensure it corresponds to a defined socket-binding resource.
- Classes providing the %s are discarded during the migration. To use them in the new messaging-activemq subsystem, you will have to extend the Artemis-based Interceptor.
Messaging interceptor support is significantly different in WildFly 10, and any interceptors configured in the legacy subsystem are discarded during migration. Please refer to the Messaging Interceptors section below to learn how to migrate legacy messaging interceptors.
- Can not migrate the HA configuration of X. Its shared-store and backup attributes holds expressions and it is not possible to determine unambiguously how to create the corresponding ha-policy for the messaging-activemq's server.
If the hornetq-server X's shared-store or backup attribute holds an expression, such as ${xxx}, it is not possible to determine the actual ha-policy of the migrated server. In that case the policy is discarded and the user will have to add the correct ha-policy afterwards (ha-policy is a single resource underneath the messaging-activemq server resource).
- Can not migrate attribute local-bind-address from resource X. Use instead the socket-binding attribute to configure this discovery-group.
- Can not migrate attribute local-bind-port from resource X. Use instead the socket-binding attribute to configure this discovery-group.
- Can not migrate attribute group-address from resource X. Use instead the socket-binding attribute to configure this discovery-group.
- Can not migrate attribute group-port from resource X. Use instead the socket-binding attribute to configure this discovery-group.
Discovery-group resources no longer accept the local-bind-address, local-bind-port, group-address and group-port attributes; they only accept a socket-binding. The warning notifies that resource X has an unsupported attribute. The user will have to set the socket-binding attribute on the resource and ensure it corresponds to a defined socket-binding resource.
- Can not create a legacy-connection-factory based on connection-factory X. It uses a HornetQ in-vm connector that is not compatible with Artemis in-vm connector
The legacy subsystem's remote connection-factory resources are migrated into legacy-connection-factory resources, to allow old EAP 6 clients to connect to EAP 7. However, a connection-factory using an in-vm connector is not migrated, because an in-vm client will be based on EAP 7, not EAP 6. In other words, legacy-connection-factory resources are created only when the connection-factory uses remote connectors, and this warning notifies that the in-vm connection-factory X was not migrated.
- Can not migrate attribute X from resource Y. The attribute uses an expression that can be resolved differently depending on system properties. After migration, this attribute must be added back with an actual value instead of the expression.
This warning appears when the migration logic needs to know the concrete value of attribute X during migration, but that value contains an expression that can't be resolved, so the actual value can not be determined and the attribute is discarded. It happens in several cases, for instance:
- cluster-connection's forward-when-no-consumers: this boolean attribute has been replaced by the message-load-balancing-type attribute (an enum of OFF, STRICT, ON_DEMAND); see the CLI sketch after this list.
- broadcast-group's and discovery-group's jgroups-stack and jgroups-channel attributes: they reference other resources and no longer accept expressions.
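For example, a discarded forward-when-no-consumers value can be reintroduced after migration by setting the replacement attribute on the new subsystem; a minimal sketch, assuming a cluster-connection named my-cluster on the default server (both names are hypothetical) and that ON_DEMAND is the intended behaviour:
/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:write-attribute(name=message-load-balancing-type, value=ON_DEMAND)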
h7. XML Configuration
The XML configuration has changed significantly with the new messaging-activemq subsystem, to provide an XML schema more consistent with other WildFly subsystems.
It is not advised to change the XML configuration of the legacy messaging subsystem to conform to the new messaging-activemq subsystem. Instead, invoke the legacy subsystem migrate operation. This operation will write the XML configuration of the new messaging-activemq subsystem as a part of its execution.
h7. Messaging Interceptors
Messaging interceptors are significantly different in EAP 7, requiring both code and configuration changes by the user. Specifically, the interceptor base Java class is now org.apache.activemq.artemis.api.core.Interceptor, and the user's interceptor implementation classes may now be loaded by any server module. Note that prior to EAP 7, interceptor classes could only be installed by adding them to the HornetQ module, which required the user to change that module's XML descriptor, its module.xml.
With respect to the server XML configuration, the user must now specify the module that loads the interceptors in the new messaging-activemq subsystem XML configuration, e.g.:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
...
<incoming-interceptors>
<class name="org.foo.incoming.myInterceptor" module="org.foo" />
<class name="org.bar.incoming.myOtherInterceptor" module="org.bar" />
</incoming-interceptors>
<outgoing-interceptors>
<class name="org.foo.outgoing.myInterceptor" module="org.foo" />
<class name="org.bar.outgoing.myOtherInterceptor" module="org.bar" />
</outgoing-interceptors>
</server>
</subsystem>
h7. JMS Destinations
In previous releases, JMS destination queues were configured in the <jms-destinations> element under the hornetq-server section of the messaging subsystem.
<jms-destinations>
<jms-queue name="testQueue">
<entry name="queue/test"/>
<entry name="java:jboss/exported/jms/queue/test"/>
</jms-queue>
</jms-destinations>
In WildFly, the JMS destination queue is configured in the default server of the messaging-activemq subsystem.
<jms-queue name="testQueue" entries="queue/test java:jboss/exported/jms/queue/test"/>
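The same queue can also be added with the management CLI, which writes the corresponding entry into the configuration; a minimal sketch, assuming the default server shown above:
/subsystem=messaging-activemq/server=default/jms-queue=testQueue:add(entries=["queue/test", "java:jboss/exported/jms/queue/test"])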
h7. Messaging Logging
The prefix of messaging log messages in WildFly is WFLYMSGAMQ, instead of WFLYMSG.
h7. Messaging Data
The location of the messaging data has been changed in the new messaging-activemq subsystem:
- messagingbindings/ -> activemq/bindings/
- messagingjournal/ -> activemq/journal/
- messaginglargemessages/ -> activemq/largemessages/
- messagingpaging/ -> activemq/paging/
To migrate legacy messaging data, you will have to export the directories used by the legacy messaging subsystem and import them into the new subsystem's server by using its import-journal operation:
/subsystem=messaging-activemq/server=default:import-journal(file=<path to XML dump>)
The XML dump is an XML file generated by the HornetQ XmlDataExporter utility class.
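A rough sketch of the export/import sequence follows. The classpath, directory names and argument order of XmlDataExporter are assumptions that depend on the JBoss EAP 6 / HornetQ version in use; check the class's usage message before running it:
# On the JBoss EAP 6 installation: export the legacy messaging data to an XML dump.
# Classpath and argument order are assumptions; verify them for your HornetQ version.
java -cp "<path-to-hornetq-and-netty-jars>" \
  org.hornetq.core.persistence.impl.journal.XmlDataExporter \
  standalone/data/messagingbindings standalone/data/messagingjournal \
  standalone/data/messagingpaging standalone/data/messaginglargemessages > old-messaging-data.xml
# On the JBoss EAP 7 server: import the dump into the new subsystem's default server.
/subsystem=messaging-activemq/server=default:import-journal(file=/path/to/old-messaging-data.xml)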
Mandatory Subsystem Configuration Updates
The following changes require subsystem configurations to be updated; otherwise JBoss EAP 7 may not work as expected.
Update Infinispan Hibernate Cache Configurations
The Infinispan subsystem functionality includes a cache manager tailored to Hibernate, whose implementation in JBoss EAP 7 was improved and is provided by a new module. When migrating an Infinispan subsystem configuration, verify whether it includes a Hibernate cache manager, and if so ensure that the value of its module attribute is set to the new module's name, org.hibernate.infinispan.
As an example, in the JBoss EAP 6.4 default standalone server configuration, the Infinispan subsystem configuration includes the following Hibernate cache manager:
<subsystem xmlns="urn:jboss:domain:infinispan:1.5">
…
<cache-container name="hibernate" default-cache="local-query" module="org.jboss.as.jpa.hibernate:4">
<local-cache name="entity">
<transaction mode="NON_XA"/>
<eviction strategy="LRU" max-entries="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="local-query">
<transaction mode="NONE"/>
<eviction strategy="LRU" max-entries="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="timestamps">
<transaction mode="NONE"/>
<eviction strategy="NONE"/>
</local-cache>
</cache-container>
…
</subsystem>
After correcting the module attribute, the same configuration migrated to JBoss EAP 7 should be:
<subsystem xmlns="urn:jboss:domain:infinispan:1.5">
…
<cache-container name="hibernate" default-cache="local-query" module="org.hibernate.infinispan">
<local-cache name="entity">
<transaction mode="NON_XA"/>
<eviction strategy="LRU" max-entries="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="local-query">
<transaction mode="NONE"/>
<eviction strategy="LRU" max-entries="10000"/>
<expiration max-idle="100000"/>
</local-cache>
<local-cache name="timestamps">
<transaction mode="NONE"/>
<eviction strategy="NONE"/>
</local-cache>
</cache-container>
…
</subsystem>
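If a running server is being updated rather than editing the XML offline, the same change can be made with a management CLI write-attribute operation; a minimal sketch, assuming the cache container is named hibernate as in the example above:
/subsystem=infinispan/cache-container=hibernate:write-attribute(name=module, value=org.hibernate.infinispan)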
Update EJB Cache and Passivation Store Configurations
The file and clustered passivation store implementations of the JBoss EAP 6 EJB subsystem are deprecated in the JBoss EAP 7 EJB subsystem, and both were replaced with a new unified passivation store. As a consequence of the passivation store changes, all caches using the deprecated passivation stores should also be replaced by a single cache referring to the unified passivation store.
For instance, all JBoss EAP 6 default non-clustered server configurations use the following EJB subsystem configuration, which defines a file passivation store and a passivating cache referring to it:
<subsystem xmlns="urn:jboss:domain:ejb3:1.5">
<session-bean>
…
<stateful default-access-timeout="5000" cache-ref="simple"/>
…
</session-bean>
…
<caches>
…
<cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/>
</caches>
<passivation-stores>
<file-passivation-store name="file"/>
</passivation-stores>
…
</subsystem>
And all JBoss EAP 6 default clustered server configurations use the following EJB subsystem configuration, which defines the file passivation store, referenced by a passivating cache, and a clustered passivation store, referenced by a clustered cache:
<subsystem xmlns="urn:jboss:domain:ejb3:1.5">
<session-bean>
…
<stateful default-access-timeout="5000" cache-ref="simple" clustered-cache-ref="clustered"/>
…
</session-bean>
<caches>
…
<cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/>
<cache name="clustered" passivation-store-ref="infinispan" aliases="StatefulTreeCache"/>
</caches>
<passivation-stores>
<file-passivation-store name="file"/>
<cluster-passivation-store name="infinispan" cache-container="ejb"/>
</passivation-stores>
…
</subsystem>
When migrated to JBoss EAP 7, all of these should instead use a single passivation store and a single cache referring to it, with the merged cache names added as aliases:
<subsystem xmlns="urn:jboss:domain:ejb3:4.0">
<session-bean>
…
<stateful default-access-timeout="5000" cache-ref="simple" passivation-disabled-cache-ref="simple"/>
</session-bean>
<caches>
…
<cache name="distributable" passivation-store-ref="infinispan" aliases="passivating clustered"/>
</caches>
<passivation-stores>
<passivation-store name="infinispan" cache-container="ejb" max-size="10000"/>
</passivation-stores>
</subsystem>
Please note that JBoss EAP 6 non-clustered server configurations may not include the ejb cache container, referenced above by the new unified passivation store, in their Infinispan subsystem configuration(s). In that case, the JBoss EAP 7 default ejb cache container should be added:
<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
…
<cache-container name="ejb" aliases="sfsb" default-cache="passivation" module="org.wildfly.clustering.ejb.infinispan">
<local-cache name="passivation">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store passivation="true" purge="false"/>
</local-cache>
<local-cache name="persistent">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store passivation="false" purge="false"/>
</local-cache>
</cache-container>
…
</subsystem>
Update Clustered Singleton Configuration
The JBoss EAP 6 Infinispan subsystem's clustered singleton functionality has been reworked, and in JBoss EAP 7 it is provided separately by the new singleton subsystem.
In the default JBoss EAP 6 clustered server configurations, the singleton functionality was configured as follows:
<subsystem xmlns="urn:jboss:domain:infinispan:1.5">
<cache-container name="singleton" aliases="cluster ha-partition" default-cache="default">
<transport lock-timeout="60000"/>
<replicated-cache name="default" mode="SYNC" batching="true">
<locking isolation="REPEATABLE_READ"/>
</replicated-cache>
</cache-container>
…
</subsystem>
When migrated to JBoss EAP 7, that configuration should instead be:
<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
<cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
<transport lock-timeout="60000"/>
<replicated-cache name="default" mode="SYNC">
<transaction mode="BATCH"/>
</replicated-cache>
</cache-container>
…
</subsystem>
…
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
<singleton-policies default="default">
<singleton-policy name="default" cache-container="server">
<simple-election-policy/>
</singleton-policy>
</singleton-policies>
</subsystem>