HornetQ has a fully pluggable and highly flexible transport layer and defines its own Service Provider Interface (SPI) to make plugging in a new transport provider relatively straightforward.
In this chapter we'll describe the concepts required for understanding HornetQ transports and where and how they're configured.
One of the most important concepts in HornetQ transports is the acceptor. Let's dive straight in and take a look at an acceptor defined in XML in the configuration file hornetq-configuration.xml.
<acceptors>
   <acceptor name="netty">
      <factory-class>
         org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory
      </factory-class>
      <param key="port" value="5446"/>
   </acceptor>
</acceptors>
Acceptors are always defined inside an acceptors element. One or more acceptors can be defined in the acceptors element; there's no upper limit to the number of acceptors per server.
Each acceptor defines a way in which connections can be made to the HornetQ server.
In the above example we're defining an acceptor that uses Netty to listen for connections on port 5446.
The acceptor element contains a sub-element factory-class; this element defines the factory used to create acceptor instances. In this case we're using Netty to listen for connections, so we use the Netty implementation of an AcceptorFactory to do this. Basically, the factory-class element determines which pluggable transport we're going to use to do the actual listening.
The acceptor element can also be configured with zero or more param sub-elements. Each param element defines a key-value pair. These key-value pairs are used to configure the specific transport; the set of valid keys depends on the transport being used, and the pairs are passed straight through to the underlying transport. Examples of key-value pairs for a particular transport would be, say, the IP address to bind to, or the port to listen on.
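For example, an acceptor could be configured with both a host and a port parameter. This is just an illustrative sketch; the acceptor name and values are hypothetical:

<acceptors>
   <acceptor name="netty-external">
      <factory-class>
         org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory
      </factory-class>
      <param key="host" value="10.0.0.1"/>
      <param key="port" value="5445"/>
   </acceptor>
</acceptors>

Both host and port are valid keys for the Netty transport and are described later in this chapter.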
Whereas acceptors are used on the server to define how we accept connections, connectors are used by a client to define how it connects to a server.
Let's look at a connector defined in our hornetq-configuration.xml
file:
<connectors>
   <connector name="netty">
      <factory-class>
         org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
      </factory-class>
      <param key="port" value="5446"/>
   </connector>
</connectors>
Connectors can be defined inside a connectors element. One or more connectors can be defined in the connectors element; there's no upper limit to the number of connectors per server.
You may ask yourself: if connectors are used by the client to make connections, then why are they defined on the server? There are a couple of reasons for this:
Sometimes the server acts as a client itself when it connects to another server, for example when one server is bridged to another, or when a server takes part in a cluster. In these cases the server needs to know how to connect to other servers. That's defined by connectors (a configuration sketch is shown after the connection factory example below).
If you're using JMS and the server-side JMS service to instantiate JMS ConnectionFactory instances and bind them in JNDI, then when creating the HornetQConnectionFactory it needs to know what server that connection factory will create connections to. That's defined by the connector-ref element in the hornetq-jms.xml file on the server side. Let's take a look at a snippet from a hornetq-jms.xml file that shows a JMS connection factory that references our netty connector defined in our hornetq-configuration.xml file:
<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty"/>
   </connectors>
   <entries>
      <entry name="ConnectionFactory"/>
      <entry name="XAConnectionFactory"/>
   </entries>
</connection-factory>
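Returning to the first reason above, connectors are also referenced when the server itself connects to other servers, for example from a cluster connection. The snippet below is only a sketch: the cluster connection name, address and the remote connector netty-remote are hypothetical, and the exact set of elements varies between HornetQ versions, so see the clustering documentation for the full syntax:

<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty</connector-ref>
      <static-connectors>
         <connector-ref>netty-remote</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>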
How do we configure a core ClientSessionFactory with the information that it needs to connect with a server?
Connectors are also used indirectly when configuring a core ClientSessionFactory to talk directly to a server. In this case there's no need to define such a connector in the server-side configuration; instead we just create the parameters and tell the ClientSessionFactory which connector factory to use.
Here's an example of creating a ClientSessionFactory which will connect directly to the acceptor we defined earlier in this chapter. It uses the standard Netty TCP transport and will try to connect on port 5446 to localhost (the default):
Map<String, Object> connectionParams = new HashMap<String, Object>();
connectionParams.put(org.hornetq.core.remoting.impl.netty.TransportConstants.PORT_PROP_NAME, 5446);

TransportConfiguration transportConfiguration =
   new TransportConfiguration(
      "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory",
      connectionParams);

ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(transportConfiguration);

ClientSessionFactory sessionFactory = locator.createClientSessionFactory();

ClientSession session = sessionFactory.createSession(...);

etc
Similarly, if you're using JMS, you can configure the JMS connection factory directly on the client side without having to define a connector on the server side or define a connection factory in hornetq-jms.xml:
Map<String, Object> connectionParams = new HashMap<String, Object>();
connectionParams.put(org.hornetq.core.remoting.impl.netty.TransportConstants.PORT_PROP_NAME, 5446);

TransportConfiguration transportConfiguration =
   new TransportConfiguration(
      "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory",
      connectionParams);

ConnectionFactory connectionFactory =
   HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration);

Connection jmsConnection = connectionFactory.createConnection();

etc
Out of the box, HornetQ currently uses Netty, a high-performance, low-level network library.
Our Netty transport can be configured in several different ways: to use old (blocking) Java IO or NIO (non-blocking), and to use straightforward TCP sockets, SSL, or tunnelling over HTTP or HTTPS. On top of that, we also provide a servlet transport.
We believe this caters for the vast majority of transport requirements.
Netty TCP is a simple unencrypted transport based on TCP sockets. Netty TCP can be configured to use old blocking Java IO or non-blocking Java NIO. We recommend you use Java NIO on the server side for better scalability with many concurrent connections. However, using old Java IO can sometimes give you better latency than NIO when you're not so worried about supporting many thousands of concurrent connections.
If you're running connections across an untrusted network please bear in mind this transport is unencrypted. You may want to look at the SSL or HTTPS configurations.
With the Netty TCP transport all connections are initiated from the client side. I.e. the server does not initiate any connections to the client. This works well with firewall policies that typically only allow connections to be initiated in one direction.
All the valid Netty transport keys are defined in the class org.hornetq.core.remoting.impl.netty.TransportConstants. Most parameters can be used either with acceptors or connectors; some only work with acceptors. The following parameters can be used to configure Netty for simple TCP (an illustrative acceptor using several of them is shown after this list):
use-nio. If this is true then Java non-blocking NIO will be used. If set to false then old blocking Java IO will be used.
If you require the server to handle many concurrent connections, we highly recommend that you use non-blocking Java NIO. Java NIO does not maintain a thread per connection, so it can scale to many more concurrent connections than old blocking IO. If you don't require the server to handle many concurrent connections, you might get slightly better performance by using old (blocking) IO. The default value for this property is false on both the server side and the client side.
host. This specifies the host name or IP address to connect to (when configuring a connector) or to listen on (when configuring an acceptor). The default value for this property is localhost. When configuring acceptors, multiple hosts or IP addresses can be specified by separating them with commas. It is also possible to specify 0.0.0.0 to accept connections from all the host's network interfaces. It's not valid to specify multiple addresses when specifying the host for a connector; a connector makes a connection to one specific address.
Don't forget to specify a host name or IP address! If you want your server to be able to accept connections from other nodes, you must specify a hostname or IP address at which the acceptor will bind and listen for incoming connections. The default is localhost, which of course is not accessible from remote nodes!
port. This specifies the port to connect to (when configuring a connector) or to listen on (when configuring an acceptor). The default value for this property is 5445.
tcp-no-delay. If this is true then Nagle's algorithm will be disabled. The default value for this property is true.
tcp-send-buffer-size. This parameter determines the size of the TCP send buffer in bytes. The default value for this property is 32768 bytes (32KiB).
TCP buffer sizes should be tuned according to the bandwidth and latency of your network. Here's a good link that explains the theory behind this.
In summary, TCP send/receive buffer sizes should be calculated as:
buffer_size = bandwidth * RTT
Where bandwidth is in bytes per second and network round trip time (RTT) is in seconds. RTT can be easily measured using the ping utility.
For fast networks you may want to increase the buffer sizes from the defaults.
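For example, on a network with a bandwidth of 1 Gbit/s (roughly 125,000,000 bytes per second) and a measured RTT of 2 ms (0.002 seconds), the formula gives buffer_size = 125,000,000 * 0.002 = 250,000 bytes, i.e. roughly 244 KiB, which is well above the 32 KiB default.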
tcp-receive-buffer-size. This parameter determines the size of the TCP receive buffer in bytes. The default value for this property is 32768 bytes (32KiB).
batch-delay. Before writing packets to the transport, HornetQ can be configured to batch up writes for a maximum of batch-delay milliseconds. This can increase overall throughput for very small messages. It does so at the expense of an increase in average latency for message transfer. The default value for this property is 0 ms.
direct-deliver. When a message arrives on the server and is delivered to waiting consumers, by default, the delivery is done on a different thread from the one the message arrived on. This gives the best overall throughput and scalability, especially on multi-core machines. However it also introduces some extra latency due to the extra context switch required. If you want the lowest latency, at the possible expense of some reduction in throughput, make sure direct-deliver is set to true. The default value for this parameter is true. If you are willing to take some small extra hit on latency but want the highest throughput, set this parameter to false.
nio-remoting-threads. When configured to use NIO, HornetQ will, by default, use a number of threads equal to three times the number of cores (or hyper-threads) as reported by Runtime.getRuntime().availableProcessors() for processing incoming packets. If you want to override this value, you can set the number of threads by specifying this parameter. The default value for this parameter is -1, which means use the value from Runtime.getRuntime().availableProcessors() * 3.
cluster-connection. If you define more than one cluster connection, you may specify which cluster connection will be used for topology notifications. This is useful when you are working with multiple network interface cards (NICs) and need to isolate the cluster definitions. The default value is the first cluster connection defined in the main configuration.
stomp-consumer-credits. When consuming messages over STOMP, the server will flow-control the channel as messages are acknowledged. The default value is 10K: the server won't send more than 10K bytes of messages until you acknowledge more messages. Note that if you use auto-ack subscriptions over STOMP, you have to consume as fast as you can on your client, or you may flood the Netty channels, which will lead to OutOfMemory errors.
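To put several of the above parameters together, here's a purely illustrative sketch of an acceptors block: one Netty TCP acceptor tuned for low latency and one STOMP acceptor with a larger flow-control window. The names, ports and values are hypothetical, and the protocol parameter used here to enable STOMP on an acceptor is covered in the STOMP chapter of this manual:

<acceptors>
   <!-- Netty TCP acceptor using NIO, bound to all network interfaces -->
   <acceptor name="netty-low-latency">
      <factory-class>
         org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory
      </factory-class>
      <param key="host" value="0.0.0.0"/>
      <param key="port" value="5445"/>
      <param key="use-nio" value="true"/>
      <param key="tcp-no-delay" value="true"/>
      <param key="batch-delay" value="0"/>
      <param key="direct-deliver" value="true"/>
   </acceptor>
   <!-- STOMP acceptor allowing 20KiB of unacknowledged messages per channel -->
   <acceptor name="stomp-acceptor">
      <factory-class>
         org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory
      </factory-class>
      <param key="protocol" value="stomp"/>
      <param key="port" value="61613"/>
      <param key="stomp-consumer-credits" value="20480"/>
   </acceptor>
</acceptors>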
Netty SSL is similar to the Netty TCP transport but it provides additional security by encrypting TCP connections using the Secure Sockets Layer (SSL).
Please see the examples for a full working example of using Netty SSL.
Netty SSL uses all the same properties as Netty TCP but adds the following additional properties:
ssl-enabled. Must be true to enable SSL.
key-store-path. This is the path to the SSL key store on the client which holds the client certificates.
key-store-password. This is the password for the client certificate key store on the client.
trust-store-path. This is the path to the trusted client certificate store on the server.
trust-store-password. This is the password to the trusted client certificate store on the server.
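As a rough sketch, a client-side connector with SSL enabled might look like the following; the host, port, keystore path and password are placeholders you would replace with your own values:

<connector name="netty-ssl">
   <factory-class>
      org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
   </factory-class>
   <param key="host" value="myhost.example.com"/>
   <param key="port" value="5500"/>
   <param key="ssl-enabled" value="true"/>
   <param key="key-store-path" value="path/to/client.keystore"/>
   <param key="key-store-password" value="keystore password"/>
</connector>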
Netty HTTP tunnels packets over the HTTP protocol. It can be useful in scenarios where firewalls only allow HTTP traffic to pass.
Please see the examples for a full working example of using Netty HTTP.
Netty HTTP uses the same properties as Netty TCP but adds the following additional properties:
http-enabled. Must be true to enable HTTP.
http-client-idle-time. How long a client can be idle before sending an empty HTTP request to keep the connection alive.
http-client-idle-scan-period. How often, in milliseconds, to scan for idle clients.
http-response-time. How long the server can wait before sending an empty HTTP response to keep the connection alive.
http-server-scan-period. How often, in milliseconds, to scan for clients needing responses.
http-requires-session-id. If true the client will wait after the first call to receive a session id. Used when the HTTP connector is connecting to a servlet acceptor (not recommended).
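For illustration, a client-side connector tunnelling over HTTP might be configured along these lines; the host and port are placeholders, and http-enabled is the property described above that turns on HTTP tunnelling:

<connector name="netty-http">
   <factory-class>
      org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
   </factory-class>
   <param key="host" value="myhost.example.com"/>
   <param key="port" value="80"/>
   <param key="http-enabled" value="true"/>
</connector>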
We also provide a Netty servlet transport for use with HornetQ. The servlet transport allows HornetQ traffic to be tunneled over HTTP to a servlet running in a servlet engine which then redirects it to an in-VM HornetQ server.
The servlet transport differs from the Netty HTTP transport in that, with the HTTP transport, HornetQ effectively acts as a web server listening for HTTP traffic on, e.g. port 80 or 8080, whereas with the servlet transport HornetQ traffic is proxied through a servlet engine which may already be serving a web site or other applications. This allows HornetQ to be used where corporate policies may only allow a single web server listening on an HTTP port, and this needs to serve all applications including messaging.
Please see the examples for a full working example of the servlet transport being used.
To configure a servlet engine to work with the Netty servlet transport we need to do the following things:
Deploy the servlet. Here's an example web.xml describing a web application that uses the servlet:
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
         version="2.4">
   <servlet>
      <servlet-name>HornetQServlet</servlet-name>
      <servlet-class>org.jboss.netty.channel.socket.http.HttpTunnelingServlet</servlet-class>
      <init-param>
         <param-name>endpoint</param-name>
         <param-value>local:org.hornetq</param-value>
      </init-param>
      <load-on-startup>1</load-on-startup>
   </servlet>
   <servlet-mapping>
      <servlet-name>HornetQServlet</servlet-name>
      <url-pattern>/HornetQServlet</url-pattern>
   </servlet-mapping>
</web-app>
We also need to add a special Netty invm acceptor to the server-side configuration. Here's a snippet from the hornetq-configuration.xml file showing that acceptor being defined:
<acceptors>
   <acceptor name="netty-invm">
      <factory-class>
         org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory
      </factory-class>
      <param key="use-invm" value="true"/>
      <param key="host" value="org.hornetq"/>
   </acceptor>
</acceptors>
Lastly we need a connector for the client; this again will be configured in the hornetq-configuration.xml file as such:
<connectors>
   <connector name="netty-servlet">
      <factory-class>
         org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
      </factory-class>
      <param key="host" value="localhost"/>
      <param key="port" value="8080"/>
      <param key="use-servlet" value="true"/>
      <param key="servlet-path" value="/messaging/HornetQServlet"/>
   </connector>
</connectors>
Here's a list of the init params and what they are used for:
endpoint - This is the name of the netty acceptor that the servlet will forward its packets to. You can see it matches the name of the host param.
The servlet pattern configured in the web.xml is the path of the URL that is used. The connector param servlet-path on the connector config must match this, using the application context of the web app if there is one.
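For example, if the web application above is deployed under the context /messaging and the servlet's url-pattern is /HornetQServlet, then the connector's servlet-path must be /messaging/HornetQServlet, which is exactly what the connector snippet earlier in this section uses.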
It's also possible to use the servlet transport over SSL. Simply add the following configuration to the connector:
<connector name="netty-servlet">
   <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
   <param key="host" value="localhost"/>
   <param key="port" value="8443"/>
   <param key="use-servlet" value="true"/>
   <param key="servlet-path" value="/messaging/HornetQServlet"/>
   <param key="ssl-enabled" value="true"/>
   <param key="key-store-path" value="path to a keystore"/>
   <param key="key-store-password" value="keystore password"/>
</connector>
You will also have to configure the application server to use a KeyStore. Edit the server.xml file that can be found under server/default/deploy/jbossweb.sar of the Application Server installation and edit the SSL/TLS connector configuration to look like the following:
<Connector protocol="HTTP/1.1" SSLEnabled="true"
           port="8443" address="${jboss.bind.address}"
           scheme="https" secure="true" clientAuth="false"
           keystoreFile="path to a keystore" keystorePass="keystore password"
           sslProtocol="TLS"/>
In both cases you will need to provide a keystore and password. Take a look at the servlet ssl example shipped with HornetQ for more detail.