This chapter tours the core constructs of Netty with simple examples to help you get started quickly. By the end of this chapter, you will be able to write a client and a server on top of Netty right away.

If you prefer a top-down approach to learning, you might want to start from Chapter 2, Architectural Overview, and come back here.

There are only two minimum requirements to run the examples introduced in this chapter: the latest version of Netty and JDK 1.5 or above. The latest version of Netty is available on the project download page. To download the right version of the JDK, please refer to your preferred JDK vendor's web site.

Is that all? To tell the truth, you should find these two are just enough to implement almost any type of protocol. If not, please feel free to contact the Netty project community and let us know what's missing.

Last but not least, please refer to the API reference whenever you want to know more about the classes introduced here. All class names in this document are linked to the online API reference for your convenience. Also, please don't hesitate to contact the Netty project community and let us know if there is any incorrect information, a grammatical error or typo, or if you have a good idea for improving this documentation.
The most simplistic protocol in the world is not 'Hello, World!' but DISCARD. It's a protocol which discards any received data without any response.
To implement the DISCARD protocol, the only thing you need to do is to ignore all received data. Let us start straight from the handler implementation, which handles I/O events generated by Netty.
package org.jboss.netty.example.discard;

@ChannelPipelineCoverage("all")
public class DiscardServerHandler extends SimpleChannelHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Discard the received data silently.
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        // Log the exception and close the connection.
        e.getCause().printStackTrace();

        Channel ch = e.getChannel();
        ch.close();
    }
}
DiscardServerHandler extends SimpleChannelHandler, which provides various event handler methods that you can override. For now, extending SimpleChannelHandler is enough; there is no need to implement the handler interfaces by yourself.

We override the messageReceived handler method with an empty body. This method is called with a MessageEvent, which contains the received data, whenever new data arrives from a client. Ignoring the received data is all it takes to implement the DISCARD protocol.

The exceptionCaught handler method is called with an ExceptionEvent when an exception is raised by an I/O error or by a handler implementation while processing events. In most cases, the caught exception should be logged and its associated channel closed here, although what you do can differ depending on how you want to handle an exceptional situation; for example, you might want to send a response message with an error code before closing the connection.
So far so good. We have implemented the first half of the DISCARD server. What's left now is to write the main method which starts the server with the DiscardServerHandler.
package org.jboss.netty.example.discard;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class DiscardServer {

    public static void main(String[] args) throws Exception {
        // One thread pool for the boss (accept) threads, one for the worker (I/O) threads.
        ChannelFactory factory = new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool());

        ServerBootstrap bootstrap = new ServerBootstrap(factory);

        DiscardServerHandler handler = new DiscardServerHandler();
        ChannelPipeline pipeline = bootstrap.getPipeline();
        pipeline.addLast("handler", handler);

        // The "child." prefix means the option applies to accepted channels.
        bootstrap.setOption("child.tcpNoDelay", true);
        bootstrap.setOption("child.keepAlive", true);

        bootstrap.bind(new InetSocketAddress(8080));
    }
}
ChannelFactory is a factory which creates and manages Channels and their related resources. It processes all I/O requests and performs I/O to generate ChannelEvents. Because this example is a server-side application built on NIO, NioServerSocketChannelFactory is used. Note that it does not create I/O threads by itself; it acquires threads from the thread pools you specify in its constructor, which gives you more control over how threads are managed in your environment.

ServerBootstrap is a helper class that sets up a server. You can set up the server using a Channel directly, but please note that this is a tedious process and you do not need to do that in most cases.

Here, we add the DiscardServerHandler to the default ChannelPipeline. Whenever a new connection is accepted by the server, a new ChannelPipeline is created for the newly accepted Channel, and all the ChannelHandlers added here are added to that new pipeline.

You can also set parameters which are specific to the Channel implementation. We are writing a TCP/IP server, so we are allowed to set socket options such as tcpNoDelay and keepAlive. Please note the "child." prefix on these options; it means they will be applied to the accepted Channels instead of the ServerSocketChannel. To set an option of the ServerSocketChannel itself, you could do the following:

bootstrap.setOption("reuseAddress", true);

We are ready to go now. What's left is to bind to the port and to start the server. Here, we bind to port 8080 on all network interfaces of the machine. You can call the bind method as many times as you want, with different bind addresses.
Congratulations! You've just finished your first server on top of Netty.
Now that we have written our first server, we need to test if it really works. The easiest way to test it is to use the telnet command. For example, you could enter "telnet localhost 8080" in the command line and type something.
However, can we say that the server is working fine? We cannot really know that because it is a discard server. You will not get any response at all. To prove it is really working, let us modify the server to print what it has received.
We already know that MessageEvent is generated whenever data is received and the messageReceived handler method will be invoked. Let us put some code into the messageReceived method of the DiscardServerHandler:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    ChannelBuffer buf = (ChannelBuffer) e.getMessage();
    while (buf.readable()) {
        System.out.println((char) buf.readByte());
    }
}
It is safe to assume that the message type in socket transports is always ChannelBuffer. ChannelBuffer is a fundamental data structure which stores a sequence of bytes in Netty. It is similar to NIO ByteBuffer, but it is easier to use and more flexible. Although it resembles NIO ByteBuffer in many ways, it is highly recommended to refer to the ChannelBuffer API reference whenever you are in doubt.
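To make the difference concrete, here is a small standalone sketch, not taken from the original examples, showing that a ChannelBuffer can be read back immediately after writing because it keeps separate reader and writer indexes; it assumes the Netty 3.x org.jboss.netty.buffer API.

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;

public class ChannelBufferDemo {
    public static void main(String[] args) {
        ChannelBuffer buf = ChannelBuffers.buffer(4); // fixed 4-byte buffer
        buf.writeInt(42);                             // advances only the writer index
        System.out.println(buf.readableBytes());      // prints 4
        System.out.println(buf.readInt());            // prints 42; no flip() call needed
    }
}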
If you run the telnet command again, you will see the server print what it has received.
The full source code of the discard server is located in the org.jboss.netty.example.discard package of the distribution.
So far, we have been consuming data without responding at all. A server, however, is usually supposed to respond to a request. Let us learn how to write a response message to a client by implementing the ECHO protocol, where any received data is sent back.
The only difference from the discard server we have implemented in the previous sections is that it sends the received data back instead of printing the received data out to the console. Therefore, it is enough again to modify the messageReceived method:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    Channel ch = e.getChannel();
    ch.write(e.getMessage());
}
A ChannelEvent object has a reference to its associated Channel. Here, the Channel represents the connection which received the MessageEvent, so we can simply get the Channel and call its write method to send the received message back to the remote peer.
If you run the telnet command again, you will see the server send back whatever you have sent to it.
The full source code of the echo server is located in the org.jboss.netty.example.echo package of the distribution.
The protocol to implement in this section is the TIME protocol. It is different from the previous examples in that it sends a message, which contains a 32-bit integer, without receiving any request, and closes the connection once the message is sent. In this example, you will learn how to construct and send a message, and how to close the connection on completion.
Because we are going to ignore any received data and instead send a message as soon as a connection is established, we cannot use the messageReceived method this time. Instead, we should override the channelConnected method. The following is the implementation:
package org.jboss.netty.example.time;

@ChannelPipelineCoverage("all")
public class TimeServerHandler extends SimpleChannelHandler {

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        Channel ch = e.getChannel();

        // Allocate a 4-byte buffer and write the current time in seconds.
        ChannelBuffer time = ChannelBuffers.buffer(4);
        time.writeInt((int) (System.currentTimeMillis() / 1000));

        // Write the message and close the connection once the write completes.
        ChannelFuture f = ch.write(time);
        f.addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                Channel ch = future.getChannel();
                ch.close();
            }
        });
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
As explained, the channelConnected event is generated when a connection is established, so we write the 32-bit integer which represents the current time in seconds from this handler method.

To send a new message, we need to allocate a new buffer which will contain the message. We are going to write a 32-bit integer, and therefore we need a ChannelBuffer whose capacity is 4 bytes. The ChannelBuffers helper class is used to allocate the new buffer; besides the buffer method, it provides many other useful methods related to ChannelBuffer. On the other hand, it is a good idea to use static imports for ChannelBuffers:

import static org.jboss.netty.buffer.ChannelBuffers.*;
...
ChannelBuffer time = buffer(4);

As usual, we write the constructed message. But wait, where's the flip? Didn't we use to call ByteBuffer.flip() before sending a message in NIO? ChannelBuffer does not have such a method because it has two pointers: one for read operations and one for write operations. The writer index increases when you write something to a ChannelBuffer while the reader index does not change; the reader index and the writer index represent where the message starts and ends respectively. In contrast, an NIO buffer does not provide a clean way to figure out where the message content starts and ends without calling the flip method, and you will be in trouble when you forget to flip, because nothing or incorrect data will be sent.

Another point to note is that the write method returns a ChannelFuture. A ChannelFuture represents an I/O operation which has not yet occurred; because all operations in Netty are asynchronous, any requested operation might not have been performed yet. For example, calling ch.close() right after ch.write(message) might close the connection before the message is sent. Therefore, you need to call the close method after the ChannelFuture returned by the write method notifies you that the write operation has been done.

How do we get notified when the write request is finished then? This is as simple as adding a ChannelFutureListener to the returned ChannelFuture. Here, we create a new anonymous ChannelFutureListener which closes the Channel when the operation is complete. Alternatively, you could simplify the code using a pre-defined listener:

f.addListener(ChannelFutureListener.CLOSE);
Unlike the DISCARD and ECHO servers, we need a client for the TIME protocol because a human cannot translate 32-bit binary data into a date on a calendar. In this section, we discuss how to make sure the server works correctly and learn how to write a client with Netty.
The biggest and only difference between a server and a client in Netty is that a different Bootstrap and ChannelFactory are required. Please take a look at the following code:
package org.jboss.netty.example.time;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class TimeClient {

    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);

        ChannelFactory factory = new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool());

        ClientBootstrap bootstrap = new ClientBootstrap(factory);

        TimeClientHandler handler = new TimeClientHandler();
        bootstrap.getPipeline().addLast("handler", handler);

        bootstrap.setOption("tcpNoDelay", true);
        bootstrap.setOption("keepAlive", true);

        bootstrap.connect(new InetSocketAddress(host, port));
    }
}
NioClientSocketChannelFactory, instead of NioServerSocketChannelFactory, is used to create a client-side Channel.

ClientBootstrap is the client-side counterpart of ServerBootstrap.

Please note that there is no "child." prefix in the option names; a client-side SocketChannel does not have a parent.

We call the connect method instead of the bind method.
As you can see, it is not really different from the server side startup.
What about the ChannelHandler implementation? It should receive a 32-bit integer from the server, translate it into a human readable format, print the translated time, and close the connection:
package org.jboss.netty.example.time;

import java.util.Date;

@ChannelPipelineCoverage("all")
public class TimeClientHandler extends SimpleChannelHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer buf = (ChannelBuffer) e.getMessage();
        long currentTimeMillis = buf.readInt() * 1000L;
        System.out.println(new Date(currentTimeMillis));
        e.getChannel().close();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
It looks very simple and does not look any different from the server-side example. However, this handler will sometimes refuse to work, raising an IndexOutOfBoundsException. We discuss why this happens in the next section.
In a stream-based transport such as TCP/IP, received data is stored into a socket receive buffer. Unfortunately, the buffer of a stream-based transport is not a queue of packets but a queue of bytes. It means, even if you sent two messages as two independent packets, an operating system will not treat them as two messages but as just a bunch of bytes. Therefore, there is no guarantee that what you read is exactly what your remote peer wrote. For example, let us assume that the TCP/IP stack of an operating system has received three packets:
+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+
Because of this general property of a stream-based protocol, there is a high chance that your application will read them in the following fragmented form:
+----+-------+---+---+
| AB | CDEFG | H | I |
+----+-------+---+---+
Therefore, the receiving side, regardless of whether it is the server or the client, should defragment the received data into one or more meaningful frames that can be easily understood by the application logic. In the case of the example above, the received data should be framed like the following:
+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+
Now let us get back to the TIME client example. We have the same problem here. A 32-bit integer is a very small amount of data, and it is not likely to be fragmented often. However, the problem is that it can be fragmented, and the possibility of fragmentation will increase as the traffic increases.
The simplest solution is to create an internal cumulative buffer and wait until all 4 bytes are received into it. The following is the modified TimeClientHandler implementation that fixes the problem:
package org.jboss.netty.example.time;

import static org.jboss.netty.buffer.ChannelBuffers.*;

import java.util.Date;

@ChannelPipelineCoverage("one")
public class TimeClientHandler extends SimpleChannelHandler {

    // Cumulative buffer; this handler is stateful, hence the "one" coverage.
    private final ChannelBuffer buf = dynamicBuffer();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer m = (ChannelBuffer) e.getMessage();
        buf.writeBytes(m);

        if (buf.readableBytes() >= 4) {
            long currentTimeMillis = buf.readInt() * 1000L;
            System.out.println(new Date(currentTimeMillis));
            e.getChannel().close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
This time, "one" was used as the value of the @ChannelPipelineCoverage annotation, because the new TimeClientHandler has to maintain an internal buffer and therefore cannot serve multiple Channels. A handler with state, such as an internal buffer, must be annotated with "one".

A dynamic buffer is a ChannelBuffer which increases its capacity on demand. It is very useful when you do not know the length of the message in advance.

First, all received data is cumulated into buf. Then the handler checks whether buf has enough data, 4 bytes in this example, and proceeds to the actual business logic. Otherwise, Netty will call the messageReceived method again when more data arrives, and eventually all 4 bytes will be cumulated.
There's another place that needs a fix. Do you remember that we added a TimeClientHandler instance to the default ChannelPipeline of the ClientBootstrap? It means the same TimeClientHandler instance is going to handle multiple Channels, and consequently the data will be corrupted. To create a new TimeClientHandler instance per Channel, we have to implement a ChannelPipelineFactory:
package org.jboss.netty.example.time;

public class TimeClientPipelineFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("handler", new TimeClientHandler());
        return pipeline;
    }
}
Now let us replace the following lines of TimeClient:
TimeClientHandler handler = new TimeClientHandler();
bootstrap.getPipeline().addLast("handler", handler);
with the following:
bootstrap.setPipelineFactory(new TimeClientPipelineFactory());
It might look somewhat complicated at first glance, and it is true that we don't need to introduce TimeClientPipelineFactory in this particular case, because TimeClient creates only one connection.
However, as your application gets more and more complex, you will almost always end up writing a ChannelPipelineFactory, which gives you much more flexibility in configuring the pipeline.
Although the first solution has resolved the problem with the TIME client, the modified handler does not look that clean. Imagine a more complicated protocol which is composed of multiple fields, such as a variable-length field. Your ChannelHandler implementation will become unmaintainable very quickly.
As you may have noticed, you can add more than one ChannelHandler to a ChannelPipeline, and therefore you can split one monolithic ChannelHandler into multiple modular ones to reduce the complexity of your application. For example, you could split TimeClientHandler into two handlers: TimeDecoder, which deals with the fragmentation issue, and the initial simple version of TimeClientHandler.
Fortunately, Netty provides an extensible class which helps you write the first one out of the box:
package org.jboss.netty.example.time;

public class TimeDecoder extends FrameDecoder {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null;
        }
        return buffer.readBytes(4);
    }
}
There's no @ChannelPipelineCoverage annotation this time, because FrameDecoder is already annotated with "one"; it maintains internal state and must not be shared across Channels.

FrameDecoder calls the decode method with an internally maintained cumulative buffer whenever new data is received.

If decode returns null, it means there is not enough data yet. FrameDecoder will call decode again when a sufficient amount of data arrives.

If decode returns a non-null object, it means the decoder has decoded a message successfully, and FrameDecoder discards the read part of its internal cumulative buffer. You do not need to decode multiple messages yourself; FrameDecoder keeps calling decode until it returns null.
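With the decoder split out, the client pipeline needs to contain both handlers, decoder first. The guide does not spell this step out here, so the following is only a sketch of how TimeClientPipelineFactory might be updated; the handler names "decoder" and "handler" are arbitrary.

package org.jboss.netty.example.time;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;

public class TimeClientPipelineFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        // The decoder reassembles complete 4-byte frames before they reach the handler.
        pipeline.addLast("decoder", new TimeDecoder());
        pipeline.addLast("handler", new TimeClientHandler());
        return pipeline;
    }
}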
If you are an adventurous person, you might want to try ReplayingDecoder, which simplifies the decoder even more: it pretends that all the required data has already arrived and replays the decode call when it has not, so you can drop the readableBytes() check. You will need to consult the API reference for more information though.
package org.jboss.netty.example.time;

public class TimeDecoder extends ReplayingDecoder<VoidEnum> {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer, VoidEnum state) {
        return buffer.readBytes(4);
    }
}
Additionally, Netty provides out-of-the-box decoders which enable you to implement most protocols very easily and help you avoid ending up with a monolithic, unmaintainable handler implementation. Please refer to the following packages for more detailed examples:
org.jboss.netty.example.factorial for a binary protocol, and
org.jboss.netty.example.telnet for a text line-based protocol.
All the examples we have reviewed so far used a ChannelBuffer as the primary data structure of a protocol message. In this section, we will improve the TIME protocol client and server example to use a POJO instead of a ChannelBuffer.
The advantage of using a POJO in your ChannelHandler is obvious: your handler becomes more maintainable and reusable by separating the code which extracts information from a ChannelBuffer out of the handler. In the TIME client and server examples, we read only one 32-bit integer, and it is not a major issue to use a ChannelBuffer directly. However, you will find the separation necessary as you implement a real-world protocol.
First, let us define a new type called UnixTime.
package org.jboss.netty.example.time;

import java.util.Date;

public class UnixTime {

    private final int value;

    public UnixTime(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    @Override
    public String toString() {
        return new Date(value * 1000L).toString();
    }
}
We can now revise the TimeDecoder to return a UnixTime instead of a ChannelBuffer.
@Override
protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
    if (buffer.readableBytes() < 4) {
        return null;
    }
    return new UnixTime(buffer.readInt());
}
FrameDecoder and ReplayingDecoder allow you to return an object of any type. If they were restricted to returning a ChannelBuffer only, we would have to insert another ChannelHandler which transforms a ChannelBuffer into a UnixTime.
With the updated decoder, the TimeClientHandler does not use a ChannelBuffer anymore:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    UnixTime m = (UnixTime) e.getMessage();
    System.out.println(m);
    e.getChannel().close();
}
Much simpler and more elegant, right? The same technique can be applied on the server side. Let us update the TimeServerHandler first this time:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    UnixTime time = new UnixTime((int) (System.currentTimeMillis() / 1000));
    ChannelFuture f = e.getChannel().write(time);
    f.addListener(ChannelFutureListener.CLOSE);
}
Now, the only missing piece is the ChannelHandler which translates a UnixTime back into a ChannelBuffer. It's much simpler than writing a decoder because there's no need to deal with packet fragmentation and assembly when encoding a message.
package org.jboss.netty.example.time;

import static org.jboss.netty.buffer.ChannelBuffers.*;

@ChannelPipelineCoverage("all")
public class TimeEncoder extends SimpleChannelHandler {

    public void writeRequested(ChannelHandlerContext ctx, MessageEvent e) {
        UnixTime time = (UnixTime) e.getMessage();

        ChannelBuffer buf = buffer(4);
        buf.writeInt(time.getValue());

        Channels.write(ctx, e.getFuture(), buf);
    }
}
The @ChannelPipelineCoverage value is "all" this time, because this encoder is stateless. Actually, most encoders are stateless.

An encoder overrides the writeRequested method to intercept a write request. Please note that the MessageEvent parameter here is the same type as the one in messageReceived, but it is interpreted differently: a ChannelEvent can be either an upstream or a downstream event, depending on the direction in which it flows. For instance, a MessageEvent is an upstream event when received by messageReceived and a downstream event when received by writeRequested.

Once done with transforming a POJO into a ChannelBuffer, you should forward the new buffer to the previous ChannelDownstreamHandler in the ChannelPipeline. The Channels class provides various helper methods which generate and send a ChannelEvent; in this example, Channels.write(...) creates a new MessageEvent and sends it to the previous ChannelDownstreamHandler. On the other hand, it is a good idea to use static imports for Channels as well:

import static org.jboss.netty.channel.Channels.*;
...
write(ctx, e.getFuture(), buf);
The last task left is to insert a TimeEncoder into the ChannelPipeline on the server side, and it is left as a trivial exercise.
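For reference, one possible arrangement is sketched below; the factory class name TimeServerPipelineFactory is not from the original text, and the handler names are arbitrary. The encoder sits closer to the head of the pipeline so that the UnixTime written by TimeServerHandler passes through it on its way downstream.

package org.jboss.netty.example.time;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;

public class TimeServerPipelineFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        // Downstream (write) events flow from the handler towards the head,
        // so the encoder converts UnixTime objects into ChannelBuffers.
        pipeline.addLast("encoder", new TimeEncoder());
        pipeline.addLast("handler", new TimeServerHandler());
        return pipeline;
    }
}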
If you ran the TimeClient, you must have noticed that the application doesn't exit but just keeps running, doing nothing. Looking at the full stack trace, you will also find that a couple of I/O threads are running. To shut down the I/O threads and let the application exit gracefully, you need to release the resources allocated by the ChannelFactory.
The shutdown process of a typical network application is composed of the following three steps:

1. Close all server sockets if there are any,
2. Close all non-server sockets (i.e. client sockets and accepted sockets) if there are any, and
3. Release all resources used by the ChannelFactory.
To apply the three steps above to the TimeClient, TimeClient.main() could shut itself down gracefully by closing its only client connection and releasing all resources used by the ChannelFactory:
package org.jboss.netty.example.time;

public class TimeClient {

    public static void main(String[] args) throws Exception {
        ...
        ChannelFactory factory = ...;
        ClientBootstrap bootstrap = ...;
        ...
        ChannelFuture future = bootstrap.connect(...);
        future.awaitUninterruptibly();
        if (!future.isSuccess()) {
            future.getCause().printStackTrace();
        }
        future.getChannel().getCloseFuture().awaitUninterruptibly();
        factory.releaseExternalResources();
    }
}
The connect method of ClientBootstrap returns a ChannelFuture which notifies you when the connection attempt succeeds or fails. It also has a reference to the Channel associated with the connection attempt.

Wait for the returned ChannelFuture to determine whether the connection attempt was successful or not.

If it failed, we print the cause of the failure to know why it failed; the getCause() method of the ChannelFuture returns the cause if the connection attempt was neither successful nor cancelled.

Now that the connection attempt is over, we need to wait until the connection is closed, by waiting for the closeFuture of the Channel. Every Channel has its own closeFuture, so you can get a notification and perform an action on closure. Even if the connection attempt has failed, the closeFuture will be notified, because the Channel is closed automatically when the connection attempt fails.

All connections have been closed at this point. The only task left is to release the resources being used by the ChannelFactory. It is as simple as calling its releaseExternalResources() method; all resources, including the NIO Selectors and thread pools, will be shut down and terminated automatically.
Shutting down a client was pretty easy, but how about shutting down a server? You need to unbind from the port and close all open accepted connections. To do this, you need a data structure that keeps track of the list of active connections, and it's not a trivial task. Fortunately, there is a solution: ChannelGroup.
ChannelGroup is a special extension of the Java collections API which represents a set of open Channels. If a Channel is added to a ChannelGroup and the added Channel is closed, the closed Channel is removed from its ChannelGroup automatically. You can also perform an operation on all Channels in the same group. For instance, you can close all Channels in a ChannelGroup when you shut down your server.
To keep track of open sockets, you need to modify the TimeServerHandler to add a new open Channel to the global ChannelGroup, TimeServer.allChannels:
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
    TimeServer.allChannels.add(e.getChannel());
}
Yes, ChannelGroup is thread-safe.
Now that the list of all active Channels is maintained automatically, shutting down a server is as easy as shutting down a client:
package org.jboss.netty.example.time;

public class TimeServer {

    static final ChannelGroup allChannels = new DefaultChannelGroup("time-server");

    public static void main(String[] args) throws Exception {
        ...
        ChannelFactory factory = ...;
        ServerBootstrap bootstrap = ...;
        ...
        Channel channel = bootstrap.bind(...);
        allChannels.add(channel);
        waitForShutdownCommand();
        ChannelGroupFuture future = allChannels.close();
        future.awaitUninterruptibly();
        factory.releaseExternalResources();
    }
}
DefaultChannelGroup requires the name of the group as a constructor parameter. The group name is used solely to distinguish one group from the others.

The bind method of ServerBootstrap returns a server-side Channel which is bound to the specified local address; we add it to the group so that it is closed along with everything else.

Any type of Channel can be added to a ChannelGroup, regardless of whether it is server-side, client-side, or accepted. Therefore, you can close the bound Channel together with all the accepted Channels in one shot when the server shuts down.

waitForShutdownCommand() is an imaginary method that waits for a shutdown signal. You could, for example, wait for a message from a privileged client or for the JVM shutdown hook.

You can perform the same operation on all channels in the same ChannelGroup. In this case, we close them all, which means the bound server-side Channel will be unbound and all accepted connections will be closed asynchronously. close() returns a ChannelGroupFuture which notifies you when all the Channels have been closed, and we wait for it before releasing the external resources of the ChannelFactory.
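The guide leaves waitForShutdownCommand() undefined. Purely as a placeholder for experimentation, it could be a static method of TimeServer that simply blocks until a line is read from standard input:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical stand-in for the imaginary waitForShutdownCommand() method.
private static void waitForShutdownCommand() throws IOException {
    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
    System.out.println("Press ENTER to shut down the server.");
    in.readLine();
}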
In this chapter, we had a quick tour of Netty with a demonstration of how to write a fully working network application on top of it. Any further questions you may have will be covered in the upcoming chapters and in revised versions of this chapter. Please also note that the community is always waiting for your questions and ideas to help you and to keep improving Netty based on your feedback.