Eviction refers to the process by which old, relatively unused, or excessively voluminous data can be dropped from the cache, allowing the cache to remain within a memory budget. Generally, applications that use the Second Level Cache should configure eviction, unless only a relatively small amount of reference data is cached. This chapter provides a brief overview of how JBoss Cache eviction works, and then explains how to configure eviction to effectively manage the data stored in a Hibernate Second Level Cache. A basic understanding of JBoss Cache eviction and of concepts like FQNs is assumed; see the JBoss Cache User Guide for more information.
The JBoss Cache eviction process is fairly straightforward. Whenever a node in a cache is read, written to, added, or removed, the cache finds the eviction region (see below) that contains the node and passes an eviction event object to the eviction policy (see below) associated with the region. The eviction policy uses the stream of events it receives to track activity in the region. Periodically, a background thread runs and contacts each region's eviction policy. The policy uses its knowledge of the activity in the region, along with any configuration it was provided at startup, to determine which, if any, cache nodes should be evicted from memory. It then tells the cache to evict those nodes. Evicting a node means dropping it from the cache's in-memory state. The eviction only occurs on that cache instance; there is no cluster-wide eviction.
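The interval at which that background thread wakes up is controlled in the eviction configuration. As a rough sketch (assuming JBoss Cache 2.x configuration syntax; attribute names may differ in other releases):

```xml
<!-- Sketch of the top of an eviction configuration; syntax assumes JBoss Cache 2.x -->
<attribute name="EvictionPolicyConfig">
   <config>
      <!-- The background eviction thread wakes up every 5 seconds -->
      <attribute name="wakeUpIntervalSeconds">5</attribute>
      <!-- Default policy class applied to regions that don't declare their own -->
      <attribute name="policyClass">org.jboss.cache.eviction.LRUPolicy</attribute>
      <!-- Per-region configuration elements go here -->
   </config>
</attribute>
```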
An important point to understand is that eviction proceeds independently on each peer in the cluster, with what gets evicted depending on the activity on that peer. There is no "global eviction" where JBoss Cache removes a piece of data in every peer in the cluster in order to keep memory usage inside a budget. The Hibernate/JBC integration layer may remove some data globally, but that isn't done for the kind of memory management reasons we're discussing in this chapter.
One effect of this is that, even if a cache is configured for replication, enabling eviction means the contents of the cache will differ between peers in the cluster: some peers will have evicted certain data, while others will have evicted different data. What gets evicted is driven by what data is accessed by users on each peer.
Controlling when data is evicted from the cache is a matter of setting up appropriate eviction regions and configuring appropriate eviction policies for each region.
JBoss Cache stores its data in a set of nodes organized in a tree structure. An eviction region is just a portion of the tree to which an eviction policy has been assigned. The name of the region is the FQN of the topmost node in that portion of the tree. An eviction configuration always needs to include a special region named _default_; this region is rooted in the root node of the tree and includes all nodes not covered by any other region.
It's possible to define regions that overlap. In other words, one region can be defined for /a/b/c, and another defined for /a/b/c/d (which is just the d subtree of the /a/b/c sub-tree). The algorithm that assigns eviction events to eviction regions handles scenarios like this consistently by always choosing the first region it encounters. So, if the algorithm needed to decide how to handle an event affecting /a/b/c/d/e, it would start from there and work its way up the tree until it hits the first defined region - in this case /a/b/c/d.
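The region layout described above can be sketched in configuration. This is illustrative only (region names mirror the /a/b/c example, and the syntax assumes JBoss Cache 2.x; values are arbitrary):

```xml
<!-- Hypothetical region layout; syntax assumes JBoss Cache 2.x eviction configuration -->
<config>
   <attribute name="wakeUpIntervalSeconds">5</attribute>
   <attribute name="policyClass">org.jboss.cache.eviction.LRUPolicy</attribute>
   <!-- Required catch-all region, rooted at the root node of the tree -->
   <region name="/_default_">
      <attribute name="maxNodes">5000</attribute>
   </region>
   <!-- Covers /a/b/c and everything below it ... -->
   <region name="/a/b/c">
      <attribute name="maxNodes">1000</attribute>
   </region>
   <!-- ... except the d subtree, which this nested region claims. An event
        for /a/b/c/d/e walks up the tree and matches this region first. -->
   <region name="/a/b/c/d">
      <attribute name="maxNodes">100</attribute>
   </region>
</config>
```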
An Eviction Policy is a class that knows how to handle eviction events to track the activity in its region. It may have a specialized set of configuration properties that give it rules for when a particular node in the region should be evicted. It can then use that configuration and its knowledge of activity in the region to determine what nodes to evict.
JBoss Cache ships with a number of eviction policies. See the JBoss Cache User Guide for a discussion of all of them. Here we are going to focus on just two.
The LRUPolicy evicts nodes that have been Least Recently Used. It has the following configuration parameters:
maxNodes - The maximum number of nodes allowed in this region. 0 denotes no limit. If the region has more nodes than this, the least recently used nodes are evicted until the number of nodes equals this limit.

timeToLiveSeconds - The amount of time (in seconds) a node can go without being read or written to before it should be evicted. 0 denotes no limit. Nodes that exceed this limit are evicted whether or not the maxNodes limit has been breached.

maxAgeSeconds - The lifespan of a node (in seconds), regardless of idle time, before the node is swept away. 0 denotes no limit. Nodes that exceed this limit are evicted whether or not the timeToLiveSeconds limit has been breached.

minTimeToLiveSeconds - The minimum amount of time a node must be allowed to live after being accessed before it can be considered for eviction. 0 denotes that this feature is disabled, which is the default value. Should be set to a value less than timeToLiveSeconds. It is recommended that this be set to a value slightly greater than the maximum amount of time a transaction that affects the region should take to complete. Configuring this is particularly important when optimistic locking is used in conjunction with invalidation.
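Putting the four parameters together, a single LRUPolicy region might be configured as follows. This is a sketch: the region name and values are illustrative, and the syntax assumes JBoss Cache 2.x.

```xml
<!-- Sketch of one LRUPolicy region using the parameters described above -->
<region name="/com/example/entities">
   <!-- Cap the region at 10,000 nodes; the least recently used are evicted first -->
   <attribute name="maxNodes">10000</attribute>
   <!-- Evict nodes idle for more than 20 minutes -->
   <attribute name="timeToLiveSeconds">1200</attribute>
   <!-- Evict nodes older than one day, even if they are still being accessed -->
   <attribute name="maxAgeSeconds">86400</attribute>
   <!-- Give in-flight transactions two minutes before a touched node
        becomes eligible for eviction -->
   <attribute name="minTimeToLiveSeconds">120</attribute>
</region>
```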
The NullEvictionPolicy is a simple policy that very efficiently does ... nothing. It is used to efficiently short-circuit eviction handling for regions where you don't want objects to be evicted (e.g. the timestamps cache, which should never have data evicted). Since the NullEvictionPolicy doesn't actually evict anything, it doesn't take any configuration parameters.
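Assigning the policy to a region is then a one-line affair. In this sketch the region name is hypothetical (it depends on how the Hibernate integration names the timestamps area), and the per-region policyClass attribute assumes JBoss Cache 2.x syntax:

```xml
<!-- Sketch: short-circuit eviction for the timestamps region;
     no child attributes are needed since the policy takes no configuration -->
<region name="/TS" policyClass="org.jboss.cache.eviction.NullEvictionPolicy"/>
```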